EuroSTAR Software Testing Conference 2013 presentation "With Cloud Computing, Who Needs Performance Testing?" by Albert Witteveen.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
With Cloud Computing, Who Needs Performance Testing? - TEST Huddle
With cloud computing we can add more hardware resources on the fly. Considering how expensive load and stress testing can be, why don't we just add more power when needed?
This presentation will explain why, especially in situations where cloud computing is available, load and stress testing often falls short but is still required. It will also show how queuing theory can provide a different approach that allows load and stress testers to add real value. Stakeholders and test managers can use the same theory to get a handle on the coverage and depth of the tests.
Key Takeaways:
- Why performance testing so often fails to accomplish what we want
- Why relying on cloud computing alone is not enough
- How queuing theory can provide a different approach to performance testing
- How queuing theory can help you understand whether the performance tests provide sufficient coverage and depth
www.eurostarconferences.com
www.testhuddle.com
The document discusses the importance of performance testing systems and identifying queuing centers to understand bottlenecks. Some key points:
1. Performance tests often fail to accurately simulate real-world loads, leading to underestimates of the hardware needed in production.
2. Queuing theory can be used to model systems and identify queuing centers where waiting occurs. These centers determine performance and scalability.
3. Identifying queuing centers through testing and monitoring helps assess whether systems can meet requirements, where the risks lie, and how to scale effectively to improve response times (see the sketch after this list).
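To make the queuing-center idea concrete, here is a minimal sketch (not taken from the presentation) using the textbook M/M/1 approximation, in which mean response time is 1/(service rate − arrival rate); it shows why response time explodes as a queuing center approaches saturation, and why extra cloud capacity only helps if it relieves that specific center.

```python
# Minimal M/M/1 queuing sketch (illustrative only, not from the slides).
# A "queuing center" is any resource where requests wait: CPU, disk, a DB pool.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time (waiting + service) for an M/M/1 queue, in seconds."""
    if arrival_rate >= service_rate:
        return float("inf")  # the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 100.0  # this center can complete 100 requests per second
for arrival_rate in (50, 80, 90, 95, 99):
    utilization = arrival_rate / service_rate
    rt_ms = mm1_response_time(arrival_rate, service_rate) * 1000
    print(f"utilization {utilization:.0%}: mean response time {rt_ms:.0f} ms")
# Output grows from 20 ms at 50% utilization to 1000 ms at 99%: the queuing
# center, not total hardware, determines response time and scalability.
```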
#ATAGTR2021 Presentation: "Chaos engineering: Break it to make it" by Anupam Agarwal - Agile Testing Alliance
Interactive session on "Chaos engineering: Break it to make it" by Anupam Agarwal (Nagarro) and Peeyush Girdhar (Cloud / DevOps, Nagarro) at #ATAGTR2021.
#ATAGTR2021 was the 6th Edition of Global Testing Retreat.
The video recording of the session is now available on the following link: https://www.youtube.com/watch?v=4bM4f8xNp2A
To know more about #ATAGTR2021, please visit: https://gtr.agiletestingalliance.org/
The document discusses how distributed systems are designed to handle failures without downtime or significant performance degradation. It describes how systems can be designed using techniques like CRDTs to allow nodes to fail or recover without impacting users. The document also suggests that with large datasets from high-volume systems, statistical models can be used to better monitor system performance, detect outliers, and predict future conditions rather than just settling for slow or inconsistent responses.
This document summarizes the topics covered in the QTP Training Session V, including:
- Conditional statements like If/Then/Else and Case statements for controlling script flow.
- Error handling using recovery scenarios and On Error Resume Next.
- Inserting transaction points to measure execution time.
- Scheduling script execution using a .vbs file and the Windows Task Scheduler.
- Best practices for writing scripts that can be executed by others, like including instructions, comments, and readable formatting.
Site Reliability Engineering (SRE) - Tech Talk by Keet Sugathadasa
When it comes to Site Reliability Engineering (SRE), the resources available online are largely limited to the books published by Google. Those books share useful case studies that help us understand what SRE is and how to apply its concepts, but they do not clearly explain how to build your own SRE team for your organization. The concept of SRE was developed within the walls of Google and later released to the general public as a practice for anyone to follow.
In this presentation I would like to give a brief introduction to SRE and why it is important to any Software Engineering organization. This is based on my experiences and learnings from leading a Site Reliability Engineering team for leading organizations in the US and Norway.
I delivered this presentation as a Tech Talk while working as an Associate Technical Lead at Creative Software, Sri Lanka.
Testability can make our testing lives so much better. But we need to sell it to those who can pay for the changes needed. Find out what they need (delivery, flow, stability, resilience) and how it can be measured, then use the handy examples below!
Advanced A/B Testing at Wix - Aviran Mordo and Sagy Rozman, Wix.com - DevOpsDays Tel Aviv
While A/B testing is a well-known methodology for conducting experiments in production, doing it at large scale brings many challenges at both the organizational and operational levels.
At Wix we have been practicing continuous delivery for over four years. Conducting A/B tests and writing feature toggles is at the core of our development process. Doing so at large scale, however, with over 1000 experiments every month, poses many challenges and affects everyone in the company: developers, product managers, QA, marketing and management.
In this talk we will explain the lifecycle of an experiment, some of the challenges we faced and the effect on our development process.
* How an experiment begins its life
* How an experiment is defined
* How do you let non-technical people control the experiment while preventing mistakes
* How an experiment goes live, and what its lifecycle looks like from beginning to end
* What is the difference between client and server experiments
* How do you preserve the user experience and avoid confusing users
* How does it affect the development process
* How can QA test an environment that changes every 9 minutes
* How can support help users when every user may be part of a different experiment
* How can we tell whether an experiment is causing errors when there are millions of permutations [at least 2^(number of active experiments)] - see the sketch below
* What are the effects of always having multiple experiments on system architecture
* What are the development patterns when working with A/B tests
At Wix we have developed our third-generation experiment system, PETRI, which is (or will be) open sourced and which helps us maintain some order in a chaotic system that keeps changing. We will also explain how PETRI works and the patterns for conducting experiments that have a minimal effect on performance and user experience.
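As a hedged illustration of the permutation problem mentioned in the bullet list above (not Wix's PETRI system itself), the sketch below assigns each user a stable variant per experiment by hashing, and shows how the number of possible experiment combinations grows as 2^n.

```python
import hashlib

# Hypothetical experiment-assignment helper; names and bucket scheme are
# invented for illustration and are not part of PETRI.
def variant(user_id: str, experiment: str, buckets: int = 2) -> int:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets  # with buckets=2: 0 = control, 1 = treatment

active_experiments = ["new-editor", "faster-checkout", "dark-mode"]
user = "user-42"
print({exp: variant(user, exp) for exp in active_experiments})

# With n independent on/off experiments a single user can be in any of 2**n
# combinations, which is why error attribution needs per-experiment tagging
# rather than manual reproduction.
print(2 ** len(active_experiments), "possible combinations for",
      len(active_experiments), "experiments")
```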
Why do we need software testing?
- Find / prevent bugs
- No more nightmares when you deploy to production
- Preserve product quality
End-to-End Test
- Testing the complete functionality of an application
- Real application
Why average response time is not a right measure of your web application's performance - Thoughtworks
This document discusses the limitations of using average response time to measure web application performance and introduces Apdex as a better metric. It explains that average response time can hide outliers, does not indicate the number of users affected by slow responses, and can give misleading impressions when a small number of requests experience long response times. The document provides examples to illustrate these limitations and shows how Apdex addresses them by accounting for satisfied, tolerating, and unacceptable response times to give a more accurate picture of user experience. It promotes using Apdex instead of average response time to correctly evaluate performance and identify issues impacting users.
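As a small numeric illustration of that argument (the data below is invented), the standard Apdex formula with threshold T counts responses at or below T as satisfied, up to 4T as tolerating at half weight, and anything slower as frustrated:

```python
# Illustrative Apdex calculation using the standard formula with threshold T.
def apdex(response_times_s, t=0.5):
    satisfied = sum(1 for r in response_times_s if r <= t)
    tolerating = sum(1 for r in response_times_s if t < r <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times_s)

# Nine fast requests and one very slow one: the average still looks sub-second,
# while Apdex shows that 10% of requests left a user frustrated.
samples = [0.2] * 9 + [8.0]
print(f"average: {sum(samples) / len(samples):.2f} s")  # 0.98 s
print(f"apdex  : {apdex(samples, t=0.5):.2f}")          # 0.90
```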
This document discusses techniques and tools for having coherent discussions about performance in complex systems. It emphasizes making performance relevant and important, developing a performance culture focused on small wins, and using consistent terminology. The document also describes Dapper, Google's distributed tracing infrastructure, how it works, and examples of its use. It advocates for moving away from Thrift and Scribe in favor of other open source alternatives like Zipkin and libmtev that provide distributed tracing functionality with better performance.
SRE stands for Site Reliability Engineering. It originated at Google over a decade ago as a way to ensure their products and services were highly reliable. SRE implements DevOps principles through components like reliability, service level agreements (SLAs), service level objectives (SLOs), service level indicators (SLIs), and error budgets. Reliability is measured through SLOs and SLIs to quantify user experience. Error budgets allow teams to balance new features against reliability by quantifying how much downtime is acceptable. SRE aims to reduce "toil", or unnecessary repetitive manual work, through automation.
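The error-budget idea reduces to simple arithmetic; the following sketch (with invented numbers) shows how an SLO translates into allowed downtime over a 30-day window.

```python
# Error-budget arithmetic (illustrative numbers).
slo = 0.999                              # 99.9% availability objective
period_minutes = 30 * 24 * 60            # 30-day window
budget_minutes = (1 - slo) * period_minutes
print(f"Allowed downtime: {budget_minutes:.1f} minutes per 30 days")  # 43.2

# If a bad release consumed 10 minutes of downtime, the remaining budget tells
# the team how much risk (e.g. further releases) they can still afford to take.
consumed_minutes = 10
print(f"Remaining budget: {budget_minutes - consumed_minutes:.1f} minutes")
```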
JDD 2016 - Jedrzej Dabrowa - Distributed System Fault Injection Testing With ... - PROIDEA
Having more than a hundred loosely coupled microservices leads to a big challenge when it comes to resiliency testing. In a probabilistic system, failure is inevitable. With the help of Docker and the environment around it, we've built a framework which allowed us to test core components of Base for network issues, partitions, etc. Learn how you can build it and sleep well without system outages.
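As a hedged sketch of this style of fault injection (not the framework described in the talk), a test can freeze a dependency's container with the standard `docker pause` / `docker unpause` commands and assert that the system under test degrades gracefully; the container name and check function below are placeholders.

```python
import subprocess
import time

# Hypothetical fault-injection helper: freeze a dependency, run a check
# against the system under test, then unfreeze the dependency.
def with_paused_container(container: str, check):
    subprocess.run(["docker", "pause", container], check=True)
    try:
        time.sleep(2)      # give the simulated partition time to take effect
        return check()     # e.g. call the service and assert a fallback response
    finally:
        subprocess.run(["docker", "unpause", container], check=True)

# Usage (assumes a container named "payments-db" and a probe function exist):
# ok = with_paused_container("payments-db", probe_checkout_endpoint)
```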
Rajesh Mathur - Testing in a Challenging Environment - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Testing in a Challenging Environment by Rajesh Mathur.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Graham Freeburn - What Makes a Good Tester - EuroSTAR 2013 - TEST Huddle
This document outlines many qualities that make a good tester, including strong analytical and critical thinking skills, attention to detail, effective communication skills, adaptability, creativity in test design, ongoing learning, and a passion for quality. It discusses testers needing intelligence, curiosity, problem-solving abilities, and technical expertise as well as virtues like courage, persistence, and empathy. The document is meant to start a discussion on defining what makes a good tester and get feedback to improve the model.
Alexandra Casapu - Fooled by Unknown Unknowns, A Success Story - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Fooled by Unknown Unknowns, A Success Story by Alexandra Casapu.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Pradeep Soundararajan - Testing for Sales and Competitor Analysis - EuroSTAR ... - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Testing for Sales and Competitor Analysis by Pradeep Soundararajan.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Jackie McDougall - Testing on Trial - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Testing on Trial by Jackie McDougall.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Jouri Dufour - How About Security Testing - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on How About Security Testing by Jouri Dufour.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Paul Holland - How To Organise a Peer Conference - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on How To Organise a Peer Conference by Paul Holland.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Zeger Van Hese - Testing in the Age of Distraction, The Importance of (De)foc... - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Testing in the Age of Distraction, The Importance of (De)focus Testing by Zeger Van Hese.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Soren Lynggaard & Pusser Janvit - How To Hire A True Tester - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on How to Hire a True Tester by Soren Lynggaard & Pusser Janvit.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Tony Bruce - One More Question.... - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on One More Question.... by Tony Bruce.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Andy Glover - Testing is evolving, but where is the evidence - EuroSTAR 2012 - TEST Huddle
The document discusses how software testing is evolving and the need for evidence in highly regulated environments. It notes that testers' mindsets need to change from just passing tests to documentation and validation. Both informal and formal testing processes are discussed, and it is suggested to manage informal testing through sessions and add more variety like automation and collaboration. The challenges of transitioning to agile methodologies are also presented. Overall, it argues for a balanced, evidence-based approach to testing through various techniques and an emphasis on continuous learning.
Markus Gartner - Beyond Testing - EuroSTAR 2012 - TEST Huddle
EuroSTAR Software Testing Conference 2012 presentation on Beyond Testing by Markus Gartner. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
EuroSTAR Software Testing Conference 2013 presentation on Readable, Executable Requirements: Hands-On by Emily Bache.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Alexandra Schladebeck - What Agile Teams Can Learn From World of Warcraft - E... - TEST Huddle
This document discusses lessons that agile teams can learn from the popular online game World of Warcraft (WoW). It provides an overview of WoW, describing how characters are created with different races, classes, skills and equipment. It then outlines parallels between WoW gameplay and agile practices, such as assigning roles, forming collaborative teams, breaking work into granular tasks, and continually improving skills over time. Finally, it proposes several specific lessons for agile teams, such as making help easier to access, providing rewards for assistance, fostering trust and shared goals within teams.
Iain McCowatt - Automation Time to Change Our Models - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Automation Time to Change Our Models by Iain McCowatt.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Bob Harnisch & Tim Koomen - Mixing Waterfall, Agile & Outsourcing at Dutch Rail - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on Mixing Waterfall, Agile & Outsourcing at Dutch Rail by Bob Harnisch.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Remi Hansen - Test Strategies Are 90% Waste - EuroSTAR 2013 - TEST Huddle
- The document discusses anti-patterns in test strategies that waste time, such as overly long documents following templates exactly and writing for the wrong audience.
- It recommends that test strategies be concise and focus on communicating the most important choices to management to gain support, rather than documenting all details.
- Key elements to include on just a few slides are the objectives, types of testing, roles, and resources needed; more details belong in test plans rather than strategies.
Jeanne Hofmans & Eduard Hartog - How to Test a Tunnel - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on How to Test a Tunnel by Jeanne Hofmans & Eduard Hartog.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Geoff & Emily Bache - Specification By Example With GUI Tests - How Could That ... - TEST Huddle
This document discusses using specification by example (SBE) to test rich client GUI applications. It describes using a tool called TextTest that allows writing tests using a domain language and automatically records the GUI interactions and assertions. Tests in TextTest have two parts - a use case section describing actions in domain language terms, and an automatically generated GUI log section capturing screen contents. This allows testing applications by their specifications before code is written and preserves requirements as living documentation through automated regression tests.
Julian Harty - Open Sourcing Testing - EuroSTAR 2012 - TEST Huddle
EuroSTAR Software Testing Conference 2012 presentation on Open Sourcing Testing by Julian Harty. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
DevOps aims to bridge the gap between development and operations by fostering collaboration. Key aspects of DevOps include establishing a collaborative culture through open communication and engagement between teams, automating processes like builds, deployments, testing and system configuration, and implementing monitoring of applications and infrastructure through metrics and logging to ensure stability and enable issues to be quickly identified and addressed. Tools like Puppet, Munin, Graphite, Logstash and Graylog can help operationalize these aspects of DevOps.
Prometheus is a next-generation monitoring system. It lets you see not just what your systems look like from the outside, but also gives visibility into the internals and business aspects of your systems. This allows everyone to benefit, including both operations and developers. This talk will look at the concepts behind monitoring with Prometheus, how it's designed, why it's suitable for Cloud Native environments and how you can get involved.
Eric Proegler - Early Performance Testing, from CAST 2014 - Eric Proegler
Development and deployment contexts have changed considerably over the last decade. The discipline of performance testing has had difficulty keeping up with modern testing principles and software development and deployment processes.
Most people still see performance testing as a single experiment, run against a completely assembled, code-frozen, production-resourced system, with the "accuracy" of simulation and environment considered critical to the value of the data the test provides.
But what can we do to provide actionable and timely information about performance and reliability when the software is not complete, when the system is not yet assembled, or when the software will be deployed in more than one environment?
Eric deconstructs “realism” in performance simulation, talks about performance testing more cheaply so we can test more often, and suggests strategies and techniques to get there. He will share findings from WOPR22, where performance testers from around the world came together in May 2014 to discuss this theme in a peer workshop.
This document provides an overview of building cloud-ready applications in .NET. It defines what makes an application cloud-ready, discusses common issues with legacy applications, and recommends design patterns and practices to address these issues, including loose coupling, high cohesion, messaging, service discovery, API gateways, and resiliency policies. It includes code examples and links to additional resources.
An Introduction to Prometheus (GrafanaCon 2016) - Brian Brazil
Often what you monitor and get alerted on is defined by your tools rather than by what makes the most sense to you and your organisation. Alerts on metrics such as CPU usage are noisy and rarely spot real problems, while outages go undetected. Monitoring systems can also be challenging to maintain, and overall provide a poor return on investment.
In the past few years several new monitoring systems have appeared with more powerful semantics and which are easier to run, which offer a way to vastly improve how your organisation operates and prepare you for a Cloud Native environment. Prometheus is one such system. This talk will look at the monitoring ideal and how whitebox monitoring with a time series database, multi-dimensional labels and a powerful querying/alerting language can free you from midnight pages.
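As a hedged example of the whitebox approach described above, the official Python client (`prometheus_client`) can expose application metrics for Prometheus to scrape; the metric names and port below are invented for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative instrumentation; metric names are examples, not a standard.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_duration_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request():
    time.sleep(random.uniform(0.01, 0.2))   # simulated work
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

A query such as `rate(app_requests_total[5m])` then gives the request rate over time, which is a far more meaningful alerting signal than raw CPU usage.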
Scaling for Success: Lessons from handling peak loads on Azure with NServiceBus - Particular Software
What happens when 200k users unexpectedly decide to use your platform simultaneously? We’re using autoscale on Azure PaaS so surely we can handle that, right? Wrong! Ask me how I found out… After going through a bit of trouble, I want to help you avoid the same mistakes I made.
Lessons from DevOps: Taking DevOps practices into your AppSec Life - Matt Tesauro
Bruce Lee once said “Don’t get set into one form, adapt it and build your own, and let it grow, be like water“.
AppSec needs to look beyond itself for answers to solving problems, since we live in a world of ever-increasing numbers of apps. Technology and apps have invaded our lives, so how do you lead a security counter-insurgency? One way is to look at the key tenets of DevOps and apply those that make sense to your approach to AppSec. Something has to change, as the application landscape is already changing around us.
Sql azure cluster dashboard public.ppt - Qingsong Yao
This document discusses building a centralized dashboard to monitor SQL Azure clusters in real-time. Key points:
- The goal was to provide a single place to view cluster status and detect issues early through telemetry data analysis.
- Lessons included choosing efficient data techniques, building resilience into the data pipeline, and monitoring pipeline performance.
- The dashboard helped transition monitoring from reactive to proactive by enabling new alert detection based on real-time trend analysis across clusters.
The document summarizes a performance analysis study of a large enterprise application called Zanzibar. The researchers initially struggled to get the application to scale even on modest hardware. Through analyzing javacores, they identified and addressed multiple bottlenecks, including lock contention, disk throughput issues, and inefficient Java code. These changes resulted in performance improvements of 1.4-5x. The researchers also developed the WAIT performance tool to help identify primary bottlenecks in deployed applications with low overhead. The tool uses standard OS and JVM data and has seen widespread adoption due to its ease of use.
Continuous Deployment involves shipping code as frequently as possible, even multiple times per day. It allows for smaller changes with less risk, faster feedback, and a competitive advantage. To achieve this, companies optimize their deployment process, automate testing and deployments, and measure everything to learn and improve continuously. This approach is enabled by technologies like cloud computing and embraced by companies like Google, Amazon, and Facebook.
This document provides an introduction to chaos engineering, including:
- Defining chaos engineering as experimenting on distributed systems to build confidence in withstanding turbulent conditions.
- Outlining the brief history of chaos engineering from 2010-2018.
- Describing the methodology, which involves forming hypotheses, testing ideas through experiments, analyzing results, and repeating (see the sketch after this list).
- Explaining how to start chaos engineering "in the wild" through basic steps and increasing levels of experimentation.
- Highlighting valuable outcomes like avoiding downtime and increasing productivity.
- Addressing common myths around chaos engineering.
- Providing additional resources for learning more.
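A minimal sketch of the hypothesis-driven loop outlined in the list above (form a steady-state hypothesis, inject a fault, observe, analyze, repeat); the health endpoint and the fault hooks are placeholders, not part of the original talk.

```python
import statistics
import time
import urllib.request

SERVICE_URL = "http://localhost:8080/health"   # placeholder endpoint

def measure_latency(samples: int = 20) -> float:
    """Steady-state metric: median health-check latency in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(SERVICE_URL, timeout=2):
            pass
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def run_experiment(inject_fault, remove_fault, tolerance: float = 0.25):
    baseline = measure_latency()        # 1. hypothesis: latency stays near baseline
    inject_fault()                      # 2. inject turbulence (kill a node, add latency, ...)
    try:
        disturbed = measure_latency()   # 3. observe the disturbed system
    finally:
        remove_fault()
    # 4. analyze: did the system stay within the hypothesised bounds?
    return disturbed <= baseline * (1 + tolerance), baseline, disturbed
```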
Provisioning and Capacity Planning Workshop (Dogpatch Labs, September 2015) - Brian Brazil
Brian Brazil, an engineer passionate about running software reliably in production, gave a workshop on provisioning and capacity planning. He taught attendees how to estimate spare capacity and runway by measuring the bottleneck resource, calculating utilization, and determining peak traffic. Brian also covered how to provision new machines based on queries per second per machine. While acknowledging real-world complexities, he emphasized the importance of monitoring for making operational decisions.
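The provisioning arithmetic described above can be reduced to a few lines; the numbers here are invented purely to illustrate the bottleneck-utilization-headroom reasoning.

```python
import math

# Illustrative capacity-planning arithmetic (all numbers are made up).
peak_qps = 12_000          # observed or projected peak traffic
qps_per_machine = 800      # measured capacity of one machine at the bottleneck resource
redundancy = 2             # machines we must be able to lose (e.g. one zone down)
headroom = 1.3             # 30% safety margin for growth and variance

machines_needed = math.ceil(peak_qps * headroom / qps_per_machine) + redundancy
current_machines = 25
peak_utilization = peak_qps / (current_machines * qps_per_machine)

print(f"Machines needed: {machines_needed}")                 # ceil(19.5) + 2 = 22
print(f"Current peak utilization: {peak_utilization:.0%}")   # 60% -> remaining runway
```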
Integration and Systems Test - normanibarber20063
Integration and Systems Test/Assignment Instructions.docx
Assignment Integration and System Tests
In this assignment you will develop an iteration test plan for the CPPS Case Study.
Instructions
Complete the following for the CPPS Case Study and use a Word file.
1. Develop an iteration test plan (one that applies to and can be used within a subsystem iteration mini-project).
2. Discuss which types of testing (as identified in Chapter 13) you would include and why.
3. Estimate how much time will be needed for each type of test.
4. Discuss what types of testing might be combined or scheduled with an overlap.
Submission Instructions
1. Submit your assignment in Word file and name it like LastNameFirstNameAssignment.
2. Make certain that you include the above questions with the answers in your document. Clearly identify the questions and answers.
3. Include your name and assignment number at the top of your Word document.
4. Insert any graphics into your Word document. Do not submit graphics separately.
Your assignment will be graded with the following rubric:
Rubric for Assignments (Points)
- Content & Development (50%): 50/50
- Organization (20%): 20/20
- Format (10%): 10/10
- Grammar, Punctuation, & Spelling (15%): 15/15
- Readability & Style (5%): 5/5
- Timeliness (late deduction 10 points): Optional
- Total: 100/100
Integration and Systems Test/Chapter 13.pdf
13 Making the System Operational
Chapter Outline
▪ Testing
▪ Deployment Activities
▪ Planning and Managing Implementation, Testing, and Deployment
▪ Putting It All Together—RMO Revisited
Learning Objectives
After reading this chapter, you should be able to:
▪ Describe implementation and deployment activities
▪ Describe various types of software tests and explain how and why each is used
▪ Explain the importance of configuration management, change management, and source code control to the implementation, testing, and deployment of a system
▪ List various approaches to data conversion and system deployment and describe the advantages and disadvantages of each
▪ Describe training and user support requirements for new and operational systems
OPENING CASE: Tri-State Heating Oil: Juggling Priorities to Begin Operation
It was 8:30 on Monday morning, and Maria Grasso, Kim Song, Dave Williams, and Rajiv Gupta were about to begin the weekly project status meeting. Tri-State Heating Oil had started developing a new scheduling system for customer orders and service calls five months earlier. The target completion date was 10 weeks away, but the project was behind schedule. Early project iterations had accomplished far less than anticipated because key users had disagreed on what new system requirements to include and the system scope was larger than expected.
Maria began the meeting. “We've gained a day or two since our last.
Test execution is the process of executing the code and comparing the expected and actual results. The following factors need to be considered for a test execution process: based on risk, select a subset of the test suite to be executed for this cycle, and assign the test cases in each test suite to testers for execution.
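A minimal sketch of the risk-based selection and assignment just described; the test names, risk scores and tester names are placeholders.

```python
from itertools import cycle

# Hypothetical test inventory: (test case, risk score 1-10).
test_suite = [
    ("login", 9), ("checkout", 10), ("search", 6),
    ("profile_edit", 4), ("help_page", 2), ("export_csv", 5),
]
testers = ["asha", "ben", "chen"]

# 1. Select the subset to run this cycle based on risk.
risk_threshold = 5
selected = sorted((t for t in test_suite if t[1] >= risk_threshold),
                  key=lambda t: t[1], reverse=True)

# 2. Assign the selected cases to testers round-robin.
assignments = {name: [] for name in testers}
for (case, _), name in zip(selected, cycle(testers)):
    assignments[name].append(case)

print(assignments)
# During execution, the expected and actual results of each assigned case are
# compared and recorded as pass/fail.
```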
Performance doesn't have the same definition for system administrators, developers and business teams. What is performance? High CPU usage, a web site that doesn't scale, a low business transaction rate per second, slow response times, ... This presentation is about maths, code performance, load testing, web performance, best practices, ... Working on performance optimization is a very broad topic. It's important to really understand the main concepts and to have a clean and strong methodology, because it can be a very time-consuming activity. Happy reading!
You are already the Duke of DevOps: you have a master's in CI/CD, some feature teams including ops skills, and your TTM rocks! But you have some difficulties scaling it. You have some quality issues and QoS is at risk. You are quick to adopt practices that increase flexibility of development and velocity of deployment. An urgent question follows on the heels of these benefits: how much confidence can we have in the complex systems that we put into production? Let's talk about the next hype of DevOps: SRE, error budgets, continuous quality, observability, Chaos Engineering.
Application Performance Troubleshooting 1x1 - Part 2 - Noch mehr Schweine und... - rschuppe
Application Performance doesn't come easy. How to find the root cause of performance issues in modern and complex applications? All you have is a complaining user to start with?
In this presentation (mainly in German, but understandable for English speakers) I reprise the fundamentals of troubleshooting and give some new examples of how to tackle issues.
Follow up presentation to "Performance Trouble Shooting 101 - Schweine, Schlangen und Papierschnitte"
S.R.E - create ultra-scalable and highly reliable systems - Ricardo Amaro
Site Reliability Engineering enables agility and stability.
SREs use software engineering to automate themselves out of the job.
My advice, if you want to implement this change in your company, is to start with action items, alter your training and hiring, implement error budgets, do blameless postmortems and reduce toil.
https://events.drupal.org/dublin2016/sessions/sre-create-ultra-scalable-and-highly-reliable-systems
Your data is in Prometheus, now what? (CurrencyFair Engineering Meetup, 2016) - Brian Brazil
Prometheus is a next-generation monitoring system with a time series database at its core. Once you have a time series database, what do you do with it, though? This talk will look at getting data in and, more importantly, how to use the data you collect productively.
Contact us at prometheus@robustperception.io
Similar to Albert Witteveen - With Cloud Computing, Who Needs Performance Testing
Why We Need Diversity in Testing - Accenture - TEST Huddle
In this webinar Rasa (Testing capability lead for Denmark) and Matthias (EALA Testing capability lead) will share some of their own experiences of why diversity matters, give insights into how Accenture as a global firm is promoting diversity, and explain how we are in the process of changing our attitudes and processes to make all of this sustainable.
Keys to continuous testing for faster delivery - EuroSTAR webinar - TEST Huddle
Your business needs to deliver faster. To accommodate, Development needs to introduce fewer changes but in a much more frequent cadence. This creates a challenge for test teams to keep up with the rapid pace of change without compromising on quality. Automation is paramount to the success or failure of Continuous Delivery, and Continuous Testing enables early and frequent quality feedback throughout the CI/CD pipeline.
In this webinar, Eran & Ayal will explore how to implement Continuous Testing to ensure high quality releases in a Continuous Delivery environment; including what to test and when to automate new functionality in order to optimize your efforts.
Why You Shouldn't Automate, But You Will Anyway - TEST Huddle
The document discusses automation in software testing. It begins by outlining common claims made about the benefits of automation, such as saving time and improving quality, but argues that these claims often don't hold true. Automation does not inherently save time, guarantee quality, or reduce resources needed. It also does not always save money when development, maintenance, and infrastructure costs are considered. The document provides a formula for determining when automation is worthwhile based on how many times a test case would need to be rerun manually. It concludes by acknowledging that, despite these drawbacks, organizations will still automate testing because it is exciting, managers demand it, and it benefits careers.
In this webinar Carsten will explore the role of the tester in a Scrum team. He will examine where the tester play an important role in Scrum and how you can contribute to a teams performance.
Leveraging Visual Testing with Your Functional Tests - TEST Huddle
Designing and implementing (or selecting) the right automation strategy for functional testing, combined with visual testing, can give your project greater test coverage while improving test scalability.
Big Data: The Magic to Attain New Heights - TEST Huddle
This document discusses how big data and data science can be used to attain new heights, likening it to magic. It provides an overview of Ken Johnston's background and experiences in data science. It then discusses six keys to a "big" magic show with big data: trying multiple times, addressing issues with over-counting, experimentation techniques like A/B testing, infrastructure for big data, tools and skills, and security, privacy and fraud protection. The document emphasizes the importance of an assistant to help the data scientist or data engineer with various tasks.
This talk suggests how we might make sense of the tools landscape of the near future, where the pressure to modernise processes and automate is greatest, and what a new test process supported by tools might look like.
Takeaways:
- We need to take machine learning in testing seriously, but it won’t be taking our jobs just yet
- We don’t need more test automation tools; today we need tools that capture tester knowledge
- Tools that learn and think can't work for testers until we solve the knowledge capture challenge.
View On-Demand Webinar: https://youtu.be/EzyUdJFuzlE
The document discusses Test Driven Development (TDD) and Test Driven Design. It uses the analogy of building a lightsaber and later a Death Star to illustrate the TDD process and benefits. Some benefits mentioned are better test coverage, less debugging, and better design. The document provides tips for practicing TDD including planning ahead, defining boundaries, taking small steps to pass each test, and maintaining discipline. It emphasizes trying TDD in a team and considering Behavior Driven Development (BDD) as well.
Scaling Agile with LeSS (Large Scale Scrum) - TEST Huddle
In this webinar, Elad will cover the principles that the #LeSS framework has to offer in order to enable big organisations to become agile.
View webinar recording - https://huddle.eurostarsoftwaretesting.com/resource/agile-testing/scaling-agile-less-large-scale-scrum/
Creating Agile Test Strategies for Larger Enterprises - TEST Huddle
Having difficulty creating an agile test strategy for your company? Let Testing Excellence Award winner, Derk-Jan de Grood, show you how it’s done
View webinar recording here - http://huddle.eurostarsoftwaretesting.com/resource/agile-testing/creating-agile-test-strategies-larger-enterprises/
3 key takeaways
- Do you know the meaning of your organisation, system, product?
- Can you deliver the important risks right away?
- How can you communicate about the (process and product) risks you're dealing with?
View Webinar recording: https://huddle.eurostarsoftwaretesting.com/resource/test-management/is-there-a-risk/
Are Your Tests Well-Travelled? Thoughts About Test Coverage - TEST Huddle
This document summarizes a presentation on test coverage given by Dorothy Graham. It uses an analogy of travel to different locations to explain what test coverage means and some caveats. Coverage refers to the relationship between tests and the parts of a system being tested, but achieving 100% coverage does not mean everything is tested. There are four caveats discussed: coverage only measures one aspect of testing, a single test can achieve coverage, coverage does not indicate quality, and it only applies to the existing system not missing pieces. The key recommendation is to ask "coverage of what?" when the term is used rather than assuming more coverage is always better.
Growing a Company Test Community: Roles and Paths for Testers - TEST Huddle
Over the past three years, our company’s test team has grown from three lonesome testers to a community of nine – with more planned. Since we don’t see testers as “click monkeys”, but as valuable and integrated project members who bring a specific skill set to the table, it’s important for us to choose testers well and to train them in various areas so that they can contribute, grow and see their own career path within testing.
To structure to our internal tester training program, we have been developing role descriptions, education paths and career options for our testers, which I’d like to share with you in this webinar.
View webinar - https://huddle.eurostarsoftwaretesting.com/resource/webinar/growing-company-test-community-roles-paths-testers/
It’s the same argument again and again. One side says “team members should all be able to do everything, and the programmers should do their testing and all testers should be writing code”. The other side says “No, that can’t possibly work – programmers don’t know how to test, they don’t have the right mindset”. And on and on it goes.
http://huddle.eurostarsoftwaretesting.com/resource/webinar/need-testers-agile-teams/
In this webinar, Dave Haeffner (Elemental Selenium, USA) discusses how to:
- Build an integrated feedback loop to automate test runs and find issues fast
- Set up your own infrastructure or connect to a cloud provider
- Dramatically improve test times with parallelization (see the sketch below)
https://huddle.eurostarsoftwaretesting.com/resource/webinar/use-selenium-successfully/
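As a hedged illustration of such a feedback loop, a single Selenium check in Python might look like the sketch below; the URL and page title are placeholders, a local ChromeDriver is assumed, and parallelization can come from a runner such as pytest-xdist (`pytest -n auto`) or a cloud grid rather than from Selenium itself.

```python
# Minimal Selenium check (illustrative; assumes ChromeDriver is installed).
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    d = webdriver.Chrome()   # or webdriver.Remote(...) against a grid / cloud provider
    yield d
    d.quit()

def test_homepage_loads(driver):
    driver.get("https://example.com")
    assert "Example Domain" in driver.title

# Run many such tests in parallel with: pytest -n auto   (requires pytest-xdist)
```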
Testers & Teams on the Agile Fluency™ Journey - TEST Huddle
The document discusses the Agile Fluency model, which aims to help teams and testers improve their agile skills and practices over time. It describes a pathway with increasing levels of fluency that provide more benefits, including delivering value, optimizing value, and innovating. Reaching higher levels requires investments in training, coaching, and changing team structures and roles. The model can help organizations determine what level of fluency they need and what investments are required for testing teams to operate at that level.
Practical Test Strategy Using Heuristics - TEST Huddle
Key Takeaways
- See what makes a good test strategy
- Learn how to make a thorough test strategy
- Identify what the 'Heuristic Test Strategy Model' is
- Develop a solid test strategy that fits fast
- Discover how diversification can help you to create a test strategy
Key Takeaways:
- A diagramming method that helps discuss roles
- A one page analysis heuristic for roles
- Why roles matter on projects
https://huddle.eurostarsoftwaretesting.com/resource/people-skills/thinking-through-your-role/
Key Takeaways:
- What will this release contain
- What impact will it have on your test runs
- How can you preserve your existing investment in tests using the Selenium WebDriver APIs, and your even older RC tests
- Looking forward, when will the W3C spec be complete
- What can we expect from Selenium 4
https://huddle.eurostarsoftwaretesting.com/
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Programming Foundation Models with DSPy - Meetup Slides
Albert Witteveen - With Cloud Computing Who Needs Performance Testing
1. With Cloud Computing,
Who Needs Performance Testing?
Albert Witteveen, Pluton IT
www.eurostarconferences.com
@esconfs
#esconfs
4. You just woke up after a 10-year nap:
Team member:
“We can add extra processing power and memory on the fly.
An extra database has a lead time of two weeks.”
5. Does this sound familiar?
Performance test: everything OK
Day 1 in production: we end up adding more than four times the hardware
6. 1. The tools simulate real clients but are never quite equal to them
2. Load profiles are based on too many assumptions
3. We report more accurately than we can measure
4. Long setup time → a limited number of tests
5. We hide it all in complex reports
7. We send and accept the same requests and responses, but can't anticipate slight changes
In production, a lot more is going on than just our test
Did we really get a good response?
Similar hardware is expensive
8. Cloud computing: adding extra hardware can be done on the fly, at a moment's notice
Given the high cost of performance testing and how easily we can 'speed things up' if needed:
Why bother testing? The money is better spent on that extra hardware
9. Just start with an overkill of hardware and scale down to what is
actually used!
10. (no text on this slide)
11. (no text on this slide)
12. Computers are either running or idling.
Queuing theory is an established model for performance engineers
It can describe the behavior of systems at every layer (a small simulation sketch follows below)
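The following is not part of the original slides; it is a minimal sketch, assuming a single M/M/1-style queuing center (random arrivals, one server, exponential service times). The function name simulate_queuing_center and the example rates are illustrative assumptions, not anything from the presentation.

import random

def simulate_queuing_center(arrival_rate, service_rate, n_jobs=100_000, seed=1):
    """Simulate one queuing center: jobs arrive at `arrival_rate` per second
    and are served one at a time at `service_rate` per second.
    Returns the mean residence time (wait time + service time) in seconds."""
    rng = random.Random(seed)
    arrival = 0.0          # clock of the most recent arrival
    server_free_at = 0.0   # when the server finishes its current job
    total_residence = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(arrival_rate)      # next job arrives
        service = rng.expovariate(service_rate)       # its service demand
        start = max(arrival, server_free_at)          # wait if the server is busy
        server_free_at = start + service
        total_residence += server_free_at - arrival   # wait + service
    return total_residence / n_jobs

# Same 10 ms service time, different load: waiting dominates near saturation.
print(simulate_queuing_center(arrival_rate=50, service_rate=100))  # ~0.02 s
print(simulate_queuing_center(arrival_rate=90, service_rate=100))  # ~0.10 s

The same model applies whether the "server" is a CPU, a disk, a thread pool or a downstream system, which is what makes it usable at every zoom level.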
13. (no text on this slide)
14. (no text on this slide)
15. (no text on this slide)
16. Queuing center: a location in our system where waiting (queuing) occurs; a bottleneck, if you will
◦ They can exist anywhere: CPU, memory, network, IO, other systems
◦ There is always at least one queuing center
◦ The queuing centers really determine the performance
◦ The queuing centers provide key information on scalability
◦ Service time and wait time are the real components of performance
17. Queuing models can describe anything: large connected systems, small systems, embedded systems ...
You can 'zoom in' and the model can describe the behavior of the server
You can keep zooming in to CPU, network, etc.
18. Multiple zoom levels
Residence time = wait time + service time (a worked example follows below)
There is always a queuing center
No queuing center found: look harder
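Not from the slides: a worked example of the residence-time relation, assuming the queuing center behaves like a single M/M/1 queue (an assumption, but a common first approximation). With service time S and utilisation \rho:

\[
R = W + S, \qquad W = \frac{\rho}{1-\rho}\,S \quad\Longrightarrow\quad R = \frac{S}{1-\rho}.
\]

For example, a call with $S = 40\,\text{ms}$ at $\rho = 0.5$ gives $R = 80\,\text{ms}$, but at $\rho = 0.9$ it gives $R = 400\,\text{ms}$: the service time did not change, the wait time did.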
19. Cloud computing is not infinite:
Financial limits
Technical limits: IO/network/CPU speed per process
We don't build supercomputers to calculate a mortgage offer
20. Always find the queuing centers
Based on the result, judge: 'yes, we are likely to meet requirements X, Y and Z'
Show where the risks are: 'requirement X cannot feasibly be met for function Y'
Explore the risks (one way to spot the queuing center in monitoring data is sketched below)
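Again not part of the slides, and only a sketch: one simple way to find the queuing center is to compare measured utilisations across resources during a test run. The resource names, the numbers and the 1/(1−U) residence-time estimate below are illustrative assumptions, reusing the M/M/1 approximation from the previous example.

# Hypothetical utilisations (fraction of capacity) collected during a test run,
# e.g. from vmstat/iostat/sar or the cloud provider's monitoring.
measured_utilisation = {
    "app server CPU": 0.35,
    "db server CPU": 0.45,
    "db disk IO": 0.92,
    "network": 0.20,
}

def likely_queuing_center(utilisation):
    """The resource closest to saturation is where the queue builds up."""
    return max(utilisation, key=utilisation.get)

bottleneck = likely_queuing_center(measured_utilisation)
u = measured_utilisation[bottleneck]
service_time_ms = 8.0  # measured service demand at that resource (assumption)

# Rough M/M/1-style estimate of residence time at the bottleneck.
estimated_residence_ms = service_time_ms / (1 - u)
print(f"Likely queuing center: {bottleneck} at {u:.0%} busy, "
      f"~{estimated_residence_ms:.0f} ms per request")

If every resource sits well below saturation and response times are still poor, the slides' rule applies: the queuing center exists, the monitoring just isn't seeing it yet (locks, thread pools, a downstream system).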
21. Explore identified resource-heavy components with stakeholders, developers and oracles
◦ Other uses of this component?
◦ Real frequency of usage?
◦ Validity of the (generic) requirement for this function?
Place the results in context:
◦ You may have a bigger issue than you thought
◦ Or it is actually OK for this usage
22. Define a set of key functions/use cases with stakeholders and experts (e.g. functional testers)
Per test, identify at least one queuing center
Compare with the generic requirements:
◦ Can they be met?
◦ Risk exists → explore → place in context → define further tests
The model allows you to place real behavior in context and make a realistic assessment of the risk
23. If no queuing center was found → the monitoring was not sufficient
Queuing centers:
◦ Tell you about the risks to core functionality: performance and financial
◦ Tell you about the ability to scale
◦ Show where scaling up will actually improve response time (see the worked scaling example below)
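An illustrative back-of-the-envelope (not from the slides) of why the queuing center drives scaling decisions, still under the M/M/1 approximation and assuming the load shards evenly across identical servers:

\[
\text{One server: } S = 40\,\text{ms},\; \rho = 0.9 \;\Rightarrow\; R = \frac{40}{1-0.9} = 400\,\text{ms};
\qquad
\text{three servers: } \rho = 0.3 \;\Rightarrow\; R = \frac{40}{1-0.3} \approx 57\,\text{ms}.
\]

The gain only materialises if the queuing center itself is what gets multiplied; if the real queue is a shared disk or a downstream system, adding cloud servers changes nothing, which is exactly why the queuing center has to be identified first.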
24. Stakeholders don't (necessarily) understand queuing models
Explain it in terms that matter to them, e.g. 'generating the offer takes 15 seconds'
Think of the systems as queuing systems and explain the behavior
25. Knowing what the behavior is can tell you:
◦ whether you can meet the requirements
◦ how to scale if needed
◦ whether performance can be met within budget
◦ whether you need to adapt your cloud (e.g. improve IO, network, CPU)
So yes: it still makes sense to do performance testing
26. A batch process was tested that had to be run from multiple servers
The process needed to be faster
Risk: the 'on-line' processes on the server should not be impacted
Finding: 3 servers, three times as fast. But no queuing center found?
Deep diving into the CPU monitoring revealed the queuing center: the process was pausing/waiting after each cycle
Conclusion → the on-line processes were not impacted, as there was sufficient CPU time left for the other processes
27. A stress point was found
It was unclear where the queuing center was
Cause: Java memory management can be deceptive at the OS level
The rule that the queuing center had to be found made us track it down: the absence of a queuing center makes you look further