Alexey Shpakov presents on testing in Jira Frontend. He discusses the testing pyramid with unit, integration, and end-to-end tests. He then introduces the concept of a "testing hourglass" which adds deployment and post-deployment verification to the pyramid. Key aspects of each type of test are discussed such as using feature flags, monitoring for flaky tests, and gradual rollouts to reduce risk.
Bringing Quality Design Systems to Life with Storybook & ApplitoolsApplitools
** Full webinar recording: https://youtu.be/R6WnEzlMHac **
Bringing design systems and component libraries to life can be a large, challenging process without the right tools. On top of that, maintaining a high level of quality throughout those systems brings its own challenge.
While there’s no shortage of ways to manually create a design system from scratch, doing so can be time consuming and can lead to technical debt when the system itself isn’t itself in a structure.
Storybook is a tool for developers that helps bring design systems and component libraries to life, providing structured tooling and a web dashboard. It gives those developers, and even designers, a way to focus on each individual component while being able to see the system from a higher perspective.
On top of that, Applitools is an automated Visual Testing solution that easily stacks right on top of Storybook with the Storybook Eyes SDK. With a single command, Applitools provides full test coverage for each component of your design system.
Join Developer Advocate, Colby Fayock, as he walks through:
How to take advantage of using Storybook to build scalable design systems
How Applitools makes automating the testing of those components easy
How to focus on building great experiences while automating quality checks with visual testing
Stop Testing (Only) The Functionality of Your Mobile Apps!Applitools
The document discusses different strategies for testing mobile apps, including testing functionality, usability, performance, and more extensively than just functionality. It addresses testing on real devices versus emulators, the need for accessibility testing, and tools for testing areas like contrast, text-to-speech, location services and network bandwidth. The document also discusses visual testing strategies like using AI to detect visual differences and validating user interfaces.
[webinar] Best of Breed: Successful Test Automation Practices from Innovative...Applitools
While test automation is a struggle for most teams everywhere, there are companies who have mastered their technique and are executing a very successful test automation strategy.
In this talk, Angie Jones shares the research on how top companies and global brands are approaching test automation, and successfully implementing it.
Angie was joined by a panel of QA executives, who also shared what they are seeing in the industry in regards to successful (and not so successful) test automation practices:
* Theresa Neate - QA Practise Lead @ Real Estate Group
* Amrit Sadhab - Digital Practise Lead @ Origin Energy
* Karen Mangio - QA Practise Lead @ NAB Mobile
* Cameron Bradley - Head of QA @ Tabcorp
Testing Design System Changes Across Your Application -- Intuit Use Case -- w...Applitools
This document discusses Intuit's use of design systems and testing methods. It provides context on Intuit as a company and the speaker's role. It defines what design systems are and how they are used at Intuit for products like TurboTax to maintain design consistency at scale. The document outlines the goals of testing design systems broadly across functionality, visuals, performance, accessibility, and more. It walks through Intuit's workflow and tools for testing design system changes, including unit/integration tests, accessibility checks, performance tests, visual regression testing, and testing components and full page mocks. Additional bonus tools used in Intuit's testing setup are also mentioned.
Functional to Visual: AI-powered UI Testing from Testim and ApplitoolsApplitools
As leaders in the application of AI to test automation, Applitools and Testim have come together to simplify test creation, maintenance and execution. Join this webinar to learn how you can elevate your approach to test automation with AI-powered codeless functional and visual UI testing.
The rise of DevOps and the increase in developer-QA collaboration has led to the introduction of new testing frameworks such as Espresso and XCUITest.
Join us and learn how organizations are improving pipeline efficiency by adding Espresso to their CI process as well as learn the basic concepts of instrumented test tools such as Espresso and XCUITest. This webinar will cover:
-Latest market trends causing this shift and why organizations are moving from Appium to Espresso
-For each framework (Espresso, XCUITest and Appium), we will cover:
-Characteristics
-Technology/Architecture
-Pros & Cons
-Demo of Espresso
"Software Quality in the Service of Innovation in the Insurance Industry"Applitools
This document discusses software quality in the insurance industry. It introduces Joe Emison, CTO of Branch Insurance, and discusses how Branch builds software frequently in small increments using test automation. It notes challenges with traditional test automation approaches and outlines Branch's approach using unit tests, API tests, and data-driven end-to-end tests run continuously. The document also discusses how ProdPerfect and Applitools can work together to provide effortless functional and visual testing through data analysis, test case discovery, and visual AI.
Web Accessibility Testing Trends and Shift Left Testing of accessibility usin...Narayanan Palani
Accessibility Testing is not easy and it needs right expertise and focus on WCAG guidelines to perform test design, execution and reporting right defects to prevent accessibility violations. This presentation helps in highlighting some of the key issues to address and best practises to use during accessibility testing.
Bringing Quality Design Systems to Life with Storybook & ApplitoolsApplitools
** Full webinar recording: https://youtu.be/R6WnEzlMHac **
Bringing design systems and component libraries to life can be a large, challenging process without the right tools. On top of that, maintaining a high level of quality throughout those systems brings its own challenge.
While there’s no shortage of ways to manually create a design system from scratch, doing so can be time consuming and can lead to technical debt when the system itself isn’t itself in a structure.
Storybook is a tool for developers that helps bring design systems and component libraries to life, providing structured tooling and a web dashboard. It gives those developers, and even designers, a way to focus on each individual component while being able to see the system from a higher perspective.
On top of that, Applitools is an automated Visual Testing solution that easily stacks right on top of Storybook with the Storybook Eyes SDK. With a single command, Applitools provides full test coverage for each component of your design system.
Join Developer Advocate, Colby Fayock, as he walks through:
How to take advantage of using Storybook to build scalable design systems
How Applitools makes automating the testing of those components easy
How to focus on building great experiences while automating quality checks with visual testing
Stop Testing (Only) The Functionality of Your Mobile Apps!Applitools
The document discusses different strategies for testing mobile apps, including testing functionality, usability, performance, and more extensively than just functionality. It addresses testing on real devices versus emulators, the need for accessibility testing, and tools for testing areas like contrast, text-to-speech, location services and network bandwidth. The document also discusses visual testing strategies like using AI to detect visual differences and validating user interfaces.
[webinar] Best of Breed: Successful Test Automation Practices from Innovative...Applitools
While test automation is a struggle for most teams everywhere, there are companies who have mastered their technique and are executing a very successful test automation strategy.
In this talk, Angie Jones shares the research on how top companies and global brands are approaching test automation, and successfully implementing it.
Angie was joined by a panel of QA executives, who also shared what they are seeing in the industry in regards to successful (and not so successful) test automation practices:
* Theresa Neate - QA Practise Lead @ Real Estate Group
* Amrit Sadhab - Digital Practise Lead @ Origin Energy
* Karen Mangio - QA Practise Lead @ NAB Mobile
* Cameron Bradley - Head of QA @ Tabcorp
Testing Design System Changes Across Your Application -- Intuit Use Case -- w...Applitools
This document discusses Intuit's use of design systems and testing methods. It provides context on Intuit as a company and the speaker's role. It defines what design systems are and how they are used at Intuit for products like TurboTax to maintain design consistency at scale. The document outlines the goals of testing design systems broadly across functionality, visuals, performance, accessibility, and more. It walks through Intuit's workflow and tools for testing design system changes, including unit/integration tests, accessibility checks, performance tests, visual regression testing, and testing components and full page mocks. Additional bonus tools used in Intuit's testing setup are also mentioned.
Functional to Visual: AI-powered UI Testing from Testim and ApplitoolsApplitools
As leaders in the application of AI to test automation, Applitools and Testim have come together to simplify test creation, maintenance and execution. Join this webinar to learn how you can elevate your approach to test automation with AI-powered codeless functional and visual UI testing.
The rise of DevOps and the increase in developer-QA collaboration has led to the introduction of new testing frameworks such as Espresso and XCUITest.
Join us and learn how organizations are improving pipeline efficiency by adding Espresso to their CI process as well as learn the basic concepts of instrumented test tools such as Espresso and XCUITest. This webinar will cover:
-Latest market trends causing this shift and why organizations are moving from Appium to Espresso
-For each framework (Espresso, XCUITest and Appium), we will cover:
-Characteristics
-Technology/Architecture
-Pros & Cons
-Demo of Espresso
"Software Quality in the Service of Innovation in the Insurance Industry"Applitools
This document discusses software quality in the insurance industry. It introduces Joe Emison, CTO of Branch Insurance, and discusses how Branch builds software frequently in small increments using test automation. It notes challenges with traditional test automation approaches and outlines Branch's approach using unit tests, API tests, and data-driven end-to-end tests run continuously. The document also discusses how ProdPerfect and Applitools can work together to provide effortless functional and visual testing through data analysis, test case discovery, and visual AI.
Web Accessibility Testing Trends and Shift Left Testing of accessibility usin...Narayanan Palani
Accessibility Testing is not easy and it needs right expertise and focus on WCAG guidelines to perform test design, execution and reporting right defects to prevent accessibility violations. This presentation helps in highlighting some of the key issues to address and best practises to use during accessibility testing.
[TAQfull Meetup] Angie Jones + Expert Panel: Best Practices in Quality Manage...Applitools
** Meetup session recording: https://youtu.be/Vo-PhgrOT0A **
While test automation is a struggle for most teams across the globe, there are companies who have mastered this -- and are executing a very successful test automation strategy.
In this special 90-minute live session, industry thought-leader Angie Jones shares the research on how top brands & global companies are approaching test automation, how they are successfully implementing it, and what are their building blocks for their top-notch quality teams.
Angie was joined by Quality Engineering executives, who shared what they are seeing in the industry in regards to successful (and not so successful) test automation practices:
* Stuart Day - Principal QA - Digital @ Dunelm and co-founder/ organizer @ TAQfull Meet-Up
* Marie Drake - Principle Test Automation Engineer @ NewsUK / Cypress.io Ambassador
* Matt Lowry - Principle Test Engineer @ BP (via ECS)
Myth vs Reality: Understanding AI/ML for QA Automation - w/ Jonathan LippsApplitools
** Full webinar recording -- https://youtu.be/ihpAsmRtGuM **
Artificial Intelligence and Machine Learning (AI/ML) have seen application in a variety of fields, including the automation of QA tasks. But what are they exactly? What distinguishes different instances and applications of AI, for example? What are the horizons of these technologies in the field of QA?
The promise of AI/ML must be understood correctly to be harnessed appropriately. As with any buzzword, many technologies and products are offered under the guise of AI/ML without satisfying the definition. The industry is reforming itself around the promise that AI/ML holds often without a clear understanding of the technical limitations that give the promise its boundaries.
In this webinar, test automation guru Jonathan Lipps gives a detailed overview of the concepts that underpin AI/ML, and discuss their ramifications for the work of QA automation.
In addition to a discussion of AI/ML in general, Jonathan looks at examples from the QA industry. These examples will help give attendees the basic understanding required to cut through the marketing language. so we can clearly evaluate AI/ML solutions, and calibrate expectations about the benefit of AI/ML in QA, both as it stands today and in the future.
This document discusses the evolution of software testing and how artificial intelligence is being applied to improve testing. It provides examples of intelligent testing tools that use AI to reduce flaky tests, increase test coverage, prioritize test cases, and predict failures. Such tools apply techniques like machine learning algorithms, computer vision, and natural language processing to testing tasks. The document also compares several popular AI-based testing tools and their pricing structures.
Modern Functional Test Automation Through Visual AI - webinar w/ Raja Rao Applitools
** Full webinar recording here: https://youtu.be/EaISHnCjNGY **
"I am confident that once you give this approach a try, you will rethink your entire current code-based approach" -- Raja Rao, Head of Test Automation University
In this webinar, you'll see the modern way or the intelligent way of doing web and mobile testing. Specifically, functional, end-to-end UI testing.
The analogy is a gasoline car versus an electric car: both are cars, both need tires, seats, breaks, etc... but the core engine that moves the car is different -- which makes a huge difference.
The main idea here is that, once the functionality in an app happens (for example: logging into an app), you simply take a screenshot of the resulting page or resulting state of the app, and take screenshots every time you run the test and compare them with the original screenshot using Visual AI (instead of pixel-by-pixel comparison, or DOM-diffing). If there is a difference, then the AI will highlight only meaningful differences and ignores differences that we humans ignore.
You'll see that by using this approach where you delegate a lot of work the Visual AI, you'll see exponential benefits, such as up to 5X increase in the number of bugs found, up to 10X less code and so on.
In this webinar, Raja Rao compares several typical functional testing use cases to show how it actually works.
Talking points:
* What is modern functional testing
* What is "Visual AI" -- and why you need it
* Deeply analyze legacy code based functional test and compare it with the modern approach (number of lines, locators, labels, etc...)
* Compare legacy versus modern code by going over some use cases and approaches, such as Data-driven testing, Sorting an HTML table, Testing a dynamic bar chart, Testing iFrames, Testing dynamic pages, etc…
By applying linting (static code analysis) tools to test code, preferably the same tools as for application code, tests can be improved which can eventually lead to better maintainability, readability and more robust tests, without even running them!
The basics of XCTest and XCUITest
How to write your first XCUITest
Ways to improve your continuous testing efforts using XCUITest, including Recorder, Query, Interactions, assertion methods, and HAR
Awesome Test Automation Made Simple w/ Dave HaeffnerSauce Labs
Learn how to build simple and powerful automated tests that will work on the browsers you care about, cover visual testing and functional regressions, and be configured to run automatically through the use of a continuous integration (CI) server.
Test Automation Frameworks: Assumptions, Concepts & ToolsAmit Rawat
The document discusses factors to consider when selecting a test automation framework. It describes how there are many options for frameworks available and outlines important criteria to evaluate, such as flexibility, ability to support different applications and interfaces, tool and language independence, parallel execution, and design patterns. The presentation provides examples of different types of frameworks and discusses strategies for building frameworks that can scale and evolve with changing needs.
Why Apps Succeed: 4 Keys to Winning the Digital Quality GameAustin Marie Gay
Every company with a digital presence aims at delivering a great digital experience. But why do some web and mobile apps succeed better than others? As part of our ongoing search to find out, we surveyed over 1,000 technical experts and business leaders from various industries.
Join us for a live webinar as we discuss the findings of this report with experts from Perfecto, Cigna and Shop.com! Topics include:
-The four main obstacles preventing digital success and how to overcome them
-How web & mobile teams are organized to meet the demand for faster releases
-The digital testing strategies that increase velocity and allow teams to keep up with consumer demand
-Why automation and real-user condition testing is critical for achieving success
Real Devices or Emulators: Wen to use What for Automated TestingSauce Labs
Join analyst David Gehringer of Dimensional Research and Sauce Labs in a Webinar that covers their recent research into how QA and dev engineers choose to test across emulators and real devices. Also, we’ll show you a demo of the Sauce Labs Real Device Cloud and how you can implement best practices of testing on both emulators and real devices to optimize your time and money.
Automated Visual Testing at Scale : Real-life Example from Dow JonesApplitools
** Full webinar recording can be seen here: https://youtu.be/b2D8WQCOCJw **
In this session -- hosted by Sumeet Mandloi, Engineering Director @ Dow Jones (Wall Street Journal), and Eran Barlev (Sr. Customer Success Engineer @ Applitools) -- you’ll learn how you can easily avoid front-end bugs and visual regressions, as well as substantially increase coverage, by adding automated visual testing to your existing automated tests.
In addition, Mr. Mandloi shared real-life tips on how to run automated visual testing at scale, with implementation examples from Dow Jones.
Watch this session, and learn how to:
-- Successfully perform large-scale automated visual testing
-- Leverage visual testing to increase coverage, while reducing maintenance efforts
-- Run cross-browser tests with visual validation in the cloud
-- Add visual validations to your existing automated functional and unit tests
In agile software development world, we are dealing with many test tasks such as user story testing, exploratory testing, check-list based testing, regression testing, performance testing, security testing in each sprint. Besides these testing activities, one of the test types which is considerably getting crucial is visual regression testing.
Visual regression testing focuses on to check visual contents and animations, page layout, and responsive design of a website/app. Because of the limits of human vision, human-based visual regression testing is generally error-prone and cumbersome. Hence, automation is inevitable. It enables us to run the tests much more precisely in a short time period. Also, it saves us a significant amount of time to deal with other manual test activities in each sprint.
In this talk, we will walk through well-known open-source and commercial solutions for visual test automation. We will learn which technologies they use, what type of visual tests they are suitable for, and their major differences between each other. Besides this overview, we will also make a real-life visual test automation demo by using Selenium, ImageMagick, and AShot.
The document discusses using Applitools for visual AI testing. It provides an overview of Applitools, which allows testing user interfaces through visual assertions rather than traditional code-based assertions. This avoids issues like managing locators and adjusting assertions. The document then demonstrates how to set up and implement visual AI testing using Applitools in Java, including configuring Eyes, setting up tests, and executing visual checks on elements and full pages. It concludes that while visual AI testing has advantages, a hybrid approach combining it with traditional testing may be best to avoid false positives.
Wrong Tool, Wrong Time: Re-Thinking Test Automation -- w/ State of Visual Tes...Applitools
Full webinar recording:
Go through this presentation and on-demand session to learn: What Are The World’s Most Innovative Testing Teams Doing That You Are Not?
As much as we all hate to admit it, our test automation efforts are struggling. Coverage is dropping. Bugs are escaping to production. Our apps are visually complex, growing rapidly, delivered continuously, and changing constantly - so much so that our functional framework is now bloated, broken, and unable to keep up with Agile and CI-CD release best practices.
No wonder that in our latest State of Visual Testing research, the majority of companies surveyed reported that their CI-CD and automation processes are not helping them to successfully compete in today's fast-paced ecosystem, and are not effective in ensuring software quality in a scalable and robust way.
But what about those elite testing teams that got it right? What's their secret? Can we copy what they did, instead of setting ourselves to fail?
With this presentation, and on-demand session discussing it, learn how the 10% of the world’s most innovative testing teams have reinvented their test automation to support a fully automated CI-CD process, and guaranteed their company's digital transformation was a success.
Use these resources to learn:
-- Why the majority of test automation efforts are falling behind
-- How your QA and testing efforts compare to these elite teams -- via live polling results
-- 4 modern techniques that the top 10% of testing teams globally are doing every day, and that you can do too
Enterprise Ready Test Execution Platform for Mobile AppsVijayan Srinivasan
When it comes to Mobile test execution, appium framework is the default choice of engineers for writing test cases. Running the appium testcases against multiple Android versions in parallel can be achieved via another open source tool called selenium grid.
Unfortunately selenium grid is not enterprise ready. Meaning the selenium grid cannot be used as a single test execution platform across enterprise level companies due to following issues
• Not available as a Web Application to run from Intuit Standard Containers (Tomcat, WHP)
• Device registry is maintained in-memory
• No support for High Availability / Disaster Recovery
• No support for External Device Cloud
• Not much debugging support (Screenshot, Exception or Log messages)
This talk will be covering the limitations of selenium grid and how Intuit modified the selenium grid to suit for enterprise needs.
I will share my experience of SDLC enablement on enterprise level. Uncover pitfalls and gotchas about building of developer friendly CI enabled service using industry standard static and dynamic scanning tools, CI platforms, ReportPortal, Carrier platform and Jira integration service.
In the past, Quality Assurance (QA) teams have relied on manual checks to look for visual issues, while testing on large and dynamic websites. However, with so many sites and features being added to the websites, this approach has proven time and resource intensive while also allowing critical visual bugs to slip into live websites causing brand damage and a poor digital experience for visitors.
Join this exciting webinar to learn:
How NSW Government Digital Channels’ Principal Quality Assurance Engineer, Sabbir Subhan and his QA team engineered the processes and tools that provide the ‘sweet spot’ between human and machine when it comes to visual QA testing. Understand how to shorten release cycles with Visual AI.
The document provides an overview of accessibility testing, standards, and implementation strategies. It discusses testing tools like screen readers and plugins that can be used to check for keyboard navigation, form labels, audio/video, and touch target size. It also outlines common web accessibility standards like WCAG 2.0/2.1 and Section 508, and recommends involving users with disabilities in testing. The document concludes by offering tips for establishing an organizational commitment to accessibility and an inclusive design process from the start of a project.
One of the common challenges in the digital space is improving the speed of releases without compromising the of quality of your app. The root of the problem is the market - customer expectations are on the rise, the app market is crowded, and app development is difficult. The solution is test automation.
Watch Perfecto and Infostretch demonstrate Quantum, an established open-source test framework, to run robust, repeatable, and continuous test scenarios.
In this technical webinar, the audience will learn how to use the test framework to
-Create robust and maintainable test automation scripts
-Extend open-source with advanced automation capabilities
-Execute cross-platform mobile and web tests in parallel
-Plug the newly created tests easily to the CI (Continuous Integration) workflow
-Drive fast developer feedback with an advanced reporting library
Most of the people might say that software test engineers do not write code. Testers normally need completely different skill set which could be a mix of Java, C, Ruby, and Python.
That is not all you require to be a successful tester. A tester requires having a good knowledge of the software manuals and automation tools.
Depending on the complexity of a project, a software testing engineer may write more complicated code than the developer.
The document discusses why software developers should use FlexUnit, an automated unit testing framework for Flex and ActionScript projects. It notes that developers spend 80% of their time debugging code and that errors found later in the development process can cost 100x more to fix than early errors. FlexUnit allows developers to automate unit tests so that tests can be run continually, finding errors sooner when they are cheaper to fix. Writing automated tests also encourages developers to write better structured, more testable and maintainable code. FlexUnit provides a testing architecture and APIs to facilitate automated unit and integration testing as well as different test runners and listeners to output test results.
[TAQfull Meetup] Angie Jones + Expert Panel: Best Practices in Quality Manage...Applitools
** Meetup session recording: https://youtu.be/Vo-PhgrOT0A **
While test automation is a struggle for most teams across the globe, there are companies who have mastered this -- and are executing a very successful test automation strategy.
In this special 90-minute live session, industry thought-leader Angie Jones shares the research on how top brands & global companies are approaching test automation, how they are successfully implementing it, and what are their building blocks for their top-notch quality teams.
Angie was joined by Quality Engineering executives, who shared what they are seeing in the industry in regards to successful (and not so successful) test automation practices:
* Stuart Day - Principal QA - Digital @ Dunelm and co-founder/ organizer @ TAQfull Meet-Up
* Marie Drake - Principle Test Automation Engineer @ NewsUK / Cypress.io Ambassador
* Matt Lowry - Principle Test Engineer @ BP (via ECS)
Myth vs Reality: Understanding AI/ML for QA Automation - w/ Jonathan LippsApplitools
** Full webinar recording -- https://youtu.be/ihpAsmRtGuM **
Artificial Intelligence and Machine Learning (AI/ML) have seen application in a variety of fields, including the automation of QA tasks. But what are they exactly? What distinguishes different instances and applications of AI, for example? What are the horizons of these technologies in the field of QA?
The promise of AI/ML must be understood correctly to be harnessed appropriately. As with any buzzword, many technologies and products are offered under the guise of AI/ML without satisfying the definition. The industry is reforming itself around the promise that AI/ML holds often without a clear understanding of the technical limitations that give the promise its boundaries.
In this webinar, test automation guru Jonathan Lipps gives a detailed overview of the concepts that underpin AI/ML, and discuss their ramifications for the work of QA automation.
In addition to a discussion of AI/ML in general, Jonathan looks at examples from the QA industry. These examples will help give attendees the basic understanding required to cut through the marketing language. so we can clearly evaluate AI/ML solutions, and calibrate expectations about the benefit of AI/ML in QA, both as it stands today and in the future.
This document discusses the evolution of software testing and how artificial intelligence is being applied to improve testing. It provides examples of intelligent testing tools that use AI to reduce flaky tests, increase test coverage, prioritize test cases, and predict failures. Such tools apply techniques like machine learning algorithms, computer vision, and natural language processing to testing tasks. The document also compares several popular AI-based testing tools and their pricing structures.
Modern Functional Test Automation Through Visual AI - webinar w/ Raja Rao Applitools
** Full webinar recording here: https://youtu.be/EaISHnCjNGY **
"I am confident that once you give this approach a try, you will rethink your entire current code-based approach" -- Raja Rao, Head of Test Automation University
In this webinar, you'll see the modern way or the intelligent way of doing web and mobile testing. Specifically, functional, end-to-end UI testing.
The analogy is a gasoline car versus an electric car: both are cars, both need tires, seats, breaks, etc... but the core engine that moves the car is different -- which makes a huge difference.
The main idea here is that, once the functionality in an app happens (for example: logging into an app), you simply take a screenshot of the resulting page or resulting state of the app, and take screenshots every time you run the test and compare them with the original screenshot using Visual AI (instead of pixel-by-pixel comparison, or DOM-diffing). If there is a difference, then the AI will highlight only meaningful differences and ignores differences that we humans ignore.
You'll see that by using this approach where you delegate a lot of work the Visual AI, you'll see exponential benefits, such as up to 5X increase in the number of bugs found, up to 10X less code and so on.
In this webinar, Raja Rao compares several typical functional testing use cases to show how it actually works.
Talking points:
* What is modern functional testing
* What is "Visual AI" -- and why you need it
* Deeply analyze legacy code based functional test and compare it with the modern approach (number of lines, locators, labels, etc...)
* Compare legacy versus modern code by going over some use cases and approaches, such as Data-driven testing, Sorting an HTML table, Testing a dynamic bar chart, Testing iFrames, Testing dynamic pages, etc…
By applying linting (static code analysis) tools to test code, preferably the same tools as for application code, tests can be improved which can eventually lead to better maintainability, readability and more robust tests, without even running them!
The basics of XCTest and XCUITest
How to write your first XCUITest
Ways to improve your continuous testing efforts using XCUITest, including Recorder, Query, Interactions, assertion methods, and HAR
Awesome Test Automation Made Simple w/ Dave HaeffnerSauce Labs
Learn how to build simple and powerful automated tests that will work on the browsers you care about, cover visual testing and functional regressions, and be configured to run automatically through the use of a continuous integration (CI) server.
Test Automation Frameworks: Assumptions, Concepts & ToolsAmit Rawat
The document discusses factors to consider when selecting a test automation framework. It describes how there are many options for frameworks available and outlines important criteria to evaluate, such as flexibility, ability to support different applications and interfaces, tool and language independence, parallel execution, and design patterns. The presentation provides examples of different types of frameworks and discusses strategies for building frameworks that can scale and evolve with changing needs.
Why Apps Succeed: 4 Keys to Winning the Digital Quality GameAustin Marie Gay
Every company with a digital presence aims at delivering a great digital experience. But why do some web and mobile apps succeed better than others? As part of our ongoing search to find out, we surveyed over 1,000 technical experts and business leaders from various industries.
Join us for a live webinar as we discuss the findings of this report with experts from Perfecto, Cigna and Shop.com! Topics include:
-The four main obstacles preventing digital success and how to overcome them
-How web & mobile teams are organized to meet the demand for faster releases
-The digital testing strategies that increase velocity and allow teams to keep up with consumer demand
-Why automation and real-user condition testing is critical for achieving success
Real Devices or Emulators: Wen to use What for Automated TestingSauce Labs
Join analyst David Gehringer of Dimensional Research and Sauce Labs in a Webinar that covers their recent research into how QA and dev engineers choose to test across emulators and real devices. Also, we’ll show you a demo of the Sauce Labs Real Device Cloud and how you can implement best practices of testing on both emulators and real devices to optimize your time and money.
Automated Visual Testing at Scale : Real-life Example from Dow JonesApplitools
** Full webinar recording can be seen here: https://youtu.be/b2D8WQCOCJw **
In this session -- hosted by Sumeet Mandloi, Engineering Director @ Dow Jones (Wall Street Journal), and Eran Barlev (Sr. Customer Success Engineer @ Applitools) -- you’ll learn how you can easily avoid front-end bugs and visual regressions, as well as substantially increase coverage, by adding automated visual testing to your existing automated tests.
In addition, Mr. Mandloi shared real-life tips on how to run automated visual testing at scale, with implementation examples from Dow Jones.
Watch this session, and learn how to:
-- Successfully perform large-scale automated visual testing
-- Leverage visual testing to increase coverage, while reducing maintenance efforts
-- Run cross-browser tests with visual validation in the cloud
-- Add visual validations to your existing automated functional and unit tests
In agile software development world, we are dealing with many test tasks such as user story testing, exploratory testing, check-list based testing, regression testing, performance testing, security testing in each sprint. Besides these testing activities, one of the test types which is considerably getting crucial is visual regression testing.
Visual regression testing focuses on to check visual contents and animations, page layout, and responsive design of a website/app. Because of the limits of human vision, human-based visual regression testing is generally error-prone and cumbersome. Hence, automation is inevitable. It enables us to run the tests much more precisely in a short time period. Also, it saves us a significant amount of time to deal with other manual test activities in each sprint.
In this talk, we will walk through well-known open-source and commercial solutions for visual test automation. We will learn which technologies they use, what type of visual tests they are suitable for, and their major differences between each other. Besides this overview, we will also make a real-life visual test automation demo by using Selenium, ImageMagick, and AShot.
The document discusses using Applitools for visual AI testing. It provides an overview of Applitools, which allows testing user interfaces through visual assertions rather than traditional code-based assertions. This avoids issues like managing locators and adjusting assertions. The document then demonstrates how to set up and implement visual AI testing using Applitools in Java, including configuring Eyes, setting up tests, and executing visual checks on elements and full pages. It concludes that while visual AI testing has advantages, a hybrid approach combining it with traditional testing may be best to avoid false positives.
Wrong Tool, Wrong Time: Re-Thinking Test Automation -- w/ State of Visual Tes...Applitools
Full webinar recording:
Go through this presentation and on-demand session to learn: What Are The World’s Most Innovative Testing Teams Doing That You Are Not?
As much as we all hate to admit it, our test automation efforts are struggling. Coverage is dropping. Bugs are escaping to production. Our apps are visually complex, growing rapidly, delivered continuously, and changing constantly - so much so that our functional framework is now bloated, broken, and unable to keep up with Agile and CI-CD release best practices.
No wonder that in our latest State of Visual Testing research, the majority of companies surveyed reported that their CI-CD and automation processes are not helping them to successfully compete in today's fast-paced ecosystem, and are not effective in ensuring software quality in a scalable and robust way.
But what about those elite testing teams that got it right? What's their secret? Can we copy what they did, instead of setting ourselves to fail?
With this presentation, and on-demand session discussing it, learn how the 10% of the world’s most innovative testing teams have reinvented their test automation to support a fully automated CI-CD process, and guaranteed their company's digital transformation was a success.
Use these resources to learn:
-- Why the majority of test automation efforts are falling behind
-- How your QA and testing efforts compare to these elite teams -- via live polling results
-- 4 modern techniques that the top 10% of testing teams globally are doing every day, and that you can do too
Enterprise Ready Test Execution Platform for Mobile AppsVijayan Srinivasan
When it comes to Mobile test execution, appium framework is the default choice of engineers for writing test cases. Running the appium testcases against multiple Android versions in parallel can be achieved via another open source tool called selenium grid.
Unfortunately selenium grid is not enterprise ready. Meaning the selenium grid cannot be used as a single test execution platform across enterprise level companies due to following issues
• Not available as a Web Application to run from Intuit Standard Containers (Tomcat, WHP)
• Device registry is maintained in-memory
• No support for High Availability / Disaster Recovery
• No support for External Device Cloud
• Not much debugging support (Screenshot, Exception or Log messages)
This talk will be covering the limitations of selenium grid and how Intuit modified the selenium grid to suit for enterprise needs.
I will share my experience of SDLC enablement on enterprise level. Uncover pitfalls and gotchas about building of developer friendly CI enabled service using industry standard static and dynamic scanning tools, CI platforms, ReportPortal, Carrier platform and Jira integration service.
In the past, Quality Assurance (QA) teams have relied on manual checks to look for visual issues, while testing on large and dynamic websites. However, with so many sites and features being added to the websites, this approach has proven time and resource intensive while also allowing critical visual bugs to slip into live websites causing brand damage and a poor digital experience for visitors.
Join this exciting webinar to learn:
How NSW Government Digital Channels’ Principal Quality Assurance Engineer, Sabbir Subhan and his QA team engineered the processes and tools that provide the ‘sweet spot’ between human and machine when it comes to visual QA testing. Understand how to shorten release cycles with Visual AI.
The document provides an overview of accessibility testing, standards, and implementation strategies. It discusses testing tools like screen readers and plugins that can be used to check for keyboard navigation, form labels, audio/video, and touch target size. It also outlines common web accessibility standards like WCAG 2.0/2.1 and Section 508, and recommends involving users with disabilities in testing. The document concludes by offering tips for establishing an organizational commitment to accessibility and an inclusive design process from the start of a project.
One of the common challenges in the digital space is improving the speed of releases without compromising the of quality of your app. The root of the problem is the market - customer expectations are on the rise, the app market is crowded, and app development is difficult. The solution is test automation.
Watch Perfecto and Infostretch demonstrate Quantum, an established open-source test framework, to run robust, repeatable, and continuous test scenarios.
In this technical webinar, the audience will learn how to use the test framework to
-Create robust and maintainable test automation scripts
-Extend open-source with advanced automation capabilities
-Execute cross-platform mobile and web tests in parallel
-Plug the newly created tests easily to the CI (Continuous Integration) workflow
-Drive fast developer feedback with an advanced reporting library
Most of the people might say that software test engineers do not write code. Testers normally need completely different skill set which could be a mix of Java, C, Ruby, and Python.
That is not all you require to be a successful tester. A tester requires having a good knowledge of the software manuals and automation tools.
Depending on the complexity of a project, a software testing engineer may write more complicated code than the developer.
The document discusses why software developers should use FlexUnit, an automated unit testing framework for Flex and ActionScript projects. It notes that developers spend 80% of their time debugging code and that errors found later in the development process can cost 100x more to fix than early errors. FlexUnit allows developers to automate unit tests so that tests can be run continually, finding errors sooner when they are cheaper to fix. Writing automated tests also encourages developers to write better structured, more testable and maintainable code. FlexUnit provides a testing architecture and APIs to facilitate automated unit and integration testing as well as different test runners and listeners to output test results.
assertYourself - Breaking the Theories and Assumptions of Unit Testing in Flexmichael.labriola
This document discusses automated testing in Flex. It begins by explaining why automated testing is important, such as reducing costs from software errors and allowing developers to change code without fear of breaking other parts of the project. It then covers topics like writing unit tests, using theories and data points to test over multiple values, and writing integration tests. The document emphasizes that writing testable code is key, and provides some principles for doing so, such as separating construction from application logic and using interfaces. It also discusses using fakes, stubs and mocks to isolate units for testing.
Continuous Integration testing based on Selenium and HudsonZbyszek Mockun
Open source tools in continuous integration environment, article describe who to use Selenium and Hudson to achieve CI testing.
Article has been written for Testing Experience magazine
Open Source tools in Continuous Integration environment (case study for agil...suwalki24.pl
Article wrote for Testing Experience magazine, publicated in December 2010.
The aim of this article is to share our experience in building and
managing Continuous Integration environments on the basis of
open-source tools like Hudson and Selenium. In this article we
will concentrate on testing purposes, suggest just few improvements
and describe our experience with using open-source tools.
The main idea is to present how to use automated tests reasonably
by minimizing the time spent on them while optimizing the
benefits that automated tests give us.
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...Abdelkrim Boujraf
The document discusses the authors' experience with different testing strategies at their company StratEx. They initially used Selenium for UI testing but found it did not help when they frequently changed screens. They then investigated Test-Driven Development (TDD) but found it inefficient, as tests are also code that must be written and maintained. Behavior-Driven Development (BDD) showed more promise as it focuses on functionality rather than architecture and bridges communication between users and developers. However, no methodology fully describes large, complex systems. The search for the best testing approach is ongoing.
The document discusses unit testing fundamentals, including definitions of unit testing, benefits of unit testing like fewer bugs, and differences between unit and functional testing. It provides best practices for unit testing like tests running fast and in isolation. The document also covers test-driven development, exposing seams to make code testable, using mocking frameworks, and maintaining good test coverage.
The document discusses Test Driven Development (TDD). It describes the TDD cycle of writing an initially failing test, then code to pass the test, and refactoring. It proposes adopting TDD practices like writing tests for components before code and using continuous integration. It also discusses using code analysis tools in integration and avoiding tests sharing developer blind spots. Shortcomings discussed are difficulty testing interfaces and risk of false security from tests.
This is a 90 min talk with some exercises and discussion that I gave at the DHS Agile Expo. It places DevOps as a series of feedback loops and emphasizes agile engineering practices being at the core.
The document discusses shifting accessibility testing left in the software development lifecycle (SDLC) to reduce costs and bugs. It describes how automated testing tools like aXe can be integrated early in development using techniques like Selenium. The benefits of an earlier, automated approach are outlined, including lower costs to fix issues and fewer accessibility failures in production. Examples of accessibility linters for different platforms are also provided.
In computer programming and software testing, smoke testing (also confidence testing or sanity testing) is preliminary testing to reveal simple failures severe enough to (for example) reject a prospective software release.
This was a workshop given on the UTN University, for the Software Engineering students. The idea is to give a brief explanation about TDD, and how to use it.
The document discusses test-driven development (TDD) and how it helps software development through continuous feedback loops. TDD involves writing automated tests before code is written to drive the development process. This results in code that is modular, well-structured, and easy to modify through refactoring. The process of writing tests first helps clarify requirements and encourages loose coupling between components to facilitate testing. Multiple levels of testing, from unit to acceptance, provide feedback at different stages of development.
This document provides 5 insights to revolutionize software testing: 1) There are two types of code (experience and infrastructure) that require different testing approaches; 2) Testing should focus on capabilities rather than features; 3) Focus on testing techniques rather than individual test cases; 4) Testing should improve development rather than just find bugs; 5) Testing needs innovation to engage talent and avoid repetitive work. The author advocates shifting testing strategy to higher levels of abstraction and partnering with development to build quality in from the start.
Static analysis is most efficient when being used regularly. We'll tell you w...PVS-Studio
The document discusses best practices for using static code analysis tools to maximize their effectiveness. It recommends: 1) Marking false positives to reduce future messages, 2) Using incremental analysis to check modified files, 3) Checking files modified in the last few days, and 4) Running analysis nightly on a build server. Following all recommendations provides the highest return on investment in static analysis by catching errors earlier in development.
This document discusses testing practices for PL/SQL code. It begins with a scenario showing inadequate testing approaches and then advocates for more rigorous unit testing. Specifically, it recommends writing unit tests before coding, with the goals of improving code design, interfaces, and modularity. Automated, repeatable unit tests are preferable to ad-hoc testing. Writing tests first helps ensure requirements are fully met and allows developers to know when a program is complete. While initially time-consuming, formal unit testing saves time by reducing bugs and catching errors earlier.
Testing Experience - Evolution of Test Automation FrameworksŁukasz Morawski
Implementing automated tests is something that everybody wants to do. If you ask
any tester, test automation is their aim. And while it may be the golden target, very
few testers take pains to assess the required knowledge, under the illusion that a
programming language or expensive tool will suffice to cope with all problems likely
to arise. This is not true. Writing good automated tests is much harder than that,
requiring knowledge this article will make clear
Foundation level testing Concepts,Non function testing ,Non-Functional testing ,Selenium Tool,
What is Software Testing Software Testing is an activity in software development.
It is an investigation performed against a software to provide information about the quality of the software to stakeholders.
Software testing is associated with the two terms.
Validation: Are we doing the right job?
Verification: Are we doing the job right?
Case study "Virtual Show Room" – VSR,water fall model,General Principles of Testing,
The General V-Model
Unit Testing
Component Testing
Integration Testing
System Testing
Acceptance Testing
This document discusses Google's approach to testing software at different levels. It defines small, medium, and large tests based on their properties. Small tests are unit tests that test individual functions and classes. Medium tests test interactions between modules on a single machine. Large tests are system or integration tests that exercise complete applications and external dependencies. The document emphasizes writing many small tests and using fakes and mocks to isolate dependencies. It also discusses strategies for dealing with flaky tests, such as automatically quarantining flaky tests. Finally, it provides an example of how large tests may work at different stages from development to production.
Software testing is the process of executing a program to identify errors. It involves evaluating a program's capabilities and determining if it meets requirements. Software can fail in many complex ways due to its non-physical nature. Exhaustive testing of all possibilities is generally infeasible due to complexity. The objectives of testing include finding errors through designing test cases that systematically uncover different classes of errors with minimal time and effort. Principles of testing include traceability to requirements, planning tests before coding begins, and recognizing that exhaustive testing is impossible.
Similar to Testing Hourglass at Jira Frontend - by Alexey Shpakov, Sr. Developer @ Atlassian (20)
Leveraging AI for Mobile App Testing on Real Devices | Applitools + KobitonApplitools
Explore how to use the cutting-edge integration of Visual AI from Applitools with Kobiton's real mobile device cloud to create a comprehensive solution for continuous UI testing. See more information and find the on-demand recording at applitools.com.
Visual AI for eCommerce: Improving Conversions with a Flawless UIApplitools
Discover practical, AI-driven solutions to streamline test process, maintain high-quality user experiences, and accelerate eCommerce growth. Session recording and more info at applitools.com
A Test Automation Platform Designed for the FutureApplitools
Looking for cutting-edge AI-based test automation tools to level up your SDLC today? In this webinar, we will hit reset on the industry expectations around what your tooling needs to look and act like—and give you a preview of the new product we’ve been pouring ourselves into. You will see why now is the time to shake things up and push beyond what you thought possible in your test automation practice.
Explore the capabilities of AI in software test automation and see a demonstration of how AI can be used today to significantly expand end-to-end test coverage in this session with Applitools CTO Adam Carmi. Plus, see a special sneak peek of the next great wave in test automation—autonomous testing.
More info and session materials at http://applitools.info/xe6
Test Automation at Scale: Lessons from Top-Performing Distributed TeamsApplitools
Leaders from top-performing teams share successful techniques and strategies for the implementation and execution of test automation at scale.
See the session recording and more details at http://applitools.info/k6tj
Can AI Autogenerate and Run Automated Tests?Applitools
Explore how your team can leverage AI and the combined power of GitHub and Applitools to rapidly expand your automated testing capabilities in this interactive session with GitHub’s Developer Advocate Rizel Scarlett and Software Quality Evangelist Anand Bagmar.
Session recording and more info at https://applitools.info/hdt
See a practical demonstration of:
• Streamlining test implementation and maintenance using GitHub Copilot
• How Copilot Chat can provide valuable suggestions for code improvements and refactoring
• Running automated tests automatically when code is merged to the main branch using GitHub Actions
• Self-healing your automation using the Applitools Execution Cloud and scale seamlessly with the Applitools Ultrafast Grid
• How GitHub Copilot can help developers recall syntax with different programming languages
Triple Assurance: AI-Powered Test Automation in UI Design and FunctionalityApplitools
Explore the efficiencies of test automation using the GienTech automation framework enhanced by the AI-powered Applitools platform.
Details and session recording with demonstration at https://applitools.info/j90
Navigating the Challenges of Testing at Scale: Lessons from Top-Performing TeamsApplitools
Focusing on three key areas—Test Cases, Test Data, and Test Execution—these leaders shared their experience with the tools and techniques that have proven successful in their organizations. Along the way, they discussed their journey to testing at scale and which technologies and strategies have helped them reach their goals. Plus, they talked about the new innovations their top-performing teams are pursuing.
More info and session materials at applitools.com
Introducing the Applitools Self Healing Execution Cloud.pdfApplitools
In this session with Applitools co-founder Adam Carmi, you will see the Applitools Execution Cloud in action, learn how self-healing works under the hood, and explore how you can execute your test suites in orders of magnitude faster and more stable than with any other test execution infrastructure.
Session recording and more info at https://applitools.info/ixn
Key takeaways:
• What is self-healing technology and why is it useful?
• Learn how self-healing works under the hood
• Learn how to run a Selenium test on the Applitools Execution Cloud
• Learn how to easily implement effective cross-device and browser tests
Unlocking the Power of ChatGPT and AI in Testing - NextSteps, presented by Ap...Applitools
The document discusses AI tools for software testing such as ChatGPT, Github Copilot, and Applitools Visual AI. It provides an overview of each tool and how they can help with testing tasks like test automation, debugging, and handling dynamic content. The document also covers potential challenges with AI like data privacy issues and tools having superficial knowledge. It emphasizes that AI should be used as an assistance to humans rather than replacing them and that finding the right balance and application of tools is important.
Collaborating From Design To Experience: Introducing CentraApplitools
Get an exclusive look at Applitools’ newest product, Centra. Centra is revolutionizing the way teams collaborate on UI by addressing one of the most challenging and important parts of the product delivery lifecycle – the handoff between designs and implementations.
With Centra, designers, developers, testers, product managers, and marketers can track, validate, and collaborate on UIs from design in Figma to implementation in a customer’s web browser, ensuring that there is no more drift between designs and development.
Don’t miss this opportunity to see how Centra can help you streamline your UI delivery process and improve collaboration within your team.
What the QA Position Will Look Like in the FutureApplitools
The quality assurance industry is constantly changing and evolving. In the future, the QA role will involve more automated tests, infrastructure knowledge, and heuristics skills. QA professionals will take on responsibilities like AI testing, web3 testing, observability, and security testing. Soft skills like communication and problem solving will also remain important as computers are still limited in replicating human interaction.
Everyone wants to make quick releases, but the look-and-feel UX validation is a manual, slow, and error-prone activity. Any UX-related issues propping up cause huge brand-value and revenue loss, may lead to social-trolling, and, even worse, dilute your user base. This is an area where AI & ML can help. In this hands-on workshop, using examples, we will explore: the importance of automated visual validation how an AI-powered tool, Applitools Visual AI, can solve this problem. Integrate Applitools Visual AI in your Selenium-Java automation and learn by practice: The different AI algorithms various Applitools capabilities and features scale your automation using the Applitools Ultrafast Grid.
Workshop: Head-to-Head Web Testing: Part 1 with CypressApplitools
The web has evolved. Finally, testing has too. Cypress is a modern testing tool that answers the testing needs of modern web applications. It has been gaining a lot of traction in the last couple of years, gaining worldwide popularity. If you have been waiting to learn Cypress, wait no more! Filip Hric will guide you through the first steps on how to start using Cypress and set up a project on your own. The good news is, learning Cypress is incredibly easy. You’ll write your first test in no time, and then you’ll discover how to write a full end-to-end test for a modern web application.
From Washing Cars To Automating Test ApplicationsApplitools
Join Rex Jones II as he takes you through his inspiring career journey from washing cars to test automation. Explore his big break into testing with automation and the challenges he faced leading up to that moment.
A Holistic Approach to Testing in Continuous DeliveryApplitools
Lisa Crispin shares her experiences with striving to deploy smaller changes more frequently. Explore the useful experiments Lisa and her team used to overcome common challenges and move towards successful CD.
Anand Bagmar discusses AI-powered cross-browser testing using Applitools Visual AI. It adds artificial intelligence capabilities to functional automation testing, requiring less code while providing greater test coverage and more stable code, resulting in fewer defects in production. By using Applitools Visual AI with an ultrafast cloud, it enables AI-powered cross-browser testing with less test data, less load on environments, and less flakiness from infrastructure, browser, and network issues. The solution is described as being super fast, allowing seamless scaling, and easy to use.
Workshop: An Introduction to API Automation with JavascriptApplitools
APIs are an essential part of an increasingly large number of applications that we use daily. APIs enable applications to exchange data and functionality easily and securely. As testers, we want to ensure that our APIs do not break and provide the expected functionality. We can automate our APIs to speed up the rate at which our checks are done.
This workshop is geared toward persons who are new to API automation, who want a refresher or want to learn how to automate APIs using Supertest (a JS framework). In this workshop, you will learn how to get started with automating APIs using Supertest (a JS framework). We will be writing test automation for the restful-booker and the SpaceX-graphQL API.
The workshop will cover how to automate common API requests (GET, POST and PUT), negative tests for your API as well as check that your APIs handle errors appropriately and follow the specified schema.
During this workshop, you will also learn how to automate workflows for an API. To follow along with this workshop, Postman installed on your machine.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Atelier - Innover avec l’IA Générative et les graphes de connaissances
Allez au-delà du battage médiatique autour de l’IA et découvrez des techniques pratiques pour utiliser l’IA de manière responsable à travers les données de votre organisation. Explorez comment utiliser les graphes de connaissances pour augmenter la précision, la transparence et la capacité d’explication dans les systèmes d’IA générative. Vous partirez avec une expérience pratique combinant les relations entre les données et les LLM pour apporter du contexte spécifique à votre domaine et améliorer votre raisonnement.
Amenez votre ordinateur portable et nous vous guiderons sur la mise en place de votre propre pile d’IA générative, en vous fournissant des exemples pratiques et codés pour démarrer en quelques minutes.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
SMS API Integration in Saudi Arabia| Best SMS API ServiceYara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
What is Master Data Management by PiLog Groupaymanquadri279
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Transform Your Communication with Cloud-Based IVR Solutions
Testing Hourglass at Jira Frontend - by Alexey Shpakov, Sr. Developer @ Atlassian
1. ALEXEY SHPAKOV | SENIOR DEVELOPER
Testing hourglass
at Jira Frontend
Hi, my name is Alexey Shpakov. I work in Jira Frontend Platform team. We are responsible for builds, deployments, dev experience, and repository maintenance.
Today I am going to talk about testing in Jira Frontend.
2. Testing pyramid
e2e
unit
integration
complexity
maintenance
fragility
duration
Testing pyramid is a metaphor coined by Mike Cohn. It acknowledges the existence of different types of tests and provides a suggestion, what could be a healthy ratio between the
groups.
The pyramid starts with unit tests and suggests this should be the majority of the tests. These tests focus on testing small units, comprising the application. On the next level you
can find integration tests which test that separate units work together properly. At the top you can find end-to-end tests, which test application as a whole.
The area of the pyramid stands for the number of tests. That is, the metaphor suggests, you want to have a lot of unit tests, fewer integration tests and just a handful of end-to-end
tests.
These three categories of tests, however, differ on several parameters.
Unit tests are by far the simplest ones - they are focused on a particular functionality and mock any dependencies. This allows us to guarantee, we are testing the code at hand and
nothing else. This also means, we’ll need to modify the unit test only when corresponding functionality changes. As we mock all the dependencies, unit tests are quite stable and
fast to run.
As integration tests focus on interaction or integration within groups of units, they usually include at least two not mocked units and run different integration scenarios between
them. As the number of unit interactions grows, we get more tests, involving a given piece of code. Here we need to account for multiple units at once, which makes integration
tests more complicated. Subsequently, as any of the units change, we may need to adjust all the tests, the unit is involved into. With many real non-mocked dependencies,
integration tests are also more prone to breakages and take more time to run.
End-to-end tests work with the whole application and aim to replicate real user scenarios. This means, even before you write the first test you need to deploy the application. This
3. could sometimes be a huge effort on its own. If your service is stateful, you may need to prepopulate the database before running some of the tests. The majority of user scenarios
interact with a big number of application pieces. This means, whenever one of them is slightly modified, you may need to adjust multiple end-to-end tests. This makes end-to-end
tests high maintenance and fragile. On top of that, as you are testing a real application, the tests are susceptible to network latency and other real-world inconsistencies, which
makes them even more fragile and slow.
4. Is it good enough?
Photo by Ludovic Migneault on Unsplash
Testing pyramid is a great rule of thumb that can help structure your testing strategy.
Once you introduce different types of tests with decent application coverage, the quality will certainly go up.
But is it sufficient? Is it good enough in a long-term perspective?
5. Real world is more
complicated
Photo by Jean-Louis Paulin on Unsplash
As the web applications become more complicated and get more customers, even with a properly implemented testing strategy, it becomes increasingly hard to guarantee, that
all the possible combinations of inputs are properly tested. At that stage you will realise, that your customers unknowingly have become manual testers for your product. In fact,
some of them might even create tickets in your public ticketing system. This is bad customer service.
If you were strictly following the testing pyramid, you would start heavily invest in adding a lot more end-to-end tests in order to account for all sorts of unusual combinations of
inputs. The issue is that in production we are dealing with an exponential number of those combinations. Subsequently, you will need to have an exponential number of end-to-
end tests. These tests require a lot of effort to write and maintain and they are slow and fragile. As a result, developers will have to spend a lot of their time maintaining these tests
instead of producing new features. Our releases will get delayed because of the amount of time it takes to run all the tests.
6. Let's step back and recall, what testing is about. If you ask Google, you will get the following definition.
To test means “take measures to check the quality, performance, or reliability of (something), especially before putting it into widespread use or practice”.
The word “widespread” is important here. It is easy to assume, testing can only be performed before the code gets deployed to production. However, testing your code in production
is as important as testing it before the deployment.
7. e2e
unit
integration
deployment
PDV
monitoring
logging
Testing hourglass
At Atlassian we acknowledge that having a good testing pyramid is not enough to deliver a high-quality to the customers. We build on top of this concept and introduce an
hourglass of testing. The bottom part of the hourglass is the testing pyramid, its peak is a release deployment and the inverted triangle at the top is post-deployment verifications,
monitoring and logging in production.
Let’s have a closer look, how different parts of the hourglass are implemented in Jira Frontend.
8. Type checking
Photo by Leighann Blackwood on Unsplash
The very first level of the testing pyramid we have is type checking.
Compiled languages get it for granted. As we use javascript, we have to address type checking separately.
9. • Force new code to be covered with types
• Update flow regularly
• Generate library definitions from Typescript
Type checking
We use flow for type checking.
In an ideal case scenario, you want your whole codebase to be thoroughly typed including your dependencies.
Unfortunately, this may not always be possible. Multiple researches prove, that using typed flavours of javascript decreases the number of bugs in production. However, it may still
be a hard sell for management.
Instead of pushing for 100% type coverage, we introduced an eslint rule, that forces every new file added to the codebase to be typed. This allows us to enforce coverage of the
relevant code.
New releases of flow frequently bring breaking changes. When we bump it, we usually add ignore comments to existing violations and create tickets for teams, who own the
corresponding codebase. This forces all the new code to be compliant with the new version of flow and delegates fixing existing issues to the code owners.
For the dependencies we are using flow-typed - it’s a community-driven project to introduce flow type definitions for the libraries that don’t have them. Unfortunately, not all
dependencies have their types available via flow-typed. We implemented a separate tool to convert typescript type definitions, if they are available, into flow. It is not perfect,
because these type systems don’t map one to one well, but it is still helpful.
10. Unit tests
Photo by William Warby on Unsplash
On the second level of the pyramid we find unit tests.
11. • Mock timers and dates
• Avoid snapshot tests
• Fail on console methods
• Zero tolerance for flakes
Unit tests
We are using jest and enzyme for unit tests.
One of the biggest issues with any kind of tests is flakiness. There are several factors that could contribute to it in Javascript.
The most frequent one is javascript timers - it's crucial to mock all the timers. This makes tests more consistent and faster to execute. Another less frequent issue is hardcoded year
or month or the timezone. In this case the test passes until a certain date.
A separate type of tests is snapshot tests. This is a great feature that allows comparing big objects in one go. Unfortunately, it is frequently abused by developers. Whenever the
snapshot test fails they update the snapshots without even checking them. If possible, it is a good idea to test for specific features of the object instead and use snapshot testing as
the last resort only.
We have adjusted jest environment to throw errors every time console methods are used. Based on our experience, whenever console is invoked, this is either debug code that
should not be committed into master, a legitimate error that should be fixed or a suggestion to improve the code. In all three cases there is usually a way to fix the code, which will
improve its quality. In rare cases when console usage is expected, developers can simply mock it.
We have zero-tolerance for flaky unit tests: when the test is flaky, it is removed straight away and the owners are responsible to fix and reintroduce it. Right now this is done
manually, but we explore the means of automated flaky test detection.
12. Integration tests
Photo by Lenny Kuhne on Unsplash
There is a lot of different types of integration tests one can write. During integration tests the goal is to test, how well different parts of the system work together. If a test is no
longer a unit test, but not yet an end-to-end test, it is, probably, an integration test :) .
In Jira Frontend we have three types of integration tests: Cypress tests, visual regression tests and pact tests.
13. • Use storybooks
• Cypress allows to retry
• Notify owners about flaky tests
Cypress tests
Cypress library allows us to use browsers in the tests.
Storybook is another library, that allows us to render frontend components in isolation.
We render high-level component storybooks, mock network requests and interact with them via cypress. The fact we have no network calls and the storybooks are served locally
decreases the latency and makes the tests a lot more stable.
On top of that cypress provides a possibility to retry flaky tests out of the box.
Although it’s great to have retries available, they are masking flaky tests. Which over time will negatively impact the performance of cypress tests. To avoid that, we send slack
notifications to the owners, whose cypress tests have been retried.
14. • Use storybooks
• A separate flakiness test
• Stop css animations and mock date
• Diminishing returns
Visual regression tests
We use Applitools and storybooks for visual regression testing.
Similar to Cypress tests, we mount components in isolation and take their snapshots.
It is crucial to have VR testing flake-free. To achieve that we are running VR tests a couple of times. The first time we perform snapshot comparison against master baseline. This is
the normal VR testing result people are familiar with, where you have the same component before the changes and after the changes compared.
Afterward we run VR tests two more times, this time comparing the results against the branch baseline. As the input is the same, we expect to get no differences. If there are any
visual regressions in this case, it means some of the stories are flaky and the test fails.
To further decrease the amount of flaky VR tests, we are stopping css animations and mocking the date for all the stories we run visual regression testing for.
We introduced VR test recently. As developers opt-in more components into visual regression testing, you get diminishing returns and testing time increases. We decided to
introduce tests for high-level components first and gradually add lower-level stories, whenever it makes sense.
15. • Pact tests
• Verify and upload pacts as a part of deployment
Contract tests
The goal of contract tests is to ensure the backend doesn’t introduce the API changes, that would break consumers. And, at the same time, that consumers do not have
unreasonable expectations for the API providers.
We are using a library called pact for contract testing.
We run a separate service called pact broker. Every consumer of the API uploads its expectations to the pact broker. Whenever a new version of API is published, it is first validated
by pact broker against all the consumers. This allows us to confirm, it does not introduce breaking changes. Once the validation passes, swagger definition for the API gets
uploaded to pact broker. Similarly, whenever consumers upload their expectations, we are checking, that they match the existing API uploaded to the pact broker. This ensures the
contract stays intact.
Something we learnt the hard way is that we must both verify and upload consumer expectations as a part of the deployment.
Originally we were running pact tests as a part of build verification and uploaded consumer expectations as a part of the deployment. There could be up to 20-30 minutes between
the time we run pact tests and we deploy the pacts. If the API producers upload breaking changes during this period, it could produce a deadlock. As a result of this incident we are
running pact tests twice: as a part of the build and as a part of the deployment.
16. End-to-end tests
Photo by Matt Botsford on Unsplash
End-to-end tests are the upper part of the pyramid. This means you want a few of them.
17. • Flakiness
• High maintenance
• Unable to represent production
End-to-end tests
In Jira we used to have a lot of end-to-end tests, and struggled a lot with their flakiness and maintenance burden. Another issue we had with end-to-end tests is feature delivery.
We use feature flags to deliver features to production. Oftentimes feature flags for test environments do not match those of production. As a result, we decided to stop writing end-
to-end tests going forward and rely on post-deployment verification instead.
18. Release
Photo by Kira auf der Heide on Unsplash
The middle point of the testing hourglass is the application release.
No matter how good our tests are, things will break in production. The priorities here are to detect the issues fast, to decrease the impact and mitigate them as soon as possible.
Let’s see, how Jira Frontend release process helps with it.
19. Feature flags
Photo by Sebastiano Piazzi on Unsplash
Feature toggles or feature flags is a technique, that allows developers to modify application behaviour without modifying the code.
The idea is to introduce a new behaviour within an if-statement and pass the argument to the if statement in runtime.
For example we can use a third-party service that will return true or false for a given feature flag. Based on this value we will execute old code or new code.
20. • Feature flag for every feature
• Feature delivery tickets
• Monitor feature flag changes
• Feature flag cleanup
Feature flags
Every feature delivered to production is expected to be hidden behind the feature flag. This means the exact time a new version hits production is usually irrelevant. Developers use
a third-party service to toggle the feature flags. This allows developers to control their feature rollout.
In Jira we expect every feature flag to have a corresponding feature delivery Jira ticket. The ticket contains metadata about the feature, like feature owner, expected rollout
schedule, how the feature is monitored, what could go wrong, etc. This allows any other person unfamiliar with the feature to analyse, if the feature works well.
Which brings me to another point. In an application with hundreds of enabled feature flags it is important to track feature flag status change. In case of an incident, this will allow
to pin-point suspicious feature flag changes. And associated feature delivery ticket will give the context needed to tell, if the feature flag change could be the cause of an incident.
While feature flags are great at derisking feature delivery, in a long run they increase the amount of dead code and make the codebase harder to reason about. It is important to
cleanup feature flags, once they have been successfully rolled out. Once again, we leverage metadata to identify the feature flag owner and ping them about cleaning up the
feature flag, once it was enabled for 100% of production for a long period of time.
22. • Use the app internally
• Anyone is able to halt production rollout
Dogfooding and blockers
Some of the commonly mentioned changes that could not be covered by feature flags include build configuration changes and dependency upgrades. Sometimes developers may
forget to use a feature flag to deliver a new functionality. These changes have significant risks associated with them.
The first customers to receive a new version of Jira Cloud are Atlassian employees. We use Jira for our day-to-day activities and it is usually quite obvious, if critical functionality
doesn’t work well. Once someone notices a bug, they will create a ticket with priority “blocker” in a specific Jira project. This will halt any release promotions to production and will
allow us to avoid any customer impact.
Blockers are bad for continuous delivery - they lead to piling up of the changes. Once there is an active blocker, it becomes a priority to resolve it as fast as possible.
23. Gradual rollouts
Photo by Aliko Sunawang on Unsplash
In order to further mitigate deployment risks we introduced release soaking and canaries.
24. • 1 staging, 3 production environments
• Canaries
• 3 hours to deploy to all environments
• Frequent releases (30 per week)
Gradual rollouts
In the case of Jira Frontend we have one staging environment and three production environments. In every production environment we have canary instances. Canary instances
are Atlassian-owned tenants, which we use for active monitoring.
Whenever we deploy a release, we deploy it to the current environment and the next environment's canary instance.
At first release gets deployed to the staging environment for dogfooding and the next production environment’s canary instance. After soaking for one hour, we automatically
promote release to the first production environment and also to the next environment’s canary instance. In total it takes about 3 hours to complete the rollout of a particular
release to all production environments. We currently release 6 times a day during work days. Frequent releases allow us to keep the amount of changes delivered to production in
every release low. This results in a lower risk of breaking production.
25. A production version myth
Photo by Eric Prouzet on Unsplash
Often times in conversations QAs and SREs talk about a production configuration as something, we can use to write integration and end-to-end tests against.
This sounds reasonable. If we take production set of feature flags and apply it to the latest code in master, we will get the behaviour our customers get. This should allow us to run
all the test suits from test pyramid and assess the quality.
26. • 3+ versions in production
• Inconsistent feature flags
• Different versions of backend
A production version myth
Production, however, is more diverse.
As mentioned, we are releasing up to 6 times a day and every release takes 3 hours to go through different stages. This means at any given moment there are at least 3 different
versions of Jira run by the customers. Usually we observe more versions, because a lot of people don’t reload the page, once they open it. The behaviour of frontend also depends on
the state of feature flags for a given Jira instance. There isn’t a single “production” version of feature flags. In fact, we have hundreds of feature flags released independently by
different developers in parallel.
Jira Frontend is developed and deployed independently of the backend. As result, backend has its own release schedule and its own set of feature flags. So, even if we find two
instances of Jira having identical versions of Jira Frontend and its feature flags, they could still have different backend and behave differently.
27. Active monitoring (PDV)
Photo by Jared Brashier on Unsplash
PDV stands for post-deployment verification. The idea behind active monitoring is to simulate user behaviour and determine potential issues.
28. • Similar to e2e tests
• Run 24/7 in production
• Failure threshold
Active monitoring (PDV)
We have a separate monitoring tool that runs cypress tests on given production instances. These are the same old end-to-end tests we avoid before deployment. Let’s review the
differences.
We use production instances to run the tests. This means we are using the latest feature flag values available for the corresponding environment.
Why do we call it monitoring and not testing? These tests run non-stop, 24/7. This means whenever there’s a feature flag change, which breaks production, we get notified about it
straight away.
The monitoring system allows us to configure the failure rate threshold. For instance, we can issue an alert if the test has been failing constantly for the last ten minutes. This allows
us to significantly decrease the amount of false positives.
On the other hand, this kind of monitoring suffers from the same issues end-to-end tests do: they have high maintenance cost. Due to that we use them sparingly to monitor critical
parts of the application. For example, issue creation functionality in case of Jira.
29. Passive monitoring
Photo by Miłosz Klinowski on Unsplash
The second level of the upper part of the hourglass is monitoring.
30. • Reliable
• Alert priorities
• Runbooks
• War games
Passive monitoring
Monitoring is used to alert developers in case something goes wrong in production.
It is of extreme importance to have reliable monitoring. That is, no, or close to no false positives. This could be achieved by comparing against historical data and monitoring the rate
of change of the parameter instead of the absolute values. In general, it is better to fire an alert five minutes later, than fire a false one. Yet, sometimes this may not be enough. An
example could be a national holiday on your largest market :) . These are some of the things you get better at by iterating.
Once the alerts are configured, they should be prioritised. This will help people better understand the urgency behind the particular alerts.
For every alert there should be a runbook. It provides a detailed list of steps, which will help mitigate the alert. An alert is a stressful situation and having a runbook handy helps a
lot. An excellent idea is to put a link to the corresponding runbook into alert notification message.
War game is an exercise we run, when we come up with possible failure scenarios. The goal is to walk through every scenario and define, what we expect to see in terms of
monitoring and what will be the expected response. War games help identify missing monitoring and runbooks. After coming up with the scenarios it is advisable to pick some of
them for simulation. This will allow to verify, monitoring works as expected and runbooks contain correct steps.
31. Logging
Photo by Dorelys Smits on Unsplash
While monitoring allows us to react fast in case of incidents, it is usually quite limited in terms of the data we can pass as well as data retention time.
32. • Long data retention
• Structured information
• Do not log PII/UGC
• Data ownership
Logging
For these scenarios we use logging. It usually has a lot higher data retention capacity, which is helpful during trend analysis and incident investigation.
It is important to use structured logging. It doesn’t matter which structure to use, as long as you are using it consistently. This allows for easy search, whenever you need to debug a
production issue or asses the results of an experiment.
PII stands for personally identifiable information, UGC means user-generated content. Both are a hard “no” for logging due to potential legal consequences, like GDPR. The crux is
that often developers may not realise they might be logging those. For instance, one of the popular misconceptions we used to have among developers is that a Jira url is safe to
log. It is not, because it could contain a project key, which is UGC. Whenever any of that data gets logged, we have to purge the logs, often times together with valuable
information. In order to mitigate this risk we have implemented a separate proxy service, that preprocesses the logs and redacts known bad patterns. An example could be an
email - all the logs containing an “@“ sign are redacted in Jira.
Data ownership is something we learnt the hard way. Whenever you have a lot of products and teams sharing logging pipeline, at some point you will go over the contract limit. At
that moment, it is critical to know, who produces too many logs. So you can reach out to people and ask them to stop doing it. It also allows you to monitor the trends and reach out
to log owners preemptively.
33. You build it, you run it
Photo by Ethan Hu on Unsplash
At Atlassian we follow the “you build it you run it” approach. This means every team works on supporting the feature all the way through its lifetime, from the design and
implementation to release to production and making sure it works as intended.
34. • SLOs
• 24/7 on-call
• TechOps meetings
You build it, you run it
This includes defining Service Level Objectives, running a 24/7 on-call roster schedule and a regular TechOps meeting.
SLOs are usually defined based on existing monitoring. Whenever SLO is breached or is close to being breached, fixing it becomes the highest priority for the team.
On-call roster schedule assumes, a person on shift will be able to respond to the page within 15 minutes. Once a person is paged, they are expected to mitigate an issue using the
runbooks. If mitigation is not possible, the person on call will escalate the issue higher up, until it gets resolved.
TechOps meetings are usually held at the end of on-call shifts and allow the team to reflect on the shift results. We analyse recent alerts, trends and incidents. TechOps action
items are usually related to improving alerts reliability and addressing negative trends, before it’s too late.