Our very own Bria Grangard will take you through the ways in which you can speed up your testing process. Check it out to learn about test frameworks, automation, parallel testing and more.
Accelerating Your Test Execution Pipeline - SmartBear
Learn how to accelerate your test execution pipeline with test frameworks, automation and parallel testing from our very own Bria Grangard, Product Marketing Manager.
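Parallel testing, one of the techniques the webinar covers, can be sketched in a few lines. This is a hypothetical illustration, not code from the presentation: independent test callables are fanned out across worker threads and their results collected in order.

```python
from concurrent.futures import ThreadPoolExecutor

def run_tests_in_parallel(tests, workers=4):
    """Run independent test callables concurrently; results keep input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda test: test(), tests))
```

The key precondition is that the tests are truly independent (no shared mutable state or ordering assumptions); only then does adding workers shorten the wall-clock time of the suite.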
The presentation on Performance Testing of Big Data Applications was delivered at #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter: Harpreet Kaur Kahai
The presentation on HikeRunner: Load Test Framework was delivered at #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter: Harsh Verma
QASymphony Atlanta Customer User Group Fall 2017 - QASymphony
Thanks to all who came out and were part of our first customer user group! All our expectations for the day were exceeded and we hope you feel the same way.
If you weren't able to make it, here's what you missed:
Judy Chung, Product Manager, gave a summary of recent and upcoming features (site level fields, new UI of TestPad) as well as a sneak preview of our newest product (codename: Automation Hub).
Elise Carmichael, VP of Quality, demoed several best-practice topics, ranging from organizing your qTest repository to reviewing the different automation integration options.
Erika Chestnut, Director of QA at Sterling Talent Solutions, shared her story as a QASymphony customer who recently replaced HP Quality Center with qTest and provided insight into leading change management across her organization.
This presentation gives you the evidence for why unit testing works and a process for how to bring it to your team as soon as possible. There's a reason why the growth of unit testing, and automated unit testing in particular, has exploded over the past few years: it not only improves your code, it's faster than releasing code without tests. You'll learn: what, exactly, a unit test is; the 7 reasons why managers love unit testing; and how to change mindset and processes to start unit testing now.
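To make "what, exactly, is a unit test?" concrete, here is a minimal sketch (the function and its rules are hypothetical, not taken from the talk): a unit test exercises one small function in isolation and asserts on its observable behavior, including its error cases.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    # One behavior, one assertion: 20% off 100.00 is 80.00.
    assert apply_discount(100.0, 20) == 80.0

def test_invalid_percent_rejected():
    # Error handling is behavior too, and gets its own test.
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Because each test touches only one function and no external systems, the whole suite runs in milliseconds, which is what makes running it on every change practical.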
The presentation on Analytics Testing was delivered at #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter: Niyati Shah
Four Practices to Fix Your Top .NET Performance Problems - Andreas Grabner
Inefficient Database Access, Inefficient Pool Usage and Sizing, Bad Synchronization, and Bad Web Page Design - these are the problems that crash .NET apps. Learn how to analyze and fix them.
Introduction to Continuous Delivery (BBWorld/DevCon 2013) - Mike McGarr
This document provides an introduction and overview of continuous delivery. It discusses why releases are difficult, and proposes continuous delivery as an alternative approach where software is always in a releasable state and deployments can occur frequently through automation. It covers principles like automating everything and keeping the build and release process fast and reliable. Specific practices discussed include configuration management, continuous integration, testing, deployment pipelines, and deployment automation using tools like version control systems, build servers, and configuration management tools.
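The deployment pipeline idea described above can be sketched as a fail-fast sequence of stages: a release candidate advances only while every stage passes. This is a hypothetical illustration of the concept, not the talk's own tooling; the stage names and callables are invented.

```python
def run_pipeline(stages):
    """Advance a release candidate stage by stage; stop at the first failure.

    `stages` is an ordered list of (name, check) pairs, where each check
    is a callable returning True on success.
    """
    for name, check in stages:
        if not check():
            return f"stopped at: {name}"
    return "releasable"
```

Real pipelines (commit build, automated acceptance tests, staged deployment) follow the same shape: the ordering encodes "cheap, fast feedback first", and a failure anywhere stops the line so the software stays in a known releasable state.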
Key takeaways
- Continuous “everything” is at the heart of Agile and DevOps
- Continuous activities result in faster delivery and higher quality
- Rapid feedback and practice are essential for confidence in your delivery process
View webinar recording - http://testhuddle.com/resource/continuous-everything/
The document discusses various aspects of automating software testing. It suggests automating the detection of flaky tests, determining the severity of test failures, converting tests to more isolated unit tests, and using usage data to determine what to test next. It emphasizes that while automation can improve testing efficiency, human oversight is still needed, and code reviews serve as the link between automated and manual processes.
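The flaky-test detection it suggests automating boils down to a simple check: rerun a test several times with no code changes, and flag it if the outcomes disagree. A minimal sketch of that idea (hypothetical, not the document's implementation):

```python
def is_flaky(test, runs=20):
    """Rerun a test unchanged; mixed pass/fail outcomes mark it as flaky."""
    outcomes = {bool(test()) for _ in range(runs)}
    return len(outcomes) > 1
```

In practice the rerun count trades detection power against CI time, and a quarantine list usually sits behind this check so known-flaky tests stop blocking the build while they are fixed.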
Development is inherently collaborative. So why aren't you doing code review? This session discusses the importance of collaboration around your source code, the impact code review can have on development teams, and offers guidance on how to get started.
Atlassian Speaker: Matt Quail
Customer Speaker: Patrick Coleman of Dash
Key Takeaways:
* Peer code review explained
* Benefits and approaches to effective code review
Test Driven Development on Android (Kotlin Kenya) - Danny Preussler
This document discusses test-driven development (TDD) and its application to Android development. It begins with an introduction to TDD, outlining its core principles and benefits. It describes the "red, green, refactor" process and emphasizes writing tests before code. It addresses challenges with testing Android code, such as dependencies on framework classes, and recommends strategies like wrapping classes to isolate dependencies. Finally, it outlines the benefits of TDD such as reduced bugs, improved design, and increased productivity over the long run despite initial slower development.
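The "wrap framework classes to isolate dependencies" strategy mentioned above is language-agnostic. The talk's examples are Android/Kotlin; this Python sketch shows the same idea with a hypothetical clock dependency standing in for an untestable framework class:

```python
import time

class Clock:
    """Thin wrapper isolating a system/framework dependency."""
    def now_ms(self):
        return int(time.time() * 1000)

class SessionTimer:
    """Code under test depends on the wrapper, never on the system directly."""
    def __init__(self, clock):
        self._clock = clock
        self._start = clock.now_ms()

    def elapsed_ms(self):
        return self._clock.now_ms() - self._start

class FakeClock(Clock):
    """Test double substituted for the real clock in unit tests."""
    def __init__(self):
        self.t = 0
    def now_ms(self):
        return self.t
```

Because `SessionTimer` only ever talks to the wrapper, a test can advance a `FakeClock` deterministically instead of sleeping, which is exactly what makes the red/green/refactor loop fast enough to sustain.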
Azure DevOps Realtime Work Item Sync: the good, the bad, the ugly! - Lorenzo Barbieri
This document discusses syncing work items between multiple Azure DevOps team projects. It describes a scenario where work items from a master team project are synced one-way to a derived team project. The solution uses web hooks and REST APIs to sync work item creates, updates, and deletes. It also discusses syncing test runs and results between the projects. The document notes both benefits and limitations of this approach, such as the lack of documentation for APIs and issues with syncing additional work item fields and artifacts.
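The web hook plus REST API approach described above typically means translating an incoming work item event into the JSON Patch body that Azure DevOps work item endpoints accept. A simplified sketch (the event shape is reduced to its essentials, and this is an assumption about the talk's approach, not its actual code):

```python
def to_json_patch(event):
    """Translate a (simplified) work item update event into a JSON Patch
    body for updating the mirrored work item via REST.

    `event["resource"]["fields"]` maps field names to {oldValue, newValue}
    pairs, as in workitem.updated service hook payloads.
    """
    fields = event.get("resource", {}).get("fields", {})
    return [
        {"op": "add", "path": f"/fields/{name}", "value": change["newValue"]}
        for name, change in fields.items()
    ]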
Agile Testing in Enterprise: Way to transform - SQA Days 2014 - Andrey Rebrov
This document discusses problems that can occur with traditional testing approaches and how to transition to agile testing practices. It provides two examples of organizations that struggled with long regression cycles, missed estimates, low quality and stress. The root causes are identified as document-based collaboration, lack of testing knowledge by developers, and infrastructure management chaos. Recommendations are made to use Kanban, collaborate on requirements, implement smart metrics, test automation, and a DevOps approach. Specific practices that were implemented include risk management, specification by example, test-driven development, continuous integration, configuration automation, and test automation. The results were increased delivery rates up to 5 times, zero bugs in production, no overtime, and more enjoyable work.
End-to-end performance testing, profiling, and analysis at Redis - Filipe Oliveira
High performance (as measured by sub-millisecond response times for queries) is a key characteristic of the Redis database, and it is one of the main reasons why Redis is the most popular key-value database in the world.
To continue improving performance across all of the different Redis components, we've developed a framework for automatically triggering performance tests, telemetry gathering, profiling, and data visualization upon code commit.
In this short presentation, we describe how this type of automation and "zero-touch" profiling scaled our ability to pursue performance regressions and to find opportunities to improve the efficiency of our code, helping us (as a company) to start shifting from a reactive to a more proactive performance mindset.
The document discusses test management struggles and challenges in the software development life cycle (SDLC). It outlines three main challenges: 1) too much workload for reporting and manually linking test cases to incident tickets, 2) difficulty managing requirements and test cases and utilizing testing activities, and 3) difficulty completing automation tasks on time. It proposes solutions such as reducing reporting time, linking items automatically, improving test case management tools, and prioritizing automation.
How Netflix tests in production to augment more traditional testing methods. This talk covers the Simian Army (Chaos Monkey and friends), code coverage in production, and canary testing.
As companies mature their software development practices, automated acceptance-level testing is becoming more commonplace. In particular, Cucumber and its Gherkin-based equivalents are enjoying widespread use. Through observing and facilitating the adoption and implementation of Cucumber test suites, I have found ways in which the technology has helped teams greatly, but I have also found ways in which it hindered them. I realized that Cucumber and its kin are appropriate tools in fewer situations than the ones in which they are currently employed. In other words, many teams that use such frameworks need to reevaluate whether they are right for the job, and perhaps replace them. I invite all involved in automated acceptance testing to attend as I try to build a compelling case for this notion.
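For readers unfamiliar with the frameworks under discussion, this is the kind of Gherkin specification that Cucumber executes (a hypothetical feature, shown only to ground the argument):

```gherkin
Feature: Checkout discount
  Scenario: Gold customer receives a discount
    Given a gold customer with a cart totaling 100 dollars
    When they check out
    Then a 10 percent discount is applied
```

The speaker's point is that this natural-language layer pays off mainly when non-developers actually read and contribute to it; when only developers touch the suite, the extra step-definition plumbing can cost more than it returns.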
Definition of Done and Product Backlog refinement - Christian Vos
The document discusses product backlog refinement and the definition of done in agile software development. It emphasizes that product backlog refinement is an important meeting to clarify and estimate user stories and work items to have a ready backlog for iteration planning. It also stresses that having a clear definition of done helps improve team quality, transparency for stakeholders, better release planning, and minimizing risks. Regular product backlog refinement coupled with a well-defined definition of done are key practices for achieving agility.
'An Evolution Into Specification By Example' by Adam Knight - TEST Huddle
For the last four years, my colleagues at RainStor and I have been evolving a process for testing a structured data archiving system in an Agile development environment. In this talk I will discuss the evolution of a team from a rudimentary Agile implementation on an unreleased product, to our current process which uses the fundamental elements of Specification By Example to successfully deliver software functionality across 30 different platform/backend configurations to a series of high profile and demanding customers. Last year our company was used as a case study for successful implementation in Gojko Adzic's book on Specification By Example.
My report will discuss the lessons learned during the early implementation and the challenges faced in moving away from a compressed waterfall approach. Through a process of incremental change we have identified and tackled the fundamental issues that undermined the development effort as a team. I’ll describe some of the mistakes made in attempting to implement a more formal process of requirements documentation into an Agile implementation and the benefits we uncovered on moving to a more flexible user story based approach. I’ll also discuss some of the issues around trying to implement user stories in a server system with no GUI and very technical and performance based requirements.
Raising the importance of quality and the status of testing both within the development team and the organisation as a whole has allowed the challenges facing the team to be recognised and respected. The result has been a more collaborative approach taken between developers and testers both through “collaborative specification” of user stories and tackling the problems that impact the delivery of value to the customers. I also plan to discuss how we’ve expanded from documenting acceptance criteria for each user story such that we now document Criteria, Assumptions and Risks for each feature and, rather than a ‘Done/Not Done’ approach how we identify the confidence in each of these categories to measure the confidence we have in each new feature being implemented.
Having the test team as an involved and influential team through the entire development process has also allowed us to implement a number of testability features to help to make the product more testable. I will discuss the benefits of having development understand and prioritise testability issues with some illustrative examples.
I will discuss the challenges and benefits of developing our own metadata driven test harnesses as opposed to an off the shelf solution. I’ll detail how having control over these harnesses has allowed us to work towards a self documenting test system using realistic customer examples as “Automated Specifications” of the RainStor system allowing us to explain current behaviour to Product Management in terms of well understood customer scenarios.
From Sage 500 to 1000 ... Performance Testing myths exposed - Trust IV Ltd
The following presentation is an account of a Sage migration we were involved with. Written by Head of Service Delivery Richard Bishop, it examines the performance issues faced during a migration from Sage 500 to Sage 1000. Richard also looks to dispel 'myths' commonly associated with performance testing.
For more information visit Trust IV online - http://trustiv.co.uk/ or check out our blog - http://blog.trustiv.co.uk/
FishEye opens your source code repository and helps development teams keep tabs on what's going on using a web interface.
Crucible is a peer code review tool that allows teams to review, edit, comment and record outcomes.
Spec By Example, or How to Teach People to Talk to Each Other - Andrey Rebrov
This document introduces an approach called "Spec By Example" to improve communication between developers, QA analysts, and clients. It involves impact mapping to focus on user stories, QA and analyst pairing to create examples to describe requirements, and diverse and merge sessions for the team to collaboratively build out examples. The examples are then optimized by compressing tables and introducing parameters before being linked to automated tests through a behavior driven development approach. This unified process allows requirements, test cases, and code to have a single source of truth, makes it easy to trace work back to business needs, and improves estimation, demos, and reduces rework and issues.
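The "compress tables and introduce parameters" step, followed by linking the examples to automated tests, maps naturally onto a parameterized test over an example table. A hypothetical sketch (the discount rules and table are invented for illustration):

```python
# Example table, compressed to (customer type, order total, expected discount %).
EXAMPLES = [
    ("standard", 100, 0),
    ("gold", 100, 10),
    ("gold", 1000, 15),
]

def discount_for(customer_type, total):
    """Hypothetical implementation the examples specify."""
    if customer_type == "gold":
        return 15 if total >= 1000 else 10
    return 0

def test_examples():
    # One loop drives all examples, so the table stays the single
    # source of truth shared by requirements, tests, and code.
    for customer_type, total, expected in EXAMPLES:
        assert discount_for(customer_type, total) == expected
```

Keeping the table as data rather than hard-coded assertions is what lets analysts, QA, and developers edit the same artifact during the merge sessions the approach describes.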
DevOpsGuys Performance Testing with APM Tools workshop - DevOpsGroup
A set of static slides that accompanied a "Live Demo" of using APM tools (AppDynamics) during load testing to isolate a performance issue, fix it, redeploy, and compare the improvement. This presentation accompanies the workshop held at the NCC Group Web Performance event in March 2014. Videos should be available on the NCC Group Community website - http://community.nccgroup-webperf.com/
ATAGTR2017 Machine Learning telepathy for Shift Right approach of testing - Agile Testing Alliance
The presentation on Machine Learning telepathy for Shift Right approach of testing was delivered at #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter: Santhosh GS
This document discusses Session Based Test Management (SBTM) as a way to manage exploratory testing in an agile context. SBTM involves running tests in sessions of fixed length with goals and strategies. Key aspects of SBTM include planning test sessions in sprints, tracking session charters and bugs on a scrum board, reporting on the health of the product daily, and using a dashboard to visualize test information. Benefits of SBTM include improved visibility of testing velocity, better communication, and bringing testing "out of the dark."
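The session records SBTM produces lend themselves to a very small data model. A hypothetical sketch of what a session and the daily dashboard roll-up might look like (the field names and summary shape are illustrative, not from the document):

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    charter: str                     # the session's mission, e.g. "Explore login"
    minutes: int                     # fixed-length timebox
    bugs: list = field(default_factory=list)

def daily_dashboard(sessions):
    """Roll completed sessions up into a daily product-health summary."""
    return {
        "sessions": len(sessions),
        "hours": round(sum(s.minutes for s in sessions) / 60, 1),
        "bugs": sum(len(s.bugs) for s in sessions),
    }
```

Even this minimal structure gives the visibility the document highlights: charters show what was explored, the timeboxes show testing velocity, and the bug counts tie findings back to specific sessions.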
The presentation on What Lies Beneath Robotics Process Automation was delivered at #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter: Aditya Garg & Brijesh Deb
Successfully Implementing BDD in an Agile World - SmartBear
At Agile Testing Days 2018, Bria Grangard, Product Marketing Manager at SmartBear, presented on shift left, behavior driven development (BDD), and how they can work together to improve your software development lifecycle.
Advanced A/B Testing at Wix - Aviran Mordo and Sagy Rozman, Wix.com - DevOpsDays Tel Aviv
While A/B testing is a well-known methodology for conducting experiments in production, doing it at large scale presents many challenges at both the organizational and operational levels.
At Wix we have been practicing continuous delivery for over 4 years. Conducting A/B tests and writing feature toggles is at the core of our development process. However, doing so at large scale, with over 1000 experiments every month, holds many challenges and affects everyone in the company: developers, product managers, QA, marketing and management.
In this talk we will explain the lifecycle of an experiment, some of the challenges we faced, and the effect on our development process.
* How an experiment begins its life
* How an experiment is defined
* How do you let non-technical people control the experiment while preventing mistakes
* How an experiment goes live; the lifecycle of an experiment from beginning to end
* What is the difference between client and server experiments
* How do you keep the user experience consistent and avoid confusing users
* How does it affect the development process
* How can QA test an environment that changes every 9 minutes
* How can support help users when every user may be part of a different experiment
* How can we find if an experiment is causing errors when you have millions of permutations [at least 2^(number of active experiments)]
* What are the effects of always having multiple experiments on system architecture
* What are the development patterns when working with AB test
At Wix we have developed our 3rd-generation experiment system, called PETRI, which is (will be) open sourced, and which helps us maintain some order in a chaotic system that keeps changing. We will also explain how PETRI works, and what the patterns are for conducting experiments with minimal effect on performance and user experience.
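The permutation count quoted in the bullets above follows directly from treating each active experiment as an independent on/off switch. A quick illustration of the arithmetic:

```python
def experiment_permutations(active_experiments):
    """Lower bound on distinct user states: each on/off experiment doubles
    the number of possible combinations a user can land in."""
    return 2 ** active_experiments
```

With just 20 concurrent on/off experiments there are already over a million distinct combinations, which is why isolating an error-causing experiment requires per-experiment telemetry rather than testing whole permutations.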
Key takeaways
- Continuous “everything” is at the heart of agile and devops
- Continuous activities result in faster delivery and higher quality
- Rapid feedback and practice are essential for confidence in your delivery process
View webinar recording - http://testhuddle.com/resource/continuous-everything/
The document discusses various aspects of automating software testing. It suggests automating the detection of flaky tests, determining the severity of test failures, converting tests to more isolated unit tests, and using usage data to determine what to test next. It emphasizes that while automation can improve testing efficiency, human oversight is still needed, and code reviews serve as the link between automated and manual processes.
Development is inherently collaborative. So why aren't you doing code review? This session discusses the importance of collaboration around your source code, the impact code review can have on development teams, and offers guidance on how to get started.
Atlassian Speaker: Matt Quail
Customer Speaker: Patrick Coleman of Dash
Key Takeaways:
* Peer code review explained
* Benefits and approaches to effective code review
Test Driven Development on Android (Kotlin Kenya)Danny Preussler
This document discusses test-driven development (TDD) and its application to Android development. It begins with an introduction to TDD, outlining its core principles and benefits. It describes the "red, green, refactor" process and emphasizes writing tests before code. It addresses challenges with testing Android code, such as dependencies on framework classes, and recommends strategies like wrapping classes to isolate dependencies. Finally, it outlines the benefits of TDD such as reduced bugs, improved design, and increased productivity over the long run despite initial slower development.
Azure DevOps Realtime Work Item Sync: the good, the bad, the ugly!Lorenzo Barbieri
This document discusses syncing work items between multiple Azure DevOps team projects. It describes a scenario where work items from a master team project are synced one-way to a derived team project. The solution uses web hooks and REST APIs to sync work item creates, updates, and deletes. It also discusses syncing test runs and results between the projects. The document notes both benefits and limitations of this approach, such as the lack of documentation for APIs and issues with syncing additional work item fields and artifacts.
Agile Testing in Enterprise: Way to transform - SQA Days 2014Andrey Rebrov
This document discusses problems that can occur with traditional testing approaches and how to transition to agile testing practices. It provides two examples of organizations that struggled with long regression cycles, missed estimates, low quality and stress. The root causes are identified as document-based collaboration, lack of testing knowledge by developers, and infrastructure management chaos. Recommendations are made to use Kanban, collaborate on requirements, implement smart metrics, test automation, and a DevOps approach. Specific practices that were implemented include risk management, specification by example, test-driven development, continuous integration, configuration automation, and test automation. The results were increased delivery rates up to 5 times, zero bugs in production, no overtime, and more enjoyable work.
End-to-end performance testing, profiling, and analysis at RedisFilipe Oliveira
High-performance (as measured by sub-millisecond response time for queries) is a key characteristic of the Redis database, and it is one of the main reasons why Redis is the most popular key-value database in the world.
To continue improving performance across all of the different Redis components, we've developed a framework for automatically triggering performance tests, telemetry gathering, profiling, and data visualization upon code commit.
In this short presentation, we describe how this type of automation and "zero-touch" profiling scaled our ability to pursue performance regressions and to find opportunities to improve the efficiency of our code, helping us (as a company) to start shifting from a reactive to a more proactive performance mindset.
The document discusses test management struggles and challenges in the software development life cycle (SDLC). It outlines three main challenges: 1) too much workload for reporting and manually linking test cases to incident tickets, 2) difficulty managing requirements and test cases and utilizing testing activities, and 3) difficulty completing automation tasks on time. It proposes solutions such as reducing reporting time, linking items automatically, improving test case management tools, and prioritizing automation.
How Netflix tests in production to augment more traditional testing methods. This talk covers the Simian Army (Chaos Monkey & friends, code coverage in production, and canary testing.
As companies mature their software development practices, automated acceptance-level testing is becoming more commonplace. In particular, Cucumber and its Gherkin-based equivalents are enjoying widespread use. Through observing and facilitating the adoption and implementation of Cucumber test suites, I have found ways in which the technology has helped teams greatly, but I have also found ways in which it hindered them. I realized that Cucumber and its kin are appropriate tools in fewer situations than the ones in which they are currently employed. In other words, many teams that use such frameworks need to reevaluate whether they are right for the job, and perhaps replace them. I invite all involved in automated acceptance testing to attend as I try to build a compelling case for this notion.
Definition of Done and Product Backlog refinementChristian Vos
The document discusses product backlog refinement and the definition of done in agile software development. It emphasizes that product backlog refinement is an important meeting to clarify and estimate user stories and work items to have a ready backlog for iteration planning. It also stresses that having a clear definition of done helps improve team quality, transparency for stakeholders, better release planning, and minimizing risks. Regular product backlog refinement coupled with a well-defined definition of done are key practices for achieving agility.
'An Evolution Into Specification By Example' by Adam KnightTEST Huddle
For the last four years myself and my colleagues at RainStor have been evolving a process for testing a structure data archiving system in an Agile development environment. In this talk I will discuss the evolution of a team from a rudimentary Agile implementation on an unreleased product, to our current process which uses the fundamental elements of Specification By Example to successfully deliver software functionality across 30 different platform/backend configurations to a series of high profile and demanding customers. Last year our company was used as a case study for successful implementation in Gojko Adzic's book on Specification By Example.
My report will discuss the lessons learned during the early implementation and the challenges faced in moving away from a compressed waterfall approach. Through a process of incremental change we have identified and tackled the fundamental issues that undermined the development effort as a team. I’ll describe some of the mistakes made in attempting to implement a more formal process of requirements documentation into an Agile implementation and the benefits we uncovered on moving to a more flexible user story based approach. I’ll also discuss some of the issues around trying to implement user stories in a server system with no GUI and very technical and performance based requirements.
Raising the importance of quality and the status of testing both within the development team and the organisation as a whole has allowed the challenges facing the team to be recognised and respected. The result has been a more collaborative approach taken between developers and testers both through “collaborative specification” of user stories and tackling the problems that impact the delivery of value to the customers. I also plan to discuss how we’ve expanded from documenting acceptance criteria for each user story such that we now document Criteria, Assumptions and Risks for each feature and, rather than a ‘Done/Not Done’ approach how we identify the confidence in each of these categories to measure the confidence we have in each new feature being implemented.
Having the test team as an involved and influential team through the entire development process has also allowed us to implement a number of testability features to help to make the product more testable. I will discuss the benefits of having development understand and prioritise testability issues with some illustrative examples.
I will discuss the challenges and benefits of developing our own metadata driven test harnesses as opposed to an off the shelf solution. I’ll detail how having control over these harnesses has allowed us to work towards a self documenting test system using realistic customer examples as “Automated Specifications” of the RainStor system allowing us to explain current behaviour to Product Management in terms of well understood customer scenarios.
From Sage 500 to 1000 ... Performance Testing myths exposedTrust IV Ltd
The following presentation is an account of Sage migration we were involved with. Written by Head of Service Delivery, Richard Bishop, the presentation looks at the performance issues faced during a migration of Sage 500 to Sage 1000. Richard also looks to dispel ‘myths’ that are commonly associated with performance testing.
For more information visit Trust IV online - http://trustiv.co.uk/ or check out our blog - http://blog.trustiv.co.uk/
FishEye opens your source code repository and helps development teams keep tabs on what's going on using a web interface.
Crucible is a peer code review tool that allows teams to review, edit, comment and record outcomes.
Spec By Example or How to teach people talk to each otherAndrey Rebrov
This document introduces an approach called "Spec By Example" to improve communication between developers, QA analysts, and clients. It involves impact mapping to focus on user stories, QA and analyst pairing to create examples to describe requirements, and diverse and merge sessions for the team to collaboratively build out examples. The examples are then optimized by compressing tables and introducing parameters before being linked to automated tests through a behavior driven development approach. This unified process allows requirements, test cases, and code to have a single source of truth, makes it easy to trace work back to business needs, and improves estimation, demos, and reduces rework and issues.
DevOpsGuys Performance Testing with APM Tools workshopDevOpsGroup
A set of static slides that accompanied a "Live Demo" of using APM tools (AppDynamics) during load testing to isolate a performance issue, fix it, re-deploy, and compare the improvement. This presentation accompanies the workshop held at the NCC Group Web Performance event in March 2014. Videos should be available on the NCC Group Community website - http://community.nccgroup-webperf.com/
ATAGTR2017 Machine Learning telepathy for Shift Right approach of testingAgile Testing Alliance
The presentation on Machine Learning telepathy for Shift Right approach of testing was done during #ATAGTR2017, one of the largest global testing conference. All copyright belongs to the author.
Author and presenter : Santhosh GS
This document discusses Session Based Test Management (SBTM) as a way to manage exploratory testing in an agile context. SBTM involves running tests in sessions of fixed length with goals and strategies. Key aspects of SBTM include planning test sessions in sprints, tracking session charters and bugs on a scrum board, reporting on the health of the product daily, and using a dashboard to visualize test information. Benefits of SBTM include improved visibility of testing velocity, better communication, and bringing testing "out of the dark."
The presentation on What Lies Beneath Robotics Process Automation was done during #ATAGTR2017, one of the largest global testing conference. All copyright belongs to the author.
Author and presenter: Aditya Garg & Brijesh Deb
Successfully Implementing BDD in an Agile WorldSmartBear
At Agile Testing Days 2018, Bria Grangard, Product Marketing Manager at SmartBear, presented on shift left, behavior driven development (BDD), and how they can work together to improve your software development lifecycle.
Advanced A/B Testing at Wix - Aviran Mordo and Sagy Rozman, Wix.comDevOpsDays Tel Aviv
While A/B testing is a well-known and familiar methodology for conducting experiments in production, doing so at a large scale brings many challenges at both the organizational and operational level.
At Wix we have been practicing continuous delivery for over four years, and conducting A/B tests and writing feature toggles is at the core of our development process. Doing so at large scale, however, with over 1,000 experiments every month, holds many challenges and affects everyone in the company: developers, product managers, QA, marketing, and management.
In this talk we will explain the lifecycle of an experiment, some of the challenges we faced, and the effect on our development process.
* How an experiment begins its life
* How an experiment is defined
* How to let non-technical people control an experiment while preventing mistakes
* How an experiment goes live, and its lifecycle from beginning to end
* The difference between client-side and server-side experiments
* How to keep the user experience consistent and avoid confusing users
* How experiments affect the development process
* How QA can test an environment that changes every 9 minutes
* How support can help users when every user may be part of a different experiment
* How to find out whether an experiment is causing errors when there are millions of permutations [at least 2^(number of active experiments)]
* The effects of always having multiple experiments running on system architecture
* The development patterns for working with A/B tests
At Wix we have developed our third-generation experiment system, PETRI, which is (will be) open sourced and helps us maintain some order in a chaotic system that keeps changing. We will also explain how PETRI works and the patterns for conducting experiments that have minimal effect on performance and user experience.
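One common pattern in such systems (shown here as a generic sketch, not necessarily how PETRI itself works) is to make experiment assignment deterministic by hashing the user and experiment identifiers, which keeps each user's experience stable across sessions and keeps experiments independent of one another:

```python
import hashlib

def variant(user_id: str, experiment: str, buckets=("A", "B")) -> str:
    """Deterministically assign a user to an experiment variant.

    Hashing (experiment, user_id) means no assignment table is needed:
    the same inputs always map to the same bucket.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# The same user always lands in the same bucket for a given experiment,
# so the UI never flips between variants mid-session.
assert variant("user-42", "new-editor") == variant("user-42", "new-editor")
```

Because assignment is a pure function, it also costs nothing at runtime, which matters when hundreds of experiments are active on every request.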
Curiosity and Xray present - In sprint testing: Aligning tests and teams to r...Curiosity Software Ireland
This webinar was co-hosted by Xray and Curiosity Software on 18th May 2021. Watch the on demand recording here: https://opentestingplatform.curiositysoftware.ie/xray-in-sprint-testing-webinar
In-sprint testing must tackle three pressing problems:
1. You must know exactly what needs testing before each release. There’s no time to test everything.
2. You need up-to-date and aligned test assets, including test cases, data, scripts and CI/CD artefacts.
3. Test teams must know what needs testing, when, and have on demand access to environments, tests and data.
These problems are near-impossible to crack at organisations that struggle with application complexity, rapid system change, and overly manual testing processes. Challenges include:
1. Test creation time. Manually creating test cases, data and scripts is slow and unsystematic, resulting in low coverage tests.
2. Slow test maintenance. Changes break tests, with little time in sprints to check test cases, scripts, and data.
3. Knowing when testing is “done”. There is little measurability or peace of mind when systems “go live”.
This webinar will set out how maintaining a “digital twin” of the system under test prioritises testing time AND maintains rigorous tests in-sprint. You will see how:
1. Intuitive flowcharts generate optimised test cases, scripts, and data.
2. Feeding changes into the models maintains up-to-date tests.
3. Pushing the tests to agile test management tooling then makes sure that teams know which tests to run, when, with full traceability and a measurable definition of ‘done’.
James Walker, Curiosity’s Director of Technology, and Sérgio Freire, Head of Product Evangelism for Xray, will set out this cutting-edge approach to in-sprint testing. Günther-Matthias Bär, Test Automation Engineer at Sogeti, will then draw on implementation experience to discuss the value of the proposed approach.
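The flowchart-to-test-case idea can be illustrated with a toy model: enumerate the paths through a small directed flow graph, each complete path becoming a test case. The graph below is invented for illustration:

```python
# Toy flow model of a system under test: node -> successor nodes.
# "End" terminates a path.
FLOW = {
    "Login": ["Search", "End"],
    "Search": ["AddToCart", "End"],
    "AddToCart": ["Checkout"],
    "Checkout": ["End"],
}

def paths(node="Login", trail=None):
    """Enumerate every path through the model; each one is a test case."""
    trail = (trail or []) + [node]
    if node == "End":
        return [trail]
    return [p for nxt in FLOW[node] for p in paths(nxt, trail)]

for case in paths():
    print(" -> ".join(case))
```

When the model changes, regenerating the paths regenerates the test cases, which is the maintenance property the "digital twin" approach relies on. Real model-based tools add coverage criteria (e.g. all-edges rather than all-paths) to keep the case count manageable.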
SOASTA Webinar: Process Compression For Mobile App Dev 120612SOASTA
1. The webinar discussed continuous integration and automation practices for mobile development and testing. It focused on how to automate testing to keep up with the pace and scale of mobile development.
2. Speakers from Atlassian, Zephyr, and SOASTA discussed how tools like Bamboo and CloudTest can help automate builds, testing, and monitoring to fail faster and achieve continuous delivery of mobile apps.
3. The webinar emphasized that manual testing cannot keep up with the pace of mobile development and highlighted principles of continuous integration like building and testing code frequently and leveraging automation.
Arthur Hicken, Chief Evangelist of Parasoft, @ PSQT 2016 discusses:
• What the shift from automated to continuous means
• How disruption requires changes to how we test software
• Addressing gaps between Dev and Ops
• Technologies that enable Continuous Testing
Continuous delivery requires more than DevOps. It also requires one to think differently about product design, development and testing, and the overall structure of the organization. This presentation will help you understand what it takes, and why you would want to deliver value to your customers multiple times each day. #CIC
Jeff "Cheezy" Morgan & Ardita Karaj
Testing for Logic App Solutions | Integration MondayBizTalk360
In this Integration Monday session, Mike discussed the challenges and approaches for some of the common testing scenarios when delivering integration solutions with Microsoft Azure.
Meetup: "QA: In Which Directions Can a Tester Find Their Place?"GoIT
On 19.12.2014, the creative space "Chasopys" hosted another meetup from the GoIT project, devoted to the "eternal questions". Our instructors and mentors covered the following:
• The types of QA and the specifics of working in each of these directions;
• The supporting skills a tester should have;
• What's new in the QA world.
Our speakers:
Nikolay Kovsh - QA Engineer at Ciklum, who successfully moved into IT from marketing. He will talk about why testers need to be able to program.
Alla Penalba - QA Lead at invisibleCRM; previously worked at PIKSUS and spent 4 years in Belgium working as a Mobile QA Engineer.
Marina Shevchenko - Mobile QA Engineer at Ciklum, with experience testing web, desktop and mobile applications. She will talk about the specifics of testing mobile apps.
Alexander Maidanyuk - Head of Quality Assurance Solution at Ciklum. He has held the roles of QA Lead, Manager, QA Consultant and Trainer. Expert and judge of the QA section of the UA Web Challenge championship, and co-founder of the Kyiv testers' club QA Club.
BTD2015 - Your Place In DevTOps is Finding Solutions - Not Just Bugs!Andreas Grabner
This is about leveling-up and REVOLUTIONIZING Testing as part of your Agile/DevOps Transformation.
You can contribute more than testing functionality. You need to level up your skill set by understanding the apps you are testing: the number of images, JS files and SQL statements, connection pool utilization, and garbage collection activity all have to be added to your portfolio.
Check these metrics when you do your functional testing and report regressions to your engineers even when the functionality is still good: you have just uncovered an architectural regression that will lead to a scalability and performance problem.
Finding these problems early eliminates a lot of wasted and unplanned time later in the lifecycle. That is your contribution to delivering software faster with better quality.
5 Steps to Jump Start Your Test AutomationSauce Labs
With the acceleration of software creation and delivery, test activities must align with the new tempo. Developers need immediate feedback to be efficient and to correct defects as they are introduced. The path to achieving this vision is to build a reliable and scalable continuous test solution.
All beginnings are hard. Having a well-defined plan outlining the approach for your organization to create test automation is key to ensure long term success. Join Diego Molina, Senior Software Engineer at Sauce Labs as he discusses:
The importance of setting up the team correctly from the start
Choosing the right Testing Framework for your organization
Identifying the right scenarios and workflows to test
Learning to avoid common pitfalls at the beginning of the transformation journey
The document discusses how agile practices can help testing teams work more efficiently. It promotes collaboration between teams using tools, shifting testing left to catch bugs earlier, automating tests to improve quality and speed, and monitoring applications in production. The key takeaways are to start small by experimenting with agile in some teams, measure performance, expand practices iteratively, and address any barriers to cross-team collaboration.
Load testing with Visual Studio and Azure - Andrew SiemerAndrew Siemer
In this presentation we will look at what web performance testing is and the various types of testing that can be performed. We will then dig into Visual Studio 2013 Ultimate to see that the Visual Studio platform is now a real contender in performance testing automation. And we will see how the Visual Studio integration with Visual Studio Online and Azure can take your web performance tests and spin up impressive load tests in a truly useful way.
Nitisak Mooltreesri from DST Worldwide Services spoke about automated load testing for continuous delivery. He discussed how load testing is important to find bugs under high user loads. His company performs daily automated performance tests using simulation approaches to test incomplete systems cheaply and reliably. This helps reduce performance issues by providing early feedback and catching problems before deployment.
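The core of such an automated load test can be sketched in a few lines: fire requests concurrently and report latency statistics. The service call below is simulated with a sleep so the sketch runs anywhere; in practice it would be a real request against the system under test:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Stand-in for a real request; the sleep simulates service latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # replace with an actual HTTP/API call in practice
    return time.perf_counter() - start

def run_load(users=20, requests_per_user=5):
    """Fire concurrent requests and summarize observed latencies."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: call_service(),
                                  range(users * requests_per_user)))
    return {
        "requests": len(latencies),
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
    }

print(run_load())
```

Running a script like this in a nightly pipeline, and failing the build when the percentiles regress beyond a threshold, is the kind of early feedback the talk describes.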
Using Crowdsourced Testing to Turbocharge your Development TeamRainforest QA
Developer-owned QA testing is becoming more common as many organizations shift to leaner development processes and eschew traditional QA strategies.
This presentation discusses how crowdsourced testing can help teams offload repetitive testing work and streamline Agile testing processes. It also demonstrates how Rainforest Developer Experience (DevX) allows developers to increase productivity and minimize testing time with workflow-native crowdsourced testing.
Interested in seeing how Rainforest has helped companies save dev time and QA spend? Check out these success stories!
Guru: http://hubs.ly/H06lwC60
America's Test Kitchen: http://hubs.ly/H06lCX50
Despite the belief that a shared context and collaboration drive quality, software testers and quality professionals too often struggle to find their place within today's integrated agile teams. This session is a practitioner’s view of testing and testing practices within an iterative/incremental development environment. We will begin with a discussion of some of the challenges of testing within an agile environment and delve into the guiding principles of Agile Testing and key enabling practices. Agile Testing necessitates a change in mindset, and it is as much, if not more, about behavior as it is about skills and tooling, all of which will be explored.
Software Quality and Test Strategies for Ruby and Rails ApplicationsBhavin Javia
This document provides an overview of software quality and test strategies for Ruby and Rails applications. It discusses the importance of quality, managing quality through setting goals and measuring metrics. It outlines a test strategy template and covers test types, tools, and approaches for unit, integration, acceptance and other types of tests in Ruby/Rails. It also discusses test data management, defect management, and the Ruby/Rails testing ecosystem including various testing frameworks and quality/metrics tools.
In this Quality Assurance Training session, you will learn about Automation Tools Overview. Topic covered in this session are:
• SQL Basic Operators and Functions
• Software Testing Tools – Overview
• Advantages of Automation
• Disadvantages of Automation
• Grouping of Automation Tools
• Functional Tools
• Source Code Testing Tools
• Performance Tools
• Test Management Tools
• Security Testing Tools
For more information, about this quality assurance training, visit this link: https://www.mindsmapped.com/courses/quality-assurance/software-testing-training-with-hands-on-project-on-e-commerce-application/
In this session you will learn:
Software Testing Tools – Overview
Advantages of Automation
Disadvantages of Automation
Grouping of Automation Tools
Functional Tools
Source Code Testing Tools
Performance Tools
Test Management Tools
Security Testing Tools
For more information: https://www.mindsmapped.com/courses/quality-assurance/qa-software-testing-training-for-beginners/
Similar to Accelerating Your Test Execution Pipeline (20)
With special guests Ron Ratovsky and Darrel Miller from the OpenAPI Initiative's Technical Steering Committee, this SmartBear webinar session covered the history of Swagger and the OpenAPI Specification, and all the latest changes in OAS 3.1.
IATA Open Air: How API Standardization Enables Innovation in the Airline Indu...SmartBear
The necessity of surviving during the economic upheaval of a global pandemic is fueling innovation in the airline industry. A new age of aviation is being built on digital technology and APIs to improve data sharing, reduce costs, and optimize revenue for carriers.
API standards are the key to the success of any digital initiative, enabling interoperability between independent parties. The International Air Transport Association (IATA), the industry trade association responsible for developing global standards for airlines, is utilizing SwaggerHub, the API design and documentation platform, to help bring these best practices to life.
In this webinar session, we explore:
How IATA’s Open Air initiative allows the industry to open up its digital capabilities for innovation
Open Air standard as the common technical approach to describing API definitions
Best practices for scaling API design and standardization across the industry
A live API design demonstration with SwaggerHub and IATA
The State of API 2020 Webinar – Exploring Trends, Tools & Takeaways to Drive ...SmartBear
Since 2016, SmartBear has been surveying the State of APIs to better understand the trends and technologies associated with this essential digital building block. We have just completed the State of API 2020 survey and will be sharing the research findings during this live webinar.
We will be sharing research from over 2,000 respondents on how organizations are bringing APIs to market in 2020, what tools they are using, how they view certain trends, and where they see the market going.
How LISI Automotive Accelerated Application Delivery with SwaggerHubSmartBear
In this SmartBear webinar, Sebastien Gadot presents on how his team at LISI Automotive got started with the open source Swagger tools and moved to SwaggerHub to speed up their application delivery.
Standardising APIs: Powering the Platform Economy in Financial ServicesSmartBear
In this webinar session, SmartBear and SWIFT discuss the importance of API standardisation and the role it plays in the new platform economy in the financial services industry.
Getting Started with API Standardization in SwaggerHubSmartBear
This document provides an overview of a presentation on standardizing API documentation using SwaggerHub. The agenda includes an introduction to SmartBear and their tools, why standardization is critical for API quality, defining quality for teams, challenges of OpenAPI Specification development at scale, and how SwaggerHub can help address those challenges. It discusses how SwaggerHub provides a central hub for designing, documenting, and collaborating on APIs to improve efficiency, quality and reduce defects.
Adopting a Design-First Approach to API Development with SwaggerHubSmartBear
This document discusses adopting a design-first approach to API development using SwaggerHub. It outlines the risks of a code-first approach, such as inconsistencies across teams and building the wrong thing. A design-first approach encourages early discussion with stakeholders. SwaggerHub helps with this approach by providing tools for documentation, collaboration, API modeling and prototyping, virtualization, and code generation to generate client SDKs and server stubs from the API design.
Standardizing APIs Across Your Organization with Swagger and OAS | A SmartBea...SmartBear
In this webinar session, we showed why API standardization is important and how your organization can use SwaggerHub to overcome the most common challenges with making the move to the OpenAPI Specification.
The document discusses effective API lifecycle management using the OpenAPI Specification (OAS). It describes the stages of an API lifecycle including design, development, testing, deployment and versioning. It identifies challenges around collaboration, documentation, security and testing. It recommends using OAS to drive quality at all stages and details how OAS can help with versioning, automation, change management, extensibility, reusability, compatibility and verifiability. The key takeaways are to not reinvent the wheel, prepare for changes, have a process, and put quality at the center of the API lifecycle.
The API Lifecycle Series: Exploring Design-First and Code-First Approaches to...SmartBear
This document discusses design-first and code-first approaches to API development. It explores how existing services can leverage the OpenAPI Specification (OAS) and the benefits of each approach. Design-first allows for a single source of truth across design, development, testing and documentation. It enables early feedback and iteration. Code-first treats OAS as a byproduct of development and enables existing practices, but requires more customization. The document provides examples of how teams have implemented both approaches using SmartBear tools.
The API Lifecycle Series: Evolving API Development and Testing from Open Sour...SmartBear
This document summarizes an upcoming webinar on evolving API development and testing. The webinar will discuss:
- Getting started with the OpenAPI Specification (OAS) and functional API testing using open source tools
- The challenges of OAS development at scale including having specs in multiple places, collaboration needs, and integrating development into delivery pipelines
- When open source tools are no longer sufficient and it's time to move to pro tools, such as when dynamic test data is needed, testing multiple environments, and including tests in CI/CD pipelines
Artificial intelligence for faster and smarter software testing - Galway Mee...SmartBear
How Artificial Intelligence (AI) is changing software quality
Hybrid test automation framework to test identified and unidentified UI properties
Demonstration of a use case with AI in UI test automation for any skill level
Successfully Implementing BDD in an Agile WorldSmartBear
This document provides an overview of successfully implementing Behavior Driven Development (BDD) in an agile environment. It discusses shifting testing left by involving testers earlier in the development process. The document then covers the key aspects of a BDD process including discovery workshops to understand requirements, writing examples and scenarios in a Given/When/Then format, automating scenarios, and using continuous integration to ensure tests always pass. It emphasizes that adopting BDD requires changes to people, processes, and tools to facilitate collaboration between all teams.
The Best Kept Secrets of Code Review | SmartBear WebinarSmartBear
In this webinar session, we share a comprehensive list of peer code review best practices, distilled down years of SmartBear research and case studies. At the end, we shared how our code and document review tool, Collaborator, can help teams put these tactics into practice.
How Capital One Scaled API Design to Deliver New Products FasterSmartBear
This document outlines an approach for scaling API development across a large enterprise financial institution. It proposes establishing a Platform Services Center of Excellence to define API governance and design standards. The COE would provide training, mentorship, and reviews to coaches in each line of business to ensure APIs adhere to standards and are high quality. This centralized model aims to scale API development while maintaining quality, enabling faster delivery of new products.
This document discusses using TestComplete to automate testing in non-GUI environments by leveraging PuTTY to interact with Linux servers. The author was able to scale their testing efforts, ensure accuracy, and reduce testing time from 3 days to 1.5 days by developing a common PuTTY library and test framework in TestComplete. Key aspects included launching PuTTY, sending commands, validating results, and logging detailed information for troubleshooting. This allowed complicated testing to be completed more quickly and fit within a DevOps pipeline.
This document discusses script extensions in TestComplete, which allow users to extend the functionality of the software. Script extensions can create custom record/design time actions, test operations, results operations, and script objects. Script objects are useful for encapsulating code into reusable libraries. Extensions help solve problems like maintaining modularized code across projects and providing building blocks for rapid test development. The document demonstrates how to create a script object extension.
BDD can help save Agile by facilitating better collaboration through conversations, concrete examples, and test-driven development. BDD practices like discovering and automating desired system behaviors through examples and tests improve communication between team members. This leads to a shared understanding and living documentation, helping teams work together more effectively. Automated tests also allow for safer refactoring and help teams stay agile by ensuring code quality is maintained.
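A BDD scenario typically reads as Given/When/Then and maps directly onto an automated test. A minimal sketch, with an invented account example, showing how the scenario's steps become the test's structure:

```python
class Account:
    """Toy domain object for the example scenario."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When 30 is withdrawn
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70
```

BDD frameworks such as Cucumber or pytest-bdd let the Given/When/Then text live in a plain-language feature file bound to step functions, but the underlying shape of the test is the same.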
API Automation and TDD to Implement Master Data Survivorship RulesSmartBear
The document discusses implementing and testing data survivorship rules (DSRs) for master data using API automation and test-driven development (TDD). It notes the challenges of testing the complex scenarios involving DSRs across different fields, field types, and data operations. The solution involved creating a testing matrix and using the ReadyAPI tool to drive development and prevent defects through an automated test-first approach. This allowed the DSR project to be completed in the shortest sprint yet with no functional issues reported after release.
DDS Security Version 1.2 was adopted in 2024. This revision strengthens support for long-running systems, adding new cryptographic algorithms, certificate revocation, and hardening against DoS attacks.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover the latest innovations from Neo4j, including the latest cloud integrations and product improvements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI AppGoogle
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to build high-converting sales video scripts, ad copies, trending articles, blogs, etc. 100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native, distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot from us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and the challenges that came with it), and established platform and enablement teams.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
WhatsApp offers simple, reliable, and private messaging and calling services for free worldwide. With end-to-end encryption, your personal messages and calls are secure, ensuring only you and the recipient can access them. Enjoy voice and video calls to stay connected with loved ones or colleagues. Express yourself using stickers, GIFs, or by sharing moments on Status. WhatsApp Business enables global customer outreach, facilitating sales growth and relationship building through showcasing products and services. Stay connected effortlessly with group chats for planning outings with friends or staying updated on family conversations.
What is Augmented Reality Image Trackingpavan998932
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can easily be extended to your needs. This session will showcase various tooling extensions that can significantly boost your development experience: work fully offline, transpile the code in your project to use even newer versions of ECMAScript (beyond 2022, which the UI5 tooling supports today), consume any npm package of your choice in your project, use different kinds of proxies, and even stitch UI5 projects together during development to mimic your target environment.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
2. @Bria_Grangard
Who Am I?
• SmartBear Software
  • Automated UI functional testing tools & test management solutions
• Stay in Touch!
  • @Bria_Grangard
• Education
  • Went to Dartmouth: AB in Engineering, BE in Biomedical Engineering, MEM with a healthcare focus
• What do I love to do?
  • Run, dance, play board games (Settlers of Catan, anyone?)
4. We provide tools for development, testing, and operations teams to create great software, faster than ever.
Accelerate SDLC Workflows | Improve Quality at Every Stage | Realize Rapid Time-to-Value
• HQ in Boston, MA, USA, with 7 offices globally
• Founded in 2009
• Open Source Innovator (Swagger & SoapUI)
6.5M+ Users | 194 Countries | 22K+ Companies
Products: TestComplete, SoapUI Pro, SwaggerHub, CrossBrowserTesting, QAComplete, AlertSite
5. Create Great Software, Without Tradeoffs
Tools across the UI and API layers, spanning Dev, Test, and Ops:
• Perform Code & Doc Review (Collaborator)
• Design, Develop, & Document APIs (SwaggerHub)
• Create Automated UI Functional Tests for Web, Desktop, and Mobile (TestComplete)
• Run Tests on Real Devices in the Cloud (CrossBrowserTesting)
• Create Web Load Tests (LoadComplete)
• Create Automated API Functional Tests for REST, SOAP, and more (SoapUI Pro)
• Virtualize API & Database Services (ServiceV Pro)
• Create API Load Tests (LoadUI Pro)
• Monitor Web & API Performance, Availability, & Functional Correctness (AlertSite)
• Manage Manual & Automated Tests (QAComplete)
100+ Integrations
6. What’s Going on in the Testing World?
@Bria_Grangard
BDD • AI • Machine Learning • DevOps • Shift Left • Agile • Automation
7. There are bottlenecks in today’s development processes.
• Iron triangle trade-off: teams today are constantly feeling pressure to deliver software faster without compromising quality
• Automation might ramp up, but there is a limit to how far automation alone can scale
• Test environments are often the root cause of the bottlenecks: they are very time consuming and costly
@Bria_Grangard
8. The promise of the new software delivery cycle
@Bria_Grangard
Waterfall → Agile → DevOps
Design | Build | Test | Implement (Week 1 | Week 2 | Week 3 | Week 4)
9. Time Consuming Nature Of Web Testing
@Bria_Grangard
[Chart: number of tests, 0 to 1,200, rising with the age of the product from MVP through Feature Sets 1–3 to V2. More Features = More Testing.]
[Chart: number of browsers to support, 0 to 12, rising with the popularity of the product over the same releases.]
13. The Basics of a Test Framework
@Bria_Grangard
• Requirements: what do we make, and how should it behave?
• Tests: make sure it works as stated in the requirement (definitions, sets, environments)
• Defects: actual results do not equal expected results
14. What is a Test Framework?
@Bria_Grangard
A Test Framework:
• Links tests to other SDLC items
• Is NOT a Test Automation Framework, but often contains one
• Allows for rapid creation of tests from reusable components
• Separates data from logic (REUSABILITY)
• Provides a standardized test “language” and reporting structure for an application under test
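That data-from-logic separation is the key reusability win. Here is a minimal sketch of the idea using only Python's standard library; the `discount()` function and its rules are hypothetical stand-ins for whatever application code you are testing.

```python
# A minimal sketch of "separate data from logic": the test data lives in
# one table, and the checking logic is written once.

def discount(total):
    """Hypothetical app logic: 10% off orders of $100 or more."""
    return total * 9 / 10 if total >= 100 else total

# Test DATA: add a row here and you've added a test; no new code needed.
CASES = [
    (50,  50),     # below threshold: no discount
    (100, 90.0),   # at threshold: 10% off
    (200, 180.0),  # above threshold: 10% off
]

# Test LOGIC: one reusable check applied to every row.
for total, expected in CASES:
    result = discount(total)
    assert result == expected, f"discount({total}) = {result}, expected {expected}"
print("all cases passed")
```

In a real framework the data table typically lives in an external source (CSV, spreadsheet, database) so that coverage can grow without touching the test logic at all.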
15. Elements of a Test Framework
@Bria_Grangard
• Library: a repository of all your decomposed scripts, separated into their components
• Test Data Sources: a repository of all data sources
• Helper Functions: a repository of all decomposed test scripts, automated or manual, that serve as inputs or checks
• Test Environments: a list of all covered testing environments, broken out by type (OS, browsers…)
• Modules: the combination of library items with any helper functions and test data sources, plus environments
• Structure / Hierarchies: the “folder” structure of modules
16. Level 2: Figure Out What Tests Should Be Automated
@Bria_Grangard
17. There are many types of testing that need to be done…
@Bria_Grangard
End-to-End Testing: Browser (Chrome, HTML5, Angular JS) → Network → Service/API/Database
19. A Little Manual v Automated Math
@Bria_Grangard
Product v2                 Automated    Manual
# of Test Cases            1,000        1,000
# of Browsers Supported    10           10
Total Test Cases           10,000       10,000
Avg Test Run Time          0.5 min      4 min
Total Test Time            83 hrs       666 hrs
With 2 QA engineers, the automated suite is 1 week of testing; with 5 manual testers, the manual suite is 3.5 weeks of testing.
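The slide's arithmetic, as a quick back-of-the-envelope check (all figures are taken from the table above):

```python
# Total test time = test cases x browsers x average run time per test.
def total_hours(test_cases, browsers, avg_minutes_per_test):
    return test_cases * browsers * avg_minutes_per_test / 60

automated = total_hours(1_000, 10, 0.5)  # ~83 hours of machine time
manual    = total_hours(1_000, 10, 4)    # ~666 hours of tester time

print(f"automated: {automated:.0f} h, manual: {manual:.0f} h")
# With 2 QA engineers (~80 working hours/week), ~83 h of automated runs
# fit in about a week; ~666 h split across 5 manual testers (~40 h each
# per week) is roughly 3.5 weeks.
```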
20. Decide on What to Automate
• Environment setup/teardown
• Data entry
• Form filling
• Varying data inputs in a repetitive process
• Exposing backend data (APIs, DB tables, etc.)
• Repetitive/boring tasks that are prone to inattention errors
• Tasks with high reuse value across many workflows
• Tests with timing or screen responsiveness as a criterion for success
• Many non-functional test types, such as performance testing
• Capturing results
@Bria_Grangard
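Environment setup/teardown, the first bullet above, is often the easiest first win. A minimal sketch with standard-library unittest fixtures; the "environment" here is just a temp directory standing in for whatever your tests really need (a database, a container, a test server), and the test itself is a trivial placeholder.

```python
# Automating environment setup/teardown with unittest fixtures: every
# test gets a fresh, isolated workspace, and cleanup is never manual.
import pathlib
import shutil
import tempfile
import unittest

class CheckoutFlowTest(unittest.TestCase):
    def setUp(self):
        # Automated setup: a fresh workspace before each test runs.
        self.workspace = pathlib.Path(tempfile.mkdtemp())

    def tearDown(self):
        # Automated teardown: the workspace is removed after each test.
        shutil.rmtree(self.workspace)

    def test_workspace_starts_empty(self):
        self.assertEqual(list(self.workspace.iterdir()), [])
```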
21. Speeding Up Your Pipeline
@Bria_Grangard
Time to test, from longer (behind releases) to shorter:
Manual Testing → Record & Replay → Unit Testing → POM → Atomic Testing → Continuous Testing
22. But my Dev team says I have days to test, not weeks…
@Bria_Grangard
24. Let’s Go Faster!
We can, with parallel testing.
@Bria_Grangard
Running tests sequentially, we were able to run our tests in 1 week.
With 20 parallel executions, we can run our entire test suite in only 4 hours.
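The speedup claim is simple division. A quick check, using the 83-hour suite from the earlier math slide (the figures are the slides' own; the even-split assumption is the simplification):

```python
# Wall-clock time with N parallel executions is roughly total time / N,
# assuming the tests split evenly and the runners are independent.
SEQUENTIAL_HOURS = 83   # full suite run back to back (~1 work week)
PARALLEL_RUNNERS = 20

wall_clock = SEQUENTIAL_HOURS / PARALLEL_RUNNERS
print(f"{wall_clock:.1f} hours")  # ~4 hours, matching the slide
```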
25. Types of tests to run in parallel
Most effective tests to see ROI from parallel testing
@Bria_Grangard
• Cross Browser Testing: testing across different browsers and devices is one of the most time-consuming aspects of testing the front end of your website or web application. Run more tests, against more browser configurations, by running them in parallel.
• Unit Testing: according to the testing pyramid, unit tests should be the most abundant test type in your entire testing suite. Because of this, running 14,000 unit tests in under an hour is really only possible with an investment in massive parallel testing infrastructure.
• Smoke Testing: need to get your minimum testing done in the next 20 minutes while you push a hotfix? The only way to do that is to run the tests in parallel, allowing you to get the most testing done in the shortest amount of time.
• Regression Testing: because deployments are happening at such a rapid pace, regression testing is one of the best ways to have a kind of “testing version control,” making sure the functionality of the new build matches that of the last stable build. Running these tests in parallel allows more to be tested.
26. Benefits of Parallel Testing
1. Quick Deployments
2. Faster Feedback
3. Cross Browser Testing
4. Better Test Coverage
5. Saves Valuable Time
@Bria_Grangard
With the advent of the Agile Manifesto and the “maturation” of DevOps over the past few years, it seems that the promise of continuous deployments at faster-than-light speed is almost fulfilled. Everywhere we look, development and ops teams brag about how many times they ship a week, a day, or even an hour.
There isn’t less testing happening in Agile or DevOps; the testing phase has simply been broken down, sped up, and spread out.
To add to the innate problem of speed in the software delivery cycle, there are a few speed bumps unique to testing web applications.
As our product matures, we’ll have an increasing number of test cases to run in regression before each deployment. This can add exponential amounts of testing – and test execution time – in just a few months. Precious time most products do not have.
Also, as our product gains in popularity, we’ll have to support an increasing number of browsers and devices. Customers now want to use their personal iPads to access your web portal that just 5 years ago was only accessed by the latest version of IE on…yep, you guessed it: Windows XP Service Pack 3. This again adds massively to the time needed to sufficiently test across the required browsers.
There are only a few ways to radically change your testing time to keep up with modern development.
You can just test less. This results in more bugs, and we’d not recommend this strategy.
You can also increase the amount of testers on your team to keep up with demand.
OR
You can diversify your testing suite with a mix of unit, API, and UI tests.
And run tests in parallel, executing tests in continuous batches of 10, 20, 50 tests at one time.
Let’s start high level with the software development lifecycle. While you are all probably very familiar with this, let’s talk through some key points.
Requirements. Where it all begins. What do we want to make, what will we make, and how should whatever we make really behave.
You need a solid foundation for success: know what your requirements are and what you want them to accomplish. No confusion. It’s like taking a moment to outline your essay before you write it.
For tests, you can have multiple components:
The test definition
The test set
The environments you want to run the test on: operating systems, browsers, resolutions, etc.
Defects: they’re inevitable. Your tests are going to find bugs, and there will be problems. But when you have a defect, you want to be able to log it properly and tie it to the correct test and requirement.
Releases: all of these things are encapsulated in a release. What are you pushing out for the next major or minor release/build?
There is some feedback you get only from a UI-layer test. So while you should minimize the effort spent on UI testing, you still need these tests to complete end-to-end testing workflows. In this case, the test starts at the browser level: Chrome, Firefox, or whatever you’re using. It then touches HTML5 or Angular JS, depending on what you’re using, and then network-level testing and service/API/database tests.
Mike Cohn’s Test Automation Pyramid is usually the go-to guide for deciding on how many tests should be automated.
(After discussing this in much detail as well as the recent debate)
The best way to incorporate a myriad of testing practices into your development cycle is to find and implement the right tools. Utilizing source control management tools (SCMs) such as Git or Subversion, CI tools like Jenkins, or defect management tools such as JIRA or Bugzilla can speed up your processes, as they integrate with a wide variety of automated test tools and provide more precise feedback on which part of a test is failing.
In this image we are showing the evolution of the entire development pipeline, transforming from manual testing to continuous testing. Even with the benefits a diversified testing process can provide (test automation and easier-to-debug pieces), there is still a cap on time to test and time to release. To reach the final two phases on the right, atomic testing and continuous testing, teams will have to run their tests in parallel. In most cases, this will require moving testing to the cloud for cost and speed reasons.
Let’s take an example…
Let’s imagine a real world example: in release 3 of your product, you have 8 hours of sequential regression testing to perform before the team feels confident to deploy.
By release 5, this may be twice the number of hours you’ll need to run your tests, and as a bonus, your product is getting popular and is being used by more users on an increasing number of different devices. Before, you were testing only Chrome and Firefox, but now you see that you need Android and iOS devices, Safari, and multiple versions of Internet Explorer. So you have 16 hours of tests and 10 different devices or browsers to cover.
This would take us 160 hours for complete test coverage before our deployment. With parallel testing environments, we can run our 16 hours of tests on 10 different devices at the same time, saving us 144 hours of testing time.
Time is finite, so we need to maximize the time we have. Parallel testing allows you to test faster, with a quicker turnaround in deployments. No developer wants to spend more time testing their product than they did developing it.
How can parallel testing be done? By setting up multiple VMs and other infrastructure devices or by using a cloud test service.
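One way to sketch the "multiple VMs or cloud service" idea in code, using only Python's standard library. Here `run_suite()` is a hypothetical placeholder for whatever actually drives your tests against one configuration (a Selenium node, a cloud grid, a VM), and the browser list is illustrative.

```python
# Fan one test suite out across browser configurations concurrently,
# instead of running them back to back.
from concurrent.futures import ThreadPoolExecutor

BROWSERS = ["chrome", "firefox", "safari", "edge", "ios-safari", "android-chrome"]

def run_suite(browser):
    # Hypothetical placeholder: kick off the full suite against one
    # configuration and report whether it passed.
    return browser, True

# Each configuration runs in its own worker, so wall-clock time is
# roughly the slowest single run, not the sum of all runs.
with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
    results = dict(pool.map(run_suite, BROWSERS))

failed = [name for name, ok in results.items() if not ok]
print("all green" if not failed else f"failed on: {failed}")
```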
Cross Browser Tests
Regression tests--regression testing is essential between deployments. The process is vital to ensure that a piece of software is still functional after a new update has been made prior to the next shipment. A challenge with running regression tests is the limited amount of time often provided and the growing nature of your test suite as your application matures. Developers looking to push their product out faster may give very little time between when they’ve finished their build process and the production date to conduct any testing. Running through the regression suite quickly with concurrent tests will get build engineers any necessary feedback faster and will keep your DevOps teams happy.
Unit Testing: unit tests are a great choice to run in parallel because they are naturally small since they test very particular or niche functions of an application, but can number in the thousands to tens of thousands.
Smoke tests, or build verification tests, are the final testing practice teams should consider implementing in parallel. Since smoke testing focuses on ensuring that only the most important functions of an application work as expected, it is typically used to decide whether or not to proceed with further testing. A smoke test that passes is an indicator to go ahead with more testing. If it fails, teams will stop testing and ask for a new build with the required fixes.
Smoke testing is usually done to ensure that a minimal viable amount of testing happens before any quick fix goes into production. Smoke tests can be conducted manually or through automation, but parallel testing will enable the process to go faster. Verifying the few business-critical processes, such as logging in and checking out, in under five minutes gets the product out the door, brings in revenue, and keeps customers happy. It also leaves teams enough time to test as thoroughly as they would like between deployments; parallel testing accelerates this.
Quick deployments: run as many tests as you want concurrently, whether it’s 2, 10, 50, or 100.
Faster feedback: the faster you run your tests, the faster the feedback. In older development models and testing practices, it could take weeks to receive feedback once a developer built a feature and sent it across to QA. By that time, developers have moved on and may not remember what the failing function was; without a comment on every piece of code, that is a real challenge to work with. This is the motivation behind “shifting left” and “continuous testing.”
Cross Browser Testing: this is drastically more achievable with parallel testing. Expanding coverage to incorporate all necessary environments can become very challenging and time consuming; with parallel testing you can test against multiple browsers and browser types in a shorter timeframe than if you were running tests sequentially.
Better test coverage: enabling teams to test more in less time inherently means better test coverage across environments and platforms.
Saves valuable time