This document discusses implementing team-wide testing to improve quality and reduce bugs. It describes current problems, such as developers feeling pressure to deliver code quickly without proper testing, which leads to bugs being found later by testers and time wasted on rework. The team analyzed root causes such as a lack of test automation and testers, and decided to break down the silos between developers and testers. The new process involves test-driven development, continuous testing, and demos at quality gates. While not all user stories were completed, the delivered stories had no bugs found by clients, showing the new process improved quality.
This presentation shares our experience of forming an integrated Development/QA team on Perficient projects applying Scrum, along with some of our best practices for securing high quality.
Scrum teams use a burn down chart to represent and track iteration progress, and the most common burn down chart is the time-based one. But our team ran into problems with it: a time-based burn down does not accurately represent the true velocity and feature completion. We experienced a situation where the team velocity looked pretty good, meaning the team could "burn" enough hours, while we didn't DELIVER as many features as the burn down suggested. This topic is a case study based on what we did to resolve our problems.
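The mismatch can be made concrete with a small sketch (all numbers invented for illustration): a time-based burn down that looks healthy while a story-point burn down shows delivery lagging.

```python
# Invented numbers: a 10-day iteration, 400 planned hours, 20 committed
# story points. Each list holds the remaining amount at the end of each day.
hours_remaining = [400, 360, 320, 280, 240, 200, 160, 120, 80, 40]
points_remaining = [20, 20, 20, 18, 18, 15, 15, 12, 12, 10]

hours_burned = hours_remaining[0] - hours_remaining[-1]
points_delivered = points_remaining[0] - points_remaining[-1]

# The time-based chart says the team "burned" 90% of its hours on schedule,
# yet the story-based chart shows only half the committed features delivered.
print(f"Hours burned: {hours_burned} of {hours_remaining[0]}")
print(f"Story points delivered: {points_delivered} of {points_remaining[0]}")
```

Tracking remaining story points alongside remaining hours is what exposes the gap between effort spent and features delivered.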
TuleapCon 2019. Tuleap Trackers, when one size does not fit all (Tuleap)
Have you ever dreamed of customizing your trackers exactly as you (really) want? Say goodbye to waiting for administrator approval. With the Tuleap tracking system, you can configure your project trackers yourself, at project level. Fine-grained permissions, workflows and triggers, field dependencies, specific user groups: you get full control. Sounds too good to be true? Come to this talk, you’ll get your smile back.
Here we play out familiar scenarios, the unadapted, frustrating ones, and we’ll show you how they can be fixed with Tuleap tracker configuration settings.
QA Strategies for Testing Legacy Web Apps (Rainforest QA)
Paul Miles, Software Development Manager at NPR, discusses QA strategies and tools his team uses to address the challenge of maintaining legacy products at NPR.
In this presentation, he covers:
- How to effectively strategize what types of tests to add to legacy software
- What cost-effective tools and testing strategies you can adopt in your organization
- Approaches for incorporating testing into your organization’s build pipelines
- How to foster a testing-centric culture in your organization
TuleapCon 2019. Tuleap explained by the users (Tuleap)
What could be more tangible than explaining Tuleap through the users themselves? This track gives the floor to the ones who are working with Tuleap day after day. Whatever your profile, you will understand how much easier your job will become.
- Tuleap as a Developer
- Tuleap as an IT Ops
- Tuleap as a Service Manager
Using Crowdsourced Testing to Turbocharge your Development Team (Rainforest QA)
Developer-owned QA testing is becoming more common as many organizations shift to leaner development processes and eschew traditional QA strategies.
This presentation discusses how crowdsourced testing can help teams offload repetitive testing work and streamline Agile testing processes. It also demonstrates how Rainforest Developer Experience (DevX) allows developers to increase productivity and minimize testing time with workflow-native crowdsourced testing.
Interested in seeing how Rainforest has helped companies save dev time and QA spend? Check out these success stories!
Guru: http://hubs.ly/H06lwC60
America's Test Kitchen: http://hubs.ly/H06lCX50
Including automation testing in the definition of done is becoming critical for organisations. Implementing it with the proper approach, and utilising different resources with different tools, is the key area to focus on.
Implementing Automation in the Definition of Done is Team Work.
These slides were the background of my lightning talk about Definition of Done at Agilopolis Community Day #2. For more information about Agilopolis visit http://www.agilopolis.com
Moving QA from Reactive to Proactive with qTest (QASymphony)
An overview of QASymphony's qTest product suite and product roadmap, including how qTest continues to push forward in the areas of agile testing, exploratory testing, BDD, automation integration, quality metrics and applied AI for testing, and how QASymphony is working to help test teams transition from reactive to proactive QA.
There is an obsession with jumping straight to the implementation of CI/CD tools when we talk about DevOps. In this talk, I focus on the many aspects that one needs to consider when going on a DevOps journey.
If you are like most test driven developers, you write automated tests for your software to get fast feedback about potential problems. Most of the tests you write will verify the functional behaviour of the software: When we call this function or press this button, the expected result is that value or that message.
But what about the non-functional behaviour, such as performance: When we perform this query the expected speed of getting results should be no more than that many milliseconds. It is important to be able to write automated performance tests as well, because they can give us early feedback about potential performance problems. But expected performance is not as clear-cut as expected results. Expected results are either correct or wrong. Expected performance is more like a threshold: If the performance is worse than this, we want the test to fail.
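A threshold-style performance test can be sketched in a few lines of Python; `run_query` and the 0.5-second budget below are illustrative assumptions, not part of the talk:

```python
import time

def run_query(n):
    # Stand-in for the real query under test (illustrative only).
    return sum(i * i for i in range(n))

def assert_fast_enough(func, budget_seconds, *args):
    """Fail only when performance is WORSE than the threshold."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, (
        f"{func.__name__} took {elapsed:.3f}s, budget is {budget_seconds}s")
    return result

# Passes as long as the query stays within its (assumed) budget.
assert_fast_enough(run_query, 0.5, 100_000)
```

Unlike a functional assertion, the check is one-sided: any elapsed time at or under the budget passes, so only a regression past the threshold fails the test.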
Break Up the Monolith: Testing Microservices by Marcus Merrell (Sauce Labs)
Microservices is more than a buzzword: it’s an industry-wide tidal wave. Companies are spending millions to break up monoliths and spin up microservices, but they usually only involve QA at the very end. This talk by Marcus Merrell centers around real-world experiences and will pose questions that attendees can ask their developers/product people, and offer solutions for teams to help make your service more discoverable, more testable, and easier to release.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
QASymphony Atlanta Customer User Group Fall 2017 (QASymphony)
Thanks to all who came out and were part of our first customer user group! All our expectations for the day were exceeded and we hope you feel the same way.
If you weren't able to make it, here's what you missed:
Judy Chung, Product Manager, gave a summary of recent and upcoming features (site level fields, new UI of TestPad) as well as a sneak preview of our newest product (codename: Automation Hub).
Elise Carmichael, VP of Quality, demoed several best practice topics, ranging from organizing your qTest repository to reviewing the different automation integration options.
Erika Chestnut, Director of QA at Sterling Talent Solutions, shared her story as a QASymphony customer who recently replaced HP Quality Center with qTest and provided insight into leading change management across her organization.
Tilt does not currently employ any quality engineers. How can we deliver quality software? Over the last year the organization has gone from terrifying deploys (followed by
Shift left, shift right: the testing swing.
This deck shows the testing framework we use today in our Agile & DevOps team. We do Behavior Driven Development (shift left) and test in production as well (shift right).
This presentation is about unit tests, integration tests, REST tests, code coverage and analysis tools, code reviews and other tools that help achieve high-level results.
This presentation by Ilya Tsvetkov (Associate Manager, GlobalLogic) was delivered at GlobalLogic Java Conference in Krakow on December 12, 2015.
This presentation is a part of the COP2271C college level course taught at the Florida Polytechnic University located in Lakeland Florida. The purpose of this course is to introduce Freshmen students to both the process of software development and to the Python language.
The course is one semester in length and meets for 2 hours twice a week. The Instructor is Dr. Jim Anderson.
A video of Dr. Anderson using these slides is available on YouTube at: https://www.youtube.com/watch?v=c2CTDm19Lpg
This slide deck was used at the Global Scrum Gathering in Prague in 2015. The deck provides inspiration on:
* How to make the tester part of the Development Team
* How to eliminate the need for "Quality Control"
* Foster collaboration within the team.
Start with passing tests (TDD for bugs) v0.5 (22 Sep 2016) (Dinis Cruz)
"Turning TDD upside down - For bugs, always start with a passing test" - Common workflow on TDD is to write failed tests. The problem with this approach is that it only works for a very specific scenario (when fixing bugs). This presentation will present a different workflow which will make the coding and testing of those tests much easier, faster, simpler, secure and thorough'
Presented at LSCC (London Software Craftsmanship Community) http://www.meetup.com/london-software-craftsmanship on sep 2016.
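The inverted workflow can be illustrated with a hypothetical off-by-one bug (the function and bug ID below are invented): first write a test that PASSES by asserting the current, buggy behaviour; once the bug is fixed, this test fails, which is the signal to flip the assertion to the correct expectation.

```python
def count_items(items):
    # Hypothetical buggy function: the slice drops the last element.
    return len(items[:-1])

def test_bug_123_count_is_short_by_one():
    # Step 1: a PASSING test that pins down the bug as it exists today.
    # When the bug is fixed, this assertion starts failing, telling us
    # to flip the expected value from 2 to the correct 3.
    assert count_items(["a", "b", "c"]) == 2

test_bug_123_count_is_short_by_one()
```

The passing test gives immediate value: it proves the bug is reproducible and keeps its exact behaviour visible in the suite until someone fixes it.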
How Agile Coaches should work closely with the HR team to transform the culture, organization, and management practices, and drive changes within the organization.
Performance appraisal was designed as a motivation technique. However, in many organizations, employees consider the performance appraisal demotivating instead. To learn why the performance appraisal is not helping as expected, and how we could possibly do something different to motivate our Scrum teams, we revisit the history of when and why we started doing performance appraisals, and share some best practices on how NOT to do traditional performance appraisal.
As an Agile Coach, you need a toolbox with a variety of tools and technologies you could consider using when you work with people from different backgrounds facing different challenges.
20 ways to run retrospectives differently (Ethan Huang)
Facilitating retrospective meetings is one of the most important but most challenging tasks for ScrumMasters. In the past 14 years, we came up with 20 different methods to inspire Scrum teams to discover opportunities to develop and improve.
User Story Cycle Time - A Universal Agile Maturity Measurement (Ethan Huang)
Trying to define a comprehensive, CMMI-like Agile Maturity Model?
If you're running all the Scrum meetings but cannot deliver every sprint, you're not agile at all; if you don't follow any Scrum format but you're delivering small features every couple of weeks, you're still Agile - deliver the highest value in the shortest time.
User Story Cycle Time - one universal Agile maturity measurement you might be able to use across different teams in your organization.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
5. A real conversation
Tester A:
• Look at that bug; it’s pretty straightforward that the functionality doesn’t match our test case. Why can’t somebody do a quick smoke test before checking in the code?
Developer A:
• Well, yes, I agree that’s a bug. But we didn’t have enough time; you know, the schedule is tough. We did as much verification as we could before we checked the code in, but we didn’t have enough time to cover that functionality. It’s great that the testing team found that bug; we can fix it later.
6. A real conversation
Tester B (Test Lead):
• But that costs a lot. We spent a whole day manually executing all the functional test cases and found at least 5 obvious bugs. They could have been identified even without looking at the test cases. Now we need another day for regression testing after your team gets them fixed.
Developer B (Development Lead):
• But that’s the reality, isn’t it? It’s normal to have bugs. We cannot avoid delivering bugs together with the code. That’s why we have a testing team.
7. Find a Bug
[Diagram: the rework pipeline for a bug found late - Fix This Bug → Smoke Testing → Generate a Dev Build → Push to Test → Bug Verification → Regression Testing → Push to Staging → UAT → Push to Production - with step durations of 1 h, 2 h, 4 h, 2 h, 1 d, 1 h, 2 d, 1 d and 0.5 d; the total Rework/Cost is AT LEAST 1 week.]
16. Team did root cause analysis
• 1 Tester cannot complete all testing work
• We might have to shrink testing phase
• Big, complicated features - long Dev cycle needed to deliver one feature
• Huge Regression Testing effort needed to cover legacy features as well
• Has no Requirements details, only mockups
• Don’t know what details to implement/write test cases
• Lots of dependencies – hard to test
17. 80%
20%
• 1 Tester cannot complete all testing work
• We might have to shrink testing phase
• Big, complicated features - long Dev cycle
needed to deliver one feature
• Huge Regression Testing effort needed to
cover legacy features as well
• Has no Requirements details , only
mockups
• Don’t know what details to
implement/write test cases
• Lots of dependencies – hard to test
Team did root cause analysis - voted
18. Team decisions before kicking off
• Break the team silos – Team Wide Testing
• Do things right the first time – Create fewer bugs
19. Team
• Developers to be involved in all QA activities
• Let the only Tester organize the whole team
20. Process
• We don’t do waterfall
• We don’t do small waterfalls iteratively either
22. Activities
• Represent Requirement using UAT Cases
• Write Automation Tests before development
• Test Driven Development
• CCR + Local Verification
• Check-In, CI + Continuous Automated Testing
• Daily Verification/Daily Demo
• Do UAT every Iteration
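The "Write Automation Tests before development" activity above can be sketched as a red-green cycle; the discount rule below is invented purely to illustrate the ordering of test and implementation:

```python
import unittest

# Written FIRST: the test encodes the acceptance criteria from a UAT
# case before any production code exists (the rule itself is invented).
class DiscountUatCase(unittest.TestCase):
    def test_orders_over_100_get_a_flat_20_off(self):
        self.assertEqual(apply_discount(200.0), 180.0)

    def test_small_orders_pay_full_price(self):
        self.assertEqual(apply_discount(50.0), 50.0)

# Written SECOND: the minimal implementation that turns the tests green.
def apply_discount(total):
    return total - 20.0 if total > 100 else total

# Continuous automated testing then re-runs this suite on every check-in.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountUatCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the tests exist before the code, every check-in can be gated on a green run instead of waiting for a separate testing phase.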
26. Two Quality Gates
• Represent Requirement using UAT Cases
• Write Automation Tests before development
• Test Driven Development
• CCR + Local Verification - Quality Gate 1
• Check-In, CI + Continuous Automation Testing
• Daily Verification/Daily Demo - Quality Gate 2
• Do UAT every Iteration
29. But for those stories we delivered, the client couldn’t find even ONE BUG
30. Takeaways
A new Team Model integrates Developers and Testers
A new Lifecycle Model integrates Development and Testing
New Development activities driven by Tests
View my posts on the official Perficient blog:
http://blogs.perficient.com/multi-shoring/blog/author/ehuang/