EuroSTAR Software Testing Conference 2008 presentation on The Truth About Model-Based Quality Improvements by Bart Knaack. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Christian Bk Hansen - Agile on Huge Banking Mainframe Legacy Systems - EuroST...TEST Huddle
EuroSTAR Software Testing Conference 2011 presentation on Agile on Huge Banking Mainframe Legacy Systems by Christian Bk Hansen. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Elise Greveraars - Tester Needed? No Thanks, We Use MBT!TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Tester Needed? No Thanks, We Use MBT! by Elise Greveraars. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Lauri Pietarinen - What's Wrong With My Test DataTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on What's Wrong With My Test Data by Lauri Pietarinen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Doron Reuveni - The Mobile App Quality Challenge - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on The Mobile App Quality Challenge by Doron Reuveni. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Michiel Vroon - Test Environment, The Future Achilles’ HeelTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Test Environment, The Future Achilles’ Heel by Michiel Vroon. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Fredrik Rydberg - Can Exploratory Testing Save Lives - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Can Exploratory Testing Save Lives by Fredrik Rydberg. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
EuroSTAR Software Testing Conference 2009 presentation on Incremental Scenario Testing by Mattias Ratert. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Vipul Kocher - Software Testing, A Framework Based ApproachTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Software Testing, A Framework Based Approach by Vipul Kocher. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Using Functional Test Automation to Prevent Defects from Escaping the Develo...TEST Huddle
This document discusses using functional test automation to prevent defects from escaping the development phase. It recommends automating acceptance tests during development to catch bugs early from the user perspective. The process involves preparing for automation by exploring and selecting test candidates, automating the tests as close to development as possible, and repeating the automation across areas, platforms and versions to prevent regression bugs. Continuous integration and handling test errors are also suggested to provide feedback and react to issues identified through automation. The overall goal is to shift testing left in the development cycle through early and frequent automation from a user perspective.
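As a minimal sketch of the idea of automating acceptance tests from the user's perspective during development, the snippet below tests a hypothetical discount function; the module and test names are invented for illustration, not taken from the presentation.

```python
# Minimal sketch of a user-perspective acceptance test automated during development.
# The apply_discount function and the discount code are hypothetical placeholders.

def apply_discount(total: float, code: str) -> float:
    """Toy implementation standing in for real application code."""
    return round(total * 0.9, 2) if code == "WELCOME10" else total

def test_customer_gets_welcome_discount():
    # Expressed from the user's point of view: "a new customer entering
    # WELCOME10 pays 10% less", rather than testing internal details.
    assert apply_discount(100.00, "WELCOME10") == 90.00

def test_unknown_code_charges_full_price():
    assert apply_discount(100.00, "BOGUS") == 100.00

if __name__ == "__main__":
    # In a CI pipeline these checks would run on every commit to give early feedback.
    test_customer_gets_welcome_discount()
    test_unknown_code_charges_full_price()
    print("acceptance checks passed")
```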
'Customer Testing & Quality In Outsourced Development - A Story From An Insur...TEST Huddle
RSA Scandinavia implemented a new test model to standardize testing across outsourced development projects. The model uses a risk-based approach and V-model framework. It defines requirements for test planning, design, execution, reporting, and responsibilities between RSA and suppliers. The implementation involved communicating the new model, providing training, and integrating it into project and contracting processes. Today, the model is used for all projects and is helping to streamline quality monitoring, reporting, and knowledge sharing across the organization and its suppliers.
'Continuous Quality Improvements – A Journey Through The Largest Scrum Projec...TEST Huddle
In this presentation you will learn how the testing process and continuous quality improvements are aligned with the Scrum process in a large software project. We hope that our hands-on experience will give you inspiration on how to tailor the test process in an agile environment. The project has been running for more than two years, with six successful releases to end users. We would like to share our experiences with managing test processes in a large Scrum project – our do’s and don’ts, our success stories and also our lessons learned. The project is the largest Scrum project in Norway to date.
The project scope is to implement system support for managing a new pension reform for all inhabitants of Norway who are members of the pension fund, replacing the existing system due to outdated technology. Approximately 750,000 project hours will be spent and between 100 and 180 people are involved in the project: thirteen Scrum teams, plus two project management and acceptance testing teams, and one business expert team. Each Scrum team contains all the knowledge and expertise needed for developing high-quality software: Scrum master, business expert, technical architect, UX designer, developers, build/deploy responsible, and of course, dedicated test resources.
Each software delivery in this project contains five sprints. Each sprint is three weeks, followed by acceptance testing before the delivery is shipped. Test-driven development is used at all levels of development, from unit tests all the way up to functional system testing. All test levels up to system integration testing are performed during the development sprint by the Scrum teams. We tried to automate UI tests, but this was not successful. However, tests at all other levels are successfully automated, and after each delivery, a fully automated regression test suite is shipped with the code.
Rob Baarda - Are Real Test Metrics Predictive for the Future?TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Are Real Test Metrics Predictive for the Future? by Rob Baarda. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
'Growing to a Next Level Test Organisation' by Tim KoomenTEST Huddle
Many organisations start improving their testing by implementing some kind of line organisation for testing (test expertise center, test service center), hereafter called TEC. Although a good starting point for improvements, in practice the TEC is often not much more than a resource pool of testers, possibly supplying certain templates or giving advice to projects.
A next maturity level for a TEC is to grow to a test factory, responsible for delivering pre-agreed test results. From the experiences gathered mostly from a large railroad infrastructure organisation, this presentation shows the path to this next level of test maturity and responsibility. However, this is not a straight path, but a path with ups and downs and many curves, and getting there isn’t easy. It requires change, in organisational processes but, more difficult, also in the way people work, their behavior and their attitude.
In my practice, I follow the principles of the Basic Change Method (from Dutch management guru Ben Tiggelaar). BCM is a combination of the most effective insights from cognitive and behavioral science and focuses on making people change their common behavior by management of both behavior intentions and change situations. Usually change management is mainly focused on end results. But the underestimated factor between change plans and desired results is behavior.
Issues that will be discussed are:
• using the TEC as a lever for test improvement
• envisioning the roadmap
• formulating improvement actions
• (management) commitment
• organising the improvement (team)
• planning the change
• implementing the improvements
• changing behavior
• measuring results.
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Exploratory Testing Champions by Henrik Andersson. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Gitte Ottosen - Agility and Process Maturity, Of Course They Mix!TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Agility and Process Maturity, Of Course They Mix! by Gitte Ottosen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Otto Vinter - Analysing Your Defect Data for Improvement PotentialTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Analysing Your Defect Data for Improvement Potential by Otto Vinter. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Klaus Olsen - Agile Test Management Using ScrumTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Agile Test Management Using Scrum by Klaus Olsen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Mats Grindal - Risk-Based Testing - Details of Our Success TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Risk-Based Testing - Details of Our Success by Mats Grindal. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Peter Zimmerer - Establishing Testing Knowledge and Experience Sharing at Sie...TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Establishing Testing Knowledge and Experience Sharing at Siemens by Peter Zimmerer. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Thomas Axen - Lean Kaizen Applied To Software Testing - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Lean Kaizen Applied To Software Testing by Thomas Axen. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Derk jan de Grood - ET, Best of Both WorldsTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on ET, Best of Both Worlds by Derk jan de Grood. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Geoff Thompson - Why Do We Bother With Test StrategiesTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Why Do We Bother With Test Strategies by Geoff Thompson. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Darius Silingas - From Model Driven Testing to Test Driven ModellingTEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on From Model Driven Testing to Test Driven Modelling by Darius Silingas. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Jelle Calsbeek - Stay Agile with Model Based Testing revisedTEST Huddle
EuroSTAR Software Testing Conference presentation on Stay Agile with Model Based Testing by Jelle Calsbeek. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
'Houston We Have A Problem' by Rien van Vugt & Maurice SiteurTEST Huddle
Prevent the surprise: become a pro-active test manager. Too often projects suddenly seem to spin out of control. Challenges and risks keep stacking up and the defect count grows exponentially. At the same time, management can put pressure on you, asking when testing will be completed.
A surprise? Not really; defects only paint half the picture. The test effort, after all, is primarily determined by the number of tests that need to be completed. For an on-the-spot status of testing and an accurate view of the quality and risks of the entire project, we need to organize the test process to provide flexible, up-to-date metrics and trends on a daily basis. For example, we need a view of baseline versus actuals and estimates-to-complete (ETCs) for test cases. Advanced metrics will provide answers on what needs to be done tomorrow to stay on track, the location and root cause of issues, and who is required to take action. The test effort remaining for an acceptable product (or a specific risk level) can also be estimated fairly accurately.
In addition, early involvement and preparation in the development life cycle, performing test intakes rather than reviews, will help you bridge the gap between different development teams and allow you to verify consistency between business requirements, the integration model, functional specifications and technical specifications. It facilitates knowledge transfer and provides you with the “story” behind the specifications. This will help prevent structural issues at an early stage and avoid blocking issues during test execution.
This presentation combines daily test metrics and trends with test process dynamics and shows you how to become a “pro-active” test manager. Even better, you can apply it tomorrow and take your test process to a distinctly higher maturity level.
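To make the baseline-versus-actuals and estimate-to-complete reasoning concrete, here is a simplified, hypothetical calculation; the figures and the retest factor are invented and are not from the presentation.

```python
# A simplified, hypothetical illustration of daily test-progress metrics:
# baseline vs. actuals and an estimate-to-complete (ETC) for test execution.

planned_tests = 400          # baseline: test cases planned for this phase
executed_tests = 250         # actuals: executed so far
failed_tests = 40            # of the executed tests
hours_spent = 300.0          # execution effort so far
retest_factor = 1.5          # assumed average number of re-runs per failed test

hours_per_test = hours_spent / executed_tests
remaining_first_runs = planned_tests - executed_tests
expected_retests = failed_tests * retest_factor

etc_hours = (remaining_first_runs + expected_retests) * hours_per_test
progress = executed_tests / planned_tests

print(f"progress: {progress:.0%} of baseline executed")
print(f"estimate to complete: {etc_hours:.0f} hours")
```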
'Architecture Testing: Wrongly Ignored!' by Peter ZimmererTEST Huddle
State-of-the-art testing approaches typically include different testing levels like reviews, unit testing, component testing, integration testing, system testing, and acceptance testing. It is also commonly accepted that unit testing is typically done by developers (who are responsible, at least to some extent, for checking the quality of their units) and system testing is done by professional independent testers. But who is responsible for adequately testing the architecture, which is one of the key artifacts in developing and maintaining flexible, powerful, and sustainable products and systems? History has shown that too many project failures and troubles are caused by deficiencies in the architecture. Furthermore, what does the term architecture testing mean, and why is this term seldom used?
To answer these questions, Peter describes what architecture testing is all about and explains a list of pragmatic practices and experiences to implement it successfully. He offers practical advice on the required tasks and activities as well as the needed involvement, contributions, and responsibilities of software architects in the area of testing – because a close cooperation between testers and architects is the key to drive and sustain a culture of prevention rather than detection across the lifecycle.
Finally, if we claim to be in pursuit of quality then adequate architecture testing is not only a lever for success but a necessity. And this results not only in better quality but also speeds up development by facilitating change and decreasing maintenance efforts.
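One way architecture testing can be automated is by expressing an architectural rule as an executable check. The sketch below, with invented layer names and paths that are not from the talk, verifies that a domain layer never imports from a UI layer.

```python
# Minimal sketch of automating one architectural rule as a test: the domain
# layer must not import from the UI layer. Package names and paths are
# illustrative assumptions.
import ast
from pathlib import Path

FORBIDDEN_PREFIX = "myapp.ui"          # hypothetical UI package
DOMAIN_DIR = Path("myapp/domain")      # hypothetical domain-layer source dir

def imported_modules(source: str):
    """Yield every module name imported by the given Python source."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module

def test_domain_does_not_depend_on_ui():
    violations = []
    for path in DOMAIN_DIR.rglob("*.py"):
        for module in imported_modules(path.read_text()):
            if module.startswith(FORBIDDEN_PREFIX):
                violations.append(f"{path}: imports {module}")
    assert not violations, "architecture rule broken:\n" + "\n".join(violations)
```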
Testing is necessary for software systems to ensure reliability, manage costs, and reduce risks. It is impossible to exhaustively test a system, so testing aims to detect defects and measure quality. Testing alone cannot improve quality but can identify issues to address. Different testing types exist for various stages, including unit, integration, system, and acceptance testing, and both black-box and white-box techniques are used. Rigorous planning, design, execution and tracking of test cases and results are needed. While testing shows defects, debugging is then needed to identify and address the root causes.
Michael Bolton - Two Futures of Software TestingTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Two Futures of Software Testing by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
John Brennen - Red Hot Testing in a Green WorldTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Red Hot Testing in a Green World by John Brennen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Rikard Edgren - Testing is an Island - A Software Testing DystopiaTEST Huddle
This document summarizes trends in software testing that could diminish its effectiveness and enjoyment. It notes an increasing focus on verification over validation, precise measurement over subjective judgement, and short-term metrics over long-term quality. This narrowing scope risks making testers isolated and limiting their creativity, motivation and ability to consider the full context of a project. The document advocates a holistic and subjective approach that considers people and intangible factors, not just short-term quantifiable results. Subjectivity and considering the whole system, not just parts, are presented as useful for testing.
Ian Smith - Mobile Software Testing - Facing Future ChallengesTEST Huddle
This document discusses challenges in testing mobile software systems. It notes the increasing capabilities of mobile devices and complexity of mobile applications. Key challenges include high variability in cellular networks and devices, changing platform landscapes, and ensuring security of sensitive data on devices. The document recommends approaches like managing complexity through architectural partitioning, maximizing code reuse across platforms, and combining emulation with automated GUI testing. It provides an example case study of developing an automated mobile call generation system and discusses lessons learned.
Michael Bolton - Heuristics: Solving Problems RapidlyTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Heuristics: Solving Problems Rapidly by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Isabel Evans - Route Cards to the FutureTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Route Cards to the Future by Isabel Evans. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Michael Roar Borlund & Christian Carlsen - Real Exploratory Testing, Now With...TEST Huddle
Exploratory testing approaches like "hotspot" and "coffee break" were presented as ways to optimize time spent testing and find more defects when performing exploratory testing in a service-oriented architecture (SOA). The "hotspot" approach resulted in finding more defects on average but took more time per defect. The "coffee break" approach found fewer defects but in less time. Both approaches provided broader test coverage and additional knowledge of the system compared to traditional testing. The presentation concluded that using a customized mix of both exploratory testing methods can minimize wasted time and add value to a project.
Rik Teuben - Many Can Quarrel, Fewer Can Argue TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Many Can Quarrel, Fewer Can Argue by Rik Teuben. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Darshan Desai - Virtual Test Labs, The Next Frontier - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Virtual Test Labs, The Next Frontier by Darshan Desai.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Martin Kooij - Testers in the Board of DirectorsTEST Huddle
Martin Kooij discusses the evolution and future of software testing. He traces his experience in testing telecom equipment from the 1960s to present day. Testing has matured from a technical focus on defects to a more strategic, risk-based approach. Metrics now consider risks rather than just defects. Kooij believes that by 2018, testers will report directly to boards of directors on product risks translated into business risks. Testers will broaden their skills and take responsibility for cost-effective testing to estimate and mitigate risks. For testing to continue evolving, testers must be brave, independent, and politically skilled while focusing on business risks over personal agendas.
Ruud Teunissen - Personal Test Improvement - Dealing with the FutureTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Personal Test Improvement - Dealing with the Future by Ruud Teunissen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Andrew Goslin - TMMi, What is Not in the Text Book - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on TMMi, What is Not in the Text Book by Andrew Goslin. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
'Model Based Test Design' by Mattias ArmholtTEST Huddle
MBT (Model-Based Testing) has been used within my department at Ericsson since 2007. As our MBT tool we have been using Conformiq Modeler, a commercially available tool. This has been a great success, and it is now our main way of working when verifying functional requirements.
Until now, MBT has only rarely been used, whether within Ericsson or outside it, for verification of non-functional requirements such as performance testing, load testing, stability and robustness tests, and characteristics measurements.
This presentation covers the work of two Master's students who, in 2010, performed a study of the possibilities of using MBT for verifying non-functional requirements. One of the results of this study was a new method, inspired by MDPE (Model-Driven Performance Engineering), where non-functional requirements can be covered by test models describing the functional behavior. Test cases can then be generated from these models with an MBT tool.
The proposed method provides different possibilities for handling the non-functional requirements. The requirements can, for example, be introduced with new dedicated states in the behavioral model, or be introduced by extending the existing state model. Another possibility is to implement the non-functional requirements in the test harness, thereby keeping the model simple. The most realistic scenario, however, is a combination of all of the above. The grouping and allocation of both functional and non-functional requirements should be considered as early as the test analysis phase.
The new method has been tried out and evaluated. It has proven useful and fully applicable, and there are clear indications that it is beneficial and that project lead time can be reduced by using it. We have therefore now started to apply this method in our new development projects.
The presentation includes examples of real cases where MBT has been used for verifying non-functional requirements.
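The core MBT idea of deriving test cases from a behavioural model can be illustrated with a toy example: a small state machine from which test sequences are generated until every transition is covered. This is a generic sketch with invented states and stimuli, not the Conformiq Modeler workflow used at Ericsson.

```python
# Toy model-based test generation: walk a behavioural model until every
# transition has been exercised at least once (all-transitions coverage).
# Assumes all transitions are reachable from the start state.

MODEL = {  # (state, stimulus) -> next state; states and stimuli are invented
    ("idle", "connect"): "connected",
    ("connected", "send"): "connected",
    ("connected", "overload"): "degraded",   # could carry a non-functional check
    ("degraded", "recover"): "connected",
    ("connected", "disconnect"): "idle",
}

def generate_tests(start="idle", max_depth=6):
    """Depth-first walks from the start state; each completed walk is a test case."""
    tests, covered = [], set()
    def walk(state, path):
        extended = False
        for (src, stimulus), dst in MODEL.items():
            if src == state and (src, stimulus) not in covered and len(path) < max_depth:
                covered.add((src, stimulus))
                walk(dst, path + [stimulus])
                extended = True
        if not extended and path:
            tests.append(path)
    while len(covered) < len(MODEL):
        walk(start, [])
    return tests

for i, steps in enumerate(generate_tests(), 1):
    print(f"test case {i}: {' -> '.join(steps)}")
```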
Derk-Jan de Grood - 9 Causes of losing valuable testing time - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on 9 Causes of losing valuable testing time by Derk-Jan de Grood. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Herman-Pieter Nijhof - Where Do Old Testers Go?TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Where Do Old Testers Go? by Herman-Pieter Nijhof. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
EuroSTAR Software Testing Conference 2009 presentation on The Power of Risk by Erik Boelen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Tim Koomen - Testing Package Solutions: Business as usual? - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Testing Package Solutions: Business as usual? by Tim Koomen. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Johan Jonasson - Introducing Exploratory Testing to Save the ProjectTEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Introducing Exploratory Testing to Save the Project by Johan Jonasson . See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Martin Gijsen - Effective Test Automation a la Carte TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Effective Test Automation a la Carte by Martin Gijsen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Making Model-Driven Verification Practical and Scalable: Experiences and Less...Lionel Briand
The document discusses experiences and lessons learned from making model-driven verification practical and scalable. It describes several projects collaborating with industry partners to develop model-based solutions for verification. Key challenges addressed include achieving applicability for engineers, scalability to large systems, and developing solutions informed by real-world problems. Lessons learned emphasize the importance of collaborative applied research, defining problems in context, and validating solutions realistically.
This document provides an overview of software defect prediction approaches from the 1970s to the present. It discusses early approaches using simple metrics like lines of code and complexity metrics. It then covers the development of prediction models using machine learning techniques like regression and classification. More recent topics discussed include just-in-time prediction models, practical applications in industry, using historical metrics from software repositories, addressing noise in data, and the feasibility of cross-project prediction. The document outlines challenges and opportunities for future work in the field of software defect prediction.
Survey on Software Defect Prediction (PhD Qualifying Examination Presentation)lifove
This document provides an outline and overview of approaches to software defect prediction. It discusses early approaches using lines of code and complexity metrics from the 1970s-1980s and the development of prediction models using regression and classification in the 1990s-2000s. More recent focus areas discussed include just-in-time prediction models, practical applications of prediction, using history metrics from software repositories, and assessing cross-project prediction feasibility. The document aims to survey the field of software defect prediction.
This document provides an outline and overview of approaches to software defect prediction. It discusses early approaches using simple metrics like lines of code in the 1970s and complexity metrics/fitting models in the 1980s. Prediction models using regression and classification emerged in the 1990s. Just-in-time prediction models and practical applications in industry are discussed for the 2000s. The use of history metrics from software repositories and challenges of cross-project prediction are also summarized.
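A minimal sketch of the classification-style defect prediction these surveys describe might look like the following; the per-module metrics and labels are invented, whereas real studies mine them from software repositories.

```python
# Hypothetical metric-based defect prediction: train a classifier on module
# metrics (lines of code, cyclomatic complexity, recent changes) labelled with
# whether a defect was later reported. Data is invented for illustration.
from sklearn.linear_model import LogisticRegression

# columns: [lines_of_code, cyclomatic_complexity, changes_last_release]
X_train = [
    [120,  4, 1], [950, 32, 9], [300, 10, 2], [780, 25, 7],
    [60,   2, 0], [410, 15, 5], [220,  8, 1], [640, 20, 6],
]
y_train = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = defect reported after release

model = LogisticRegression().fit(X_train, y_train)

new_modules = [[500, 18, 4], [90, 3, 0]]
for metrics, risk in zip(new_modules, model.predict_proba(new_modules)[:, 1]):
    print(f"module {metrics}: predicted defect probability {risk:.2f}")
```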
Software development life cycle (SDLC) ModelsAOmaAli
The document discusses various software development life cycle (SDLC) models. It describes the waterfall model process with distinct phases of requirements, design, implementation, testing and maintenance. It also covers the V-model which incorporates testing at each phase. Other models discussed include prototyping, iterative/incremental and when each may be used based on project characteristics and requirements stability.
The document discusses various software project life cycle models and cost estimation techniques. It begins by describing agile methods like Scrum and Extreme Programming that emphasize iterative development, communication, and customer involvement. It then covers traditional models like waterfall and incremental development. Key estimation techniques discussed include function points, COCOMO, and analogy-based estimation. The document provides details on calculating sizes and estimating effort for different models.
The document discusses several prescriptive software development models:
1. The waterfall model is a linear sequential model and was one of the earliest prescriptive models proposed.
2. Variations of the waterfall model include the V-model and incremental model, which allow for some iteration and incremental delivery of features.
3. Evolutionary models like prototyping and the spiral model combine iterative development with controlled aspects of waterfall, producing prototypes and incremental releases to manage risk.
This document discusses search-based testing and its applications in software testing. It outlines some key strengths of search-based software testing (SBST) such as being scalable, parallelizable, versatile, and flexible. It also discusses some limitations of search-based approaches for problems that require formal verification to establish properties for all possible usages. The document compares classical optimization approaches, which build solutions incrementally, to stochastic optimization approaches used in SBST, which sample solutions in a randomized way. It notes that while testing can find bugs, it cannot prove their absence. Finally, it discusses how SBST can be combined with other techniques like constraint solving and machine learning.
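The stochastic-optimization idea behind SBST can be shown with a toy hill-climbing search that mutates a candidate input to minimise a "branch distance" fitness function; the function under test and the target branch below are invented for illustration and are not from the document.

```python
# Toy search-based test generation: random mutation accepted only when it
# reduces the distance to a hard-to-reach branch condition.
import random

def under_test(x: int, y: int) -> str:
    if x * 2 == y + 100:          # the branch we want a test input to reach
        return "rare path"
    return "common path"

def fitness(x: int, y: int) -> int:
    # branch distance: 0 means the rare branch is reached
    return abs(x * 2 - (y + 100))

def search(iterations=10_000):
    best = (random.randint(-1000, 1000), random.randint(-1000, 1000))
    for _ in range(iterations):
        x, y = best
        candidate = (x + random.randint(-10, 10), y + random.randint(-10, 10))
        if fitness(*candidate) < fitness(*best):
            best = candidate
        if fitness(*best) == 0:
            break
    return best

x, y = search()
print(f"found input x={x}, y={y} -> {under_test(x, y)}")
```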
This document provides an overview of software testing. It begins by introducing the topic and noting the large economic costs of software failures. The document then discusses what software is, why we test software, and defines what software testing is. It outlines the importance of software testing and some basic concepts like the differences between errors, faults and failures. The document also discusses test design methodologies, levels of testing from unit to system, and common challenges in software testing. The overall purpose is to provide an introduction to software testing concepts and practices.
SE_Unit 2.pdf it is a process model of it studentRAVALCHIRAG1
The document discusses various process models for software engineering including waterfall, incremental, RAD, evolutionary (prototyping and spiral), concurrent, component-based, aspect-oriented, and reuse-oriented models. It also covers project metrics, software measurement approaches including size-oriented metrics like lines of code and function-oriented metrics like function points. Key aspects of each model are defined along with their applicability and limitations.
The document compares and contrasts several software engineering process models:
The Waterfall model is a linear sequential model where each phase must be completed before the next begins. It is easy to manage but difficult to change requirements later. Evolutionary models like incremental and spiral models involve user feedback in iterative development cycles to refine requirements. Rapid prototyping creates samples to assess functionality and refine designs based on user feedback. The Fountain model is similar to Waterfall but allows revisiting previous phases. Formal transformation uses mathematics to reduce errors through iterative transformations. The Reuse-oriented approach develops software through existing code and processes to reduce costs and time.
The document discusses several software process models:
- The Linear Sequential (Waterfall) Model is a simple, systematic approach where each phase must be completed before moving to the next. It is best for small, well-defined projects.
- The Incremental Model applies the Linear Sequential Model iteratively to increments, delivering working software in stages. This allows for early delivery and flexibility.
- The Prototyping Model involves building prototypes to refine requirements through client feedback in iterations. This helps establish clear objectives.
- Rapid Application Development (RAD) is a fast version of the Linear Sequential Model using a component-based approach to accelerate delivery of fully functional projects.
The document discusses several software process models, including:
- The waterfall model, which progresses through requirements, design, implementation, testing, and maintenance in a linear fashion. It is easy to understand but inflexible.
- The prototyping model, which builds prototypes to help refine requirements rather than freezing them early. This gets feedback from customers but prototypes may be mistaken for finished products.
- The spiral model, which is iterative and incremental, with each pass through the loop addressing process risks and allowing revisions of previous decisions.
Software Lifecycle Models / Software Development Models
Types of Software development models
Waterfall Model
Features of Waterfall Model
Phase of Waterfall Model
Prototype Model
Advantages of Prototype Model
Disadvantages of Prototype model
V Model
Advantages of V-model
Disadvantages of V-model
When to use the V-model
Incremental Model
ITERATIVE AND INCREMENTAL DEVELOPMENT
INCREMENTAL MODEL LIFE CYCLE
When to use the Incremental model
Rapid Application Development RAD Model
phases in the rapid application development (RAD) model
Advantages of the RAD model
Disadvantages of RAD model
When to use RAD model
Agile Model
Advantages of Agile model
Disadvantages of Agile model
When to use Agile model
The document discusses model-based testing (MBT) that was implemented at SpareBank 1 (SB1) to test their Master Data Management (MDM) system. It holds information on 7 million customer records and receives 12,000 daily updates from public registers. MBT uses a model of rules and requirements to automatically generate test cases from different parameters and coverage criteria. This allows generating targeted test cases for particular changes to reduce maintenance costs compared to manually maintaining test suites. Lessons learned include the importance of a complete and correct model, integrating the MBT tool with test execution tools, and improving usability of MBT tools for testers. The presenter's company aims to advance from manual to automated to adaptive testing using
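A generic sketch of generating test cases from parameter domains and a coverage criterion, in the spirit of the approach described above, is shown below; the parameters and values are invented and are not SpareBank 1's rules.

```python
# Generate test cases from parameter domains under an "all combinations"
# coverage criterion. Parameters and values are illustrative assumptions.
from itertools import product

parameters = {                      # hypothetical customer-update attributes
    "source":        ["public_register", "branch", "web"],
    "customer_type": ["person", "organisation"],
    "change":        ["new", "update", "delete"],
}

test_cases = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]

print(f"{len(test_cases)} generated test cases, e.g.:")
print(test_cases[0])
# A weaker criterion (such as pairwise coverage) would select a subset of these;
# regenerating after a rule change replaces manual test-suite maintenance.
```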
This document provides an overview of different software process models including the waterfall model, V-model, evolutionary development, component-based development, and incremental delivery. It describes the key phases and activities in each model. The V-model is explained in detail with its distinct development and validation phases like requirements, design, coding, unit testing, integration testing, system testing, and acceptance testing. Pros and cons of each model are also highlighted along with guidance on when each is generally most applicable.
Software Engineering Research: Leading a Double-Agent Life.Lionel Briand
The document discusses testing of closed-loop controllers in automotive systems. It notes the increasing complexity of automotive software and challenges in model-in-the-loop (MIL), software-in-the-loop (SIL), and hardware-in-the-loop (HIL) testing. It presents an approach to generate test cases for continuous controllers at the MIL level by representing requirements as objective functions and using a search-based approach to find worst-case scenarios. Experimental results found significantly worse scenarios than their industry partner, allowing them to generate better stress tests for HIL. The approach addresses a problem largely ignored in MIL testing of continuous controllers.
This document discusses model-based testing (MBT), including what it is, the MBT process, tools that can be used, and benefits/limitations. MBT involves creating a model of the system under test and using that model to automatically generate test cases. The MBT process includes modelling the system, selecting test requirements, generating abstract test cases from the model, concretizing the tests into executable scripts, and executing the tests against the system. MBT can find faults, reduce costs and time, and improve test quality compared to manual testing. However, it requires skilled modelers and some testing experience to apply effectively.
Similar to Bart Knaack - The Truth About Model-Based Quality Improvements (20)
Why We Need Diversity in Testing- AccentureTEST Huddle
In this webinar Rasa (Testing Capability Lead for Denmark) and Matthias (EALA Testing Capability Lead) will share some of their own experiences of why diversity matters, give insights into how Accenture as a global firm is promoting diversity, and explain how we are in the process of changing our attitudes and processes to make all of this sustainable.
Keys to Continuous Testing for Faster Delivery - EuroSTAR Webinar TEST Huddle
Your business needs to deliver faster. To accommodate, Development needs to introduce fewer changes but in a much more frequent cadence. This creates a challenge for test teams to keep up with the rapid pace of change without compromising on quality. Automation is paramount to the success or failure of Continuous Delivery, and Continuous Testing enables early and frequent quality feedback throughout the CI/CD pipeline.
In this webinar, Eran & Ayal will explore how to implement Continuous Testing to ensure high quality releases in a Continuous Delivery environment; including what to test and when to automate new functionality in order to optimize your efforts.
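As a minimal, hypothetical illustration of a continuous-testing gate in a CI/CD pipeline, the script below runs fast test stages on every change and fails the pipeline immediately on a red result; the paths and pytest commands are illustrative assumptions, not content from the webinar.

```python
# Run test stages in order and stop the pipeline on the first failure, so
# quality feedback arrives as early as possible in the CI/CD flow.
import subprocess
import sys

def run_stage(name, command):
    print(f"--- {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{name} failed; stopping the pipeline")
        sys.exit(result.returncode)

if __name__ == "__main__":
    run_stage("unit tests", ["pytest", "tests/unit", "-q"])
    run_stage("acceptance tests", ["pytest", "tests/acceptance", "-q"])
    print("all continuous-testing stages passed")
```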
Why You Shouldn't Automate But You Will Anyway TEST Huddle
The document discusses automation in software testing. It begins by outlining common claims made about the benefits of automation, such as saving time and improving quality, but argues that these claims often don't hold true. Automation does not inherently save time, guarantee quality, or reduce resources needed. It also does not always save money when development, maintenance, and infrastructure costs are considered. The document provides a formula for determining when automation is worthwhile based on how many times a test case would need to be rerun manually. It concludes by acknowledging that, despite these drawbacks, organizations will still automate testing because it is exciting, managers demand it, and it benefits careers.
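A simplified worked version of that break-even reasoning follows; the hours used are invented for illustration and are not the figures from the talk.

```python
# Automation pays off roughly when building and maintaining the automated test
# costs less than the manual executions it replaces.
build_cost_hours = 8.0          # effort to automate one test case
maintenance_per_run = 0.1       # upkeep attributed to each automated run
manual_run_hours = 0.5          # effort to execute the same test manually once

# break-even number of executions n:  build + n * maintenance <= n * manual
break_even = build_cost_hours / (manual_run_hours - maintenance_per_run)
print(f"automation pays off after about {break_even:.0f} executions")  # ~20
```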
In this webinar Carsten will explore the role of the tester in a Scrum team. He will examine where the tester play an important role in Scrum and how you can contribute to a teams performance.
Leveraging Visual Testing with Your Functional TestsTEST Huddle
Designing and implementing (or selecting) the right automation strategy for functional testing, combined with visual testing, can help your project achieve greater test coverage while improving test scalability.
Big Data: The Magic to Attain New HeightsTEST Huddle
This document discusses how big data and data science can be used to attain new heights, likening it to magic. It provides an overview of Ken Johnston's background and experiences in data science. It then discusses six keys to a "big" magic show with big data: trying multiple times, addressing issues with over-counting, experimentation techniques like A/B testing, infrastructure for big data, tools and skills, and security, privacy and fraud protection. The document emphasizes the importance of an assistant to help the data scientist or data engineer with various tasks.
This talk suggests how we might make sense of the tools landscape of the near future, where the pressure to modernise processes and automate is greatest, and what a new test process supported by tools might look like.
Takeaways:
- We need to take machine learning in testing seriously, but it won’t be taking our jobs just yet
- We don’t need more test automation tools; today we need tools that capture tester knowledge
- Tools that learn and think can’t work for testers until we solve the knowledge capture challenge.
View On-Demand Webinar: https://youtu.be/EzyUdJFuzlE
The document discusses Test Driven Development (TDD) and Test Driven Design. It uses the analogy of building a lightsaber and later a Death Star to illustrate the TDD process and benefits. Some benefits mentioned are better test coverage, less debugging, and better design. The document provides tips for practicing TDD including planning ahead, defining boundaries, taking small steps to pass each test, and maintaining discipline. It emphasizes trying TDD in a team and considering Behavior Driven Development (BDD) as well.
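To make the red-green cycle concrete, here is a toy example in which the tests are written first and the simplest passing implementation follows; the example function is invented and is not from the document.

```python
# Toy TDD illustration: step 1, write failing tests; step 2, add just enough
# code to make them pass; step 3 (not shown) would be refactoring.

def test_power_level_caps_at_100():
    assert power_level(charge=250) == 100       # written first: initially fails

def test_power_level_matches_charge_below_cap():
    assert power_level(charge=42) == 42

# step 2: the simplest implementation that makes both tests pass
def power_level(charge: int) -> int:
    return min(charge, 100)

if __name__ == "__main__":
    test_power_level_caps_at_100()
    test_power_level_matches_charge_below_cap()
    print("green: both tests pass")
```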
Scaling Agile with LeSS (Large Scale Scrum)TEST Huddle
In this webinar, Elad will cover the principles that the #LeSS framework has to offer in order to enable big organisations to become agile.
View webinar recording - https://huddle.eurostarsoftwaretesting.com/resource/agile-testing/scaling-agile-less-large-scale-scrum/
Creating Agile Test Strategies for Larger EnterprisesTEST Huddle
Having difficulty creating an agile test strategy for your company? Let Testing Excellence Award winner, Derk-Jan de Grood, show you how it’s done
View webinar recording here - http://huddle.eurostarsoftwaretesting.com/resource/agile-testing/creating-agile-test-strategies-larger-enterprises/
3 key takeaways
- Do you know the meaning of your organisation, system, product?
- Can you deliver the important risks right away?
- How can you communicate about the (process and product) risks you're dealing with?
View Webinar recording: https://huddle.eurostarsoftwaretesting.com/resource/test-management/is-there-a-risk/
Are Your Tests Well-Travelled? Thoughts About Test CoverageTEST Huddle
This document summarizes a presentation on test coverage given by Dorothy Graham. It uses an analogy of travel to different locations to explain what test coverage means and some caveats. Coverage refers to the relationship between tests and the parts of a system being tested, but achieving 100% coverage does not mean everything is tested. There are four caveats discussed: coverage only measures one aspect of testing, a single test can achieve coverage, coverage does not indicate quality, and it only applies to the existing system not missing pieces. The key recommendation is to ask "coverage of what?" when the term is used rather than assuming more coverage is always better.
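The caveat that coverage does not indicate quality can be shown with a tiny invented example: a test that executes every line of a function, and therefore reports 100% line coverage, while still missing a boundary defect.

```python
# 100% line coverage, passing tests, and still a bug: coverage measures which
# code was exercised, not whether the right checks were made.

def is_adult(age: int) -> bool:
    return age > 18        # bug: should be >= 18

def test_is_adult():
    assert is_adult(30) is True     # executes every line of is_adult
    assert is_adult(10) is False

if __name__ == "__main__":
    test_is_adult()
    print("fully covered and passing, yet wrong at the boundary age == 18")
```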
Growing a Company Test Community: Roles and Paths for TestersTEST Huddle
Over the past three years, our company’s test team has grown from three lonesome testers to a community of nine – with more planned. Since we don’t see testers as “click monkeys”, but as valuable and integrated project members who bring a specific skill set to the table, it’s important for us to choose testers well and to train them in various areas so that they can contribute, grow and see their own career path within testing.
To structure to our internal tester training program, we have been developing role descriptions, education paths and career options for our testers, which I’d like to share with you in this webinar.
View webinar - https://huddle.eurostarsoftwaretesting.com/resource/webinar/growing-company-test-community-roles-paths-testers/
It’s the same argument again and again. One side says “team members should all be able to do everything, and the programmers should do their testing and all testers should be writing code”. The other side says “No, that can’t possibly work – programmers don’t know how to test, they don’t have the right mindset”. And on and on it goes.
http://huddle.eurostarsoftwaretesting.com/resource/webinar/need-testers-agile-teams/
In this webinar, Dave Haeffner (Elemental Selenium, USA) discusses how to:
- Build an integrated feedback loop to automate test runs and find issues fast
- Set up your own infrastructure or connect to a cloud provider
- Dramatically improve test times with parallelization (a minimal parallel-run sketch follows the link below)
https://huddle.eurostarsoftwaretesting.com/resource/webinar/use-selenium-successfully/
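A rough sketch of the kind of setup discussed (assumptions: the Python Selenium bindings, pytest with the pytest-xdist plugin, and a locally available Chrome driver; the URL and assertion are placeholders, not taken from the webinar):

```python
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    driver = webdriver.Chrome()      # or webdriver.Remote(...) against a cloud grid
    yield driver
    driver.quit()

def test_homepage_title(browser):
    browser.get("https://example.com")
    assert "Example" in browser.title

# Running "pytest -n auto" (pytest-xdist) spreads tests across CPU cores,
# one common way to cut total wall-clock time for a browser suite.
```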
Testers & Teams on the Agile Fluency™ Journey TEST Huddle
The document discusses the Agile Fluency model, which aims to help teams and testers improve their agile skills and practices over time. It describes a pathway with increasing levels of fluency that provide more benefits, including delivering value, optimizing value, and innovating. Reaching higher levels requires investments in training, coaching, and changing team structures and roles. The model can help organizations determine what level of fluency they need and what investments are required for testing teams to operate at that level.
Practical Test Strategy Using HeuristicsTEST Huddle
Key Takeaways
- See what makes a good test strategy
- Learn how to make a thorough test strategy
- Identify what the ‘Heuristic Test Strategy Model’ is
- Develop a solid test strategy that fits, fast
- Discover how diversification can help you to create a test strategy
Thinking Through Your RoleTEST Huddle
Key Takeaways:
- A diagramming method that helps discuss roles
- A one page analysis heuristic for roles
- Why roles matter on projects
https://huddle.eurostarsoftwaretesting.com/resource/people-skills/thinking-through-your-role/
Key Takeaways:
- What will this release contain?
- What impact will it have on your test runs?
- How can you preserve your existing investment in tests using the Selenium WebDriver APIs, and your even older RC tests?
- Looking forward, when will the W3C spec be complete?
- What can we expect from Selenium 4?
https://huddle.eurostarsoftwaretesting.com/
2. Agenda
• Introduction
• The future of testing
• What is a model?
• Different ways of Model Based Quality Improvement
• Conclusions
• What next
3. Bart Knaack
• Bart Knaack, Senior Test Advisor, Logica, The Netherlands
• 15 years experience in IT, of which 12 in testing
• Father of 2 kids (age 6 and 8)
Bart.knaack@logica.com
4. The future of testing
• Crystal ball called Academia
• The past as the mirror of the future
• Trends and threats
But:
• Testing is not quality improvement
• Testing is not LEAN
5. What is a model?
A model is an abstraction or conceptual object used in the creation of a predictive formula (Wikipedia).
A model in science is a physical, mathematical, or logical representation of a system of entities, phenomena, or processes.
An abstract representation of reality, focusing on a limited set of aspects.
Programs are models too!!
6. Model for this presentation
Behavioral models: models that somehow define the (intended) behavior of a system (a small FSM sketch follows this list):
•UML models
•Finite State Machines
•Algebraic models
•Graphical models
•Textual models
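A behavioural model can be as small as a table. The sketch below is a hypothetical finite state machine for a login dialog (illustrative only, not an example from the slides):

```python
# (state, stimulus) -> next state: the model predicts how the system should react.
fsm = {
    ("LoggedOut", "enter_valid_credentials"):   "LoggedIn",
    ("LoggedOut", "enter_invalid_credentials"): "LoggedOut",
    ("LoggedIn",  "log_out"):                   "LoggedOut",
}

def step(state, stimulus):
    """Return the state the model predicts after applying a stimulus."""
    return fsm[(state, stimulus)]

assert step("LoggedOut", "enter_valid_credentials") == "LoggedIn"
```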
7. Model Based Quality Improvement
[Slide image: "Testing a Virtual Juggling Program"]
8. Different ways of Model Based Quality Improvement?
• Model Based testing
–Online
–Offline
• Model Metrics and Code Conformance
• Model checking the models
• Applying Model checking techniques.
9. Model Based testing (Online)
• Definition: directly coupling the test generation model to the SUT and directly executing the tests.
• Project: IRIS   Domain: Insurance
SUT: actuarial calculation engine.
TorX modelling language used for modelling the formulae.
Result: 5 additional errors were found, of which 3 were significant. Development and test time for upcoming changes and fixes was reduced by 50%.
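For intuition, a minimal sketch of the online loop that a tool such as TorX automates (illustrative only, not the IRIS project's actual setup; model_fsm is a (state, stimulus) → next-state table like the login sketch above, and sut_adaptor is a hypothetical callable that fires the stimulus at the SUT and reports the observed state):

```python
import random

def online_mbt(model_fsm, initial_state, sut_adaptor, steps=50):
    state = initial_state
    for _ in range(steps):
        # Pick any stimulus the model allows in the current state.
        candidates = [stim for (s, stim) in model_fsm if s == state]
        if not candidates:
            break                                   # the model allows no further stimuli
        stimulus = random.choice(candidates)
        expected = model_fsm[(state, stimulus)]     # the model's prediction
        observed = sut_adaptor(stimulus)            # execute on the SUT, observe the new state
        assert observed == expected, f"SUT deviates from model on stimulus {stimulus!r}"
        state = expected
```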
10. Model Based testing (Online)
• Advantages:
–Quick (re)testing of systems
• Disadvantages:
–A model-SUT adaptor needs to be set up.
–Model errors need to be taken into account
–Not all systems and tests are suitable for this approach.
11. Model Based testing (Offline)
• Definition: tests are generated by a test generator and executed manually on the SUT.
• Project: CCBS   Domain: Telecom
SUT: 5ESS telephony switch, new feature.
Result: 10 features modelled. 1200 tests designed, of which 75% were specific to feature interactions. 20% of the tests failed due to modelling errors.
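A minimal sketch of the offline style (illustrative, not the CCBS test generator): every transition in the model becomes one test case, written out as steps a tester can execute manually against the SUT.

```python
def generate_transition_tests(model_fsm):
    """One test case per transition in a (state, stimulus) -> next-state model."""
    tests = []
    for (state, stimulus), next_state in model_fsm.items():
        tests.append(
            f"GIVEN the system is in state '{state}' "
            f"WHEN stimulus '{stimulus}' is applied "
            f"THEN the system moves to state '{next_state}'"
        )
    return tests

# Example with the hypothetical login model from the earlier sketch:
model = {("LoggedOut", "enter_valid_credentials"): "LoggedIn",
         ("LoggedOut", "enter_invalid_credentials"): "LoggedOut",
         ("LoggedIn", "log_out"): "LoggedOut"}
for i, case in enumerate(generate_transition_tests(model), 1):
    print(f"Test {i}: {case}")
```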
12. Model Based testing (Offline)
•Advantages:
–Quick adaptation of test cases
–Models can be used for rapid prototyping.
–Quick response to changes
•Disadvantages:
–Most methods do not support the selection of test cases.
–Test execution is still manual.
13. Model Metrics and code conformance
• Definition: using metrics to determine when models are good enough.
• Project: Siu Wai Tang   Domain: Banking and Finance
SUT: electronic payments system.
Result: improved insight into elevated problem areas; redefinition of 3 models based on this insight.
14. Model Metrics
•Advantages:
–Early stage warnings on model quality.
–Multidimensional view on models.
•Disadvantages:
–No hard figures to adhere to.
–Statistical correlation rather than direct cause and effect.
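As a rough illustration (a minimal sketch of my own, not the method used in the project above), even two simple structural measures over an FSM model can flag areas worth a closer look, keeping in mind the caveat that there are no hard figures to adhere to:

```python
def model_metrics(model_fsm):
    """Count states and transitions and find the busiest state in a (state, stimulus) -> next-state model."""
    states = {s for (s, _) in model_fsm} | set(model_fsm.values())
    fan_out = {s: sum(1 for (src, _) in model_fsm if src == s) for s in states}
    return {
        "states": len(states),
        "transitions": len(model_fsm),
        "max_fan_out": max(fan_out.values(), default=0),  # very busy states may deserve review
    }
```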
15. Model checking the models
• Definition: requirements checking by model-checking the models that form the requirements.
• Project: BOS project   Domain: Industry
SUT: ‘waterkering’ (flood barrier) control protocols.
Result: error-free software (after 20 years of operation).
16. What is Model Checking?
•State-space search
•Validation of a negative scenario
•Structural traversal
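A minimal sketch of what "state-space search" means in practice (illustrative only, far simpler than real model-checking tooling): a breadth-first traversal of all reachable states, looking for one that matches the negative scenario and returning the trace to it as a counterexample.

```python
from collections import deque

def search_for_violation(initial_state, successors, is_bad):
    """Breadth-first state-space search; returns a counterexample trace or None."""
    seen = {initial_state}
    queue = deque([(initial_state, [initial_state])])
    while queue:
        state, trace = queue.popleft()
        if is_bad(state):
            return trace                      # the negative scenario is reachable
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None                               # the property holds in every reachable state

# Toy usage: two gates modelled as bits; the "bad" state is both gates open at once.
toggle = lambda s: [(1 - s[0], s[1]), (s[0], 1 - s[1])]
print(search_for_violation((0, 0), toggle, is_bad=lambda s: s == (1, 1)))
```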
17. Model checking the models
•Advantages:
–Finding CONCEPTUAL errors at an early stage.
–Finding DIFFERENT types of errors
•Disadvantages:
–The model needs to be expressed in a formal manner.
–A model checker needs to be available for the paradigm used.
18. Applying Model checking techniques.
• Definition: the CODE is used as the model; model-checking techniques are used for fault finding.
• Project: FeaVer   Domain: Telecom
SUT: transaction-based system (V5 interface),
5000 lines of code, 1000 installments over 10 years.
Result: 35 bugs found (25 race conditions).
19. Applying Model checking techniques.
•Advantages:
–Quick (re)testing of functionality
–Model errors are REAL errors
–Errors are found that are next to impossible to find using ‘traditional’ methods.
•Disadvantages:
–Not all languages are supported by model checkers.
–Stubs and drivers need to be implemented to FEED the system.
20. Conclusions
• Modelling by itself can be quality-improving.
• A lot can be gained from modelling and model-improvement initiatives.
• Model Based Testing can help in tackling the problems in complex systems.
BUT:
• New skills
• Academic image
• No standards
21. What next ?
• Integrated tool support
• Simplify
• Educate
• Not included: Model Based development.