In the last 20 years, software has grown tremendously in size and functionality. This growth has strongly influenced testing and quality assurance. While traditional testing techniques are useful for fault detection and prevention, they may not be sufficient to ensure software conformance. The systems to be tested keep changing: new applications for smart devices, configurable software systems, cloud services, internet-scale applications, IoT, systems of systems, and machine learning and data-driven systems. Testing tools and technologies are evolving rapidly in response, and new testing strategies are needed. This lecture surveys the research activities we have carried out in this area so far and how we see testing in the context of software engineering. It presents recent testing approaches, evolving technologies, and the trends that show where we are going in the near future.
EXTENT-2017: Gap Testing: Combining Diverse Testing Strategies for Fun and Profit (Iosif Itkin)
EXTENT-2017: Software Testing & Trading Technology Trends Conference
29 June, 2017, 10 Paternoster Square, London
Gap Testing: Combining Diverse Testing Strategies for Fun and Profit
Ben Livshits, Professor, Imperial College London
The landscape for software testing has never been broader. Applications today interact with other applications through APIs, leverage legacy systems, and grow in complexity from one day to the next in a nonlinear fashion. What does that mean for analysts, developers, and testers?
The 2016-17 World Quality Report suggests that AI will help. “We believe that the most important solution to overcome increasing QA and Testing Challenges will be the emerging introduction of machine-based intelligence,” the report states.
We have witnessed the mobile and computing revolution; now artificial intelligence (AI) is revealing its potential, not only in the way we live but also across the majority of industries. Software testing is no exception.
Facebook and Google aren’t the only companies applying AI techniques. In this session, we will explore how software testers can leverage AI and how tools may need to evolve. For instance, Helix ALM accelerates the development-to-release process, catches bugs earlier, and supports the transition to new development techniques.
In this webinar, we will also discuss three key elements that will significantly change software development with the evolution of “Artificial Intelligence”.
High Accuracy Model at What Cost? (Data Curry)
How much accuracy do you need in a predictive model? Interestingly, the answer is: not as much as you can get. Any model is, at some level, a curve fit, so if you are willing to add complexity, you can fit the data a bit better. But while a complex model may be more accurate than a simpler one, it is harder to understand.
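The accuracy-versus-complexity trade-off described above can be made concrete with a small sketch (illustrative Python, not from the talk): a two-parameter least-squares line versus a 1-nearest-neighbour model that memorises the training set. The memoriser fits the training data perfectly yet generalises worse.

```python
import random

random.seed(0)

def make_data(n):
    # y = 2x + Gaussian noise: a simple trend with measurement error
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 1.0) for x in xs]
    return xs, ys

train_x, train_y = make_data(30)
test_x, test_y = make_data(30)

# Simple model: a least-squares line (two parameters)
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx

def line(x):
    return slope * x + intercept

# Complex model: 1-nearest-neighbour, which memorises every training point
def memoriser(x):
    return min(zip(train_x, train_y), key=lambda p: abs(p[0] - x))[1]

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

On the training set the memoriser's error is exactly zero, but on fresh test data the plain line typically wins, which is the point the summary makes about complex models.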
Machine Learning Training Bootcamp (Tonex)
This is a course for Data Scientists learning about complex theory, algorithms and coding libraries in a practical way with custom examples.
Machine Learning training Bootcamp is a 3-day technical training course that covers the fundamentals of machine learning.
How does machine learning help?
Machine learning automates data analysis by enabling computers, machines, and IoT devices to learn and adapt through experience on specific tasks, without explicit programming.
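To illustrate "learning through experience without explicit programming", here is a deliberately tiny Python sketch (the data and thresholds are invented): the same decision is written once as a hand-coded rule and once as a parameter derived from labelled examples.

```python
# Explicit programming: a rule with a threshold chosen by a programmer
def rule_based_is_hot(temp_c):
    return temp_c > 30

# Machine learning (in miniature): derive the threshold from labelled data
examples = [(18, False), (22, False), (26, False),
            (31, True), (35, True), (40, True)]

def learn_threshold(data):
    # Midpoint between the warmest "not hot" and coolest "hot" example
    hot = [t for t, label in data if label]
    cold = [t for t, label in data if not label]
    return (max(cold) + min(hot)) / 2

threshold = learn_threshold(examples)   # 28.5 for the data above

def learned_is_hot(temp_c):
    return temp_c > threshold
```

The two classifiers disagree at 29 degrees: the learned threshold (28.5) calls it hot, the hand-coded rule does not. New examples move the learned model without anyone editing code.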
Machine learning has huge potential to help wrangle and draw insights from scientific research. But it has also been successfully deployed in everyday situations, including:
Predicting traffic
Gleaning information from personal assistants
Monitoring video surveillance
Filtering email spam
Online customer support via chat bots
Online fraud detection
Personal product recommendations based on your buying/browsing habits
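One of the everyday uses listed above, spam filtering, is compact enough to sketch end to end. The following is an illustrative naive Bayes classifier in plain Python with a toy corpus invented for the example, not production code:

```python
from collections import Counter
import math

spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon tomorrow"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace smoothing avoids zero probabilities for unseen words
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    return (log_likelihood(msg, spam_counts, spam_total)
            > log_likelihood(msg, ham_counts, ham_total))
```

A message is classified by whichever corpus makes its words more likely, which is the same mechanism, scaled up, behind real spam filters.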
Course Agenda and Topics
The Basics of Machine Learning
Machine Learning Techniques, Tools and Algorithms
Data and Data Science
Applied Artificial Intelligence (AI) and Machine Learning
Popular Machine Learning Methods
Large Scale Machine Learning
Overview of Algorithms
Hands-on Activities
Learn more: Machine Learning Training Bootcamp
https://www.tonex.com/training-courses/machine-learning-training-bootcamp/
APPLIED DATA SCIENCE: DEVELOPING SMART ICT PRODUCTS THAT LEARN FROM ... (webwinkelvakdag)
As a lecturer at Fontys ICT, I see more and more students who, for example as a graduation assignment, have to deliver a piece of software in which machine learning must be applied. We have also set up an Applied Data Science specialisation in which we teach students how to apply machine learning in ICT products. Through this we have gathered a great deal of knowledge about best practices for developing machine learning applications, as well as a number of interesting cases that show the added value machine learning can bring to applications. Since February 2019 I have continued these activities in a postdoc research project titled "Applied data science: developing ICT products that learn from data". In this research I collect best practices from education, the literature, and industry to arrive at a "toolbox" for software engineers who want to build machine learning applications. In this talk I will discuss why developing machine learning applications is different from developing traditional software. Which methods, techniques, and tools do you need? Which process should you follow? We will discuss a number of concrete projects to give a good picture of the pitfalls you may encounter. After this talk you will have a number of practical pointers to apply in your own software development practice.
Doug Carlson recommends Advitya Khanna for employment without reservation. Carlson worked with Khanna over two internships and was impressed with his technical skills, problem-solving abilities, and work ethic. Khanna quickly understood complex problems and produced elegant solutions. He also sought feedback to improve, growing his communication skills significantly. Most notably, Khanna's thoroughness improved not only his own work but also third-party code used by thousands of engineers through patches he submitted. Carlson believes any team would benefit from recruiting Khanna.
This document discusses feature engineering and machine learning approaches for predicting customer behavior. It begins with an overview of feature engineering, including how it is used for image recognition, text mining, and generating new variables from existing data. The document then discusses challenges with artificial intelligence and machine learning models, particularly around explainability. It concludes that for smaller datasets, feature engineering can improve predictive performance more than complex machine learning models, while large datasets are better suited to machine learning approaches. Testing on a small travel acquisition dataset confirmed that traditional models with feature engineering outperformed neural networks.
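As a concrete, hypothetical illustration of "generating new variables from existing data", the sketch below derives recency, tenure, and normalised-spend features from raw customer rows; the field names and values are invented:

```python
from datetime import date

# Raw customer rows: (signup_date, last_purchase_date, total_spend)
rows = [
    (date(2023, 1, 10), date(2024, 1, 5), 500.0),
    (date(2023, 6, 1), date(2023, 6, 20), 40.0),
]

def engineer_features(row, today=date(2024, 2, 1)):
    signup, last_purchase, spend = row
    tenure_days = (today - signup).days
    return {
        "recency_days": (today - last_purchase).days,  # days since purchase
        "tenure_days": tenure_days,
        "spend_per_day": spend / tenure_days,          # normalised spend
    }

features = [engineer_features(r) for r in rows]
```

Derived variables like these often carry more signal for a small-data model than the raw columns do, which is the document's conclusion for smaller datasets.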
- The document summarizes a presentation given by Andy Zaidman at the International Conference on Automation of Software Test (AST 2023) in Melbourne, Australia on May 16th, 2023.
- It discusses findings from studies on how developers engineer test cases and their testing behaviors in IDEs, including strategies like being guided by documentation or code.
- It also presents recommendations to improve developer testing through better tool support, clear adequacy criteria in education, and a focus on improving the user and developer experience of testing tools and processes.
This document provides an overview of software testing concepts. It discusses different types of testing like unit testing, functional testing, error path testing, boundary value testing, and equivalence partitioning. It also covers test strategies like golden path testing, black box testing, and white box testing. The purpose of a tester is explained as quantifying risk to make decisions that improve confidence and quality.
The document discusses various aspects of software testing such as test cases, test plans, test scenarios, testworthy criteria, testing types including functional, non-functional, manual and automated testing. It also covers topics like traceability matrix, test automation frameworks, fuzzing, mutation testing and references various standards and research papers related to software testing.
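Of the techniques listed, fuzzing is easy to show in miniature. The sketch below (illustrative Python, with a toy parser standing in for the system under test) throws random inputs at the code and checks an invariant rather than exact outputs:

```python
import random

def parse_csv_line(line):
    # Toy parser under test: split on commas and trim whitespace
    return [field.strip() for field in line.split(",")]

random.seed(1)
ALPHABET = 'ab," \n\t'

# Fuzzing loop: random inputs, checked against an invariant (the oracle)
for _ in range(1000):
    line = "".join(random.choice(ALPHABET)
                   for _ in range(random.randint(0, 20)))
    fields = parse_csv_line(line)
    # Invariant: splitting yields one more field than there are commas
    assert len(fields) == line.count(",") + 1
```

Because fuzzing cannot predict exact outputs for random inputs, the oracle is a property that must hold for every input; mutation testing, by contrast, would mutate the parser itself and check that the test suite notices.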
This is an introductory workshop on machine learning. It introduces machine learning tasks such as supervised learning, unsupervised learning, and reinforcement learning.
The document discusses various software testing techniques. It covers the objectives of testing as finding errors and having a high probability of discovering undiscovered errors. It describes different types of testing like white-box testing, which tests internal logic and paths, and black-box testing, which tests external functionality. Specific techniques covered include basis path testing, equivalence partitioning, boundary value analysis, and graph-based testing methods. The importance of testability, traceability, simplicity, and understandability are emphasized.
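Equivalence partitioning and boundary value analysis can be shown concretely. In the sketch below (an invented grading function, illustrative only), the input space splits into four partitions, invalid-low, fail, pass, and invalid-high, and the test values sit on and next to each boundary:

```python
# A function under test: classify an exam score (valid range 0-100)
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Boundary value analysis: values at and adjacent to each partition edge
boundary_cases = [
    (-1, ValueError), (0, "fail"), (1, "fail"),
    (49, "fail"), (50, "pass"), (51, "pass"),
    (99, "pass"), (100, "pass"), (101, ValueError),
]

def run_case(value, expected):
    try:
        return grade(value) == expected
    except ValueError:
        return expected is ValueError

results = [run_case(v, e) for v, e in boundary_cases]
```

Nine cases cover every boundary of every partition; testing one representative per partition alone would miss off-by-one errors such as writing `score > 50`.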
utPLSQL offers a unit testing API for PL/SQL that is modeled on the xUnit approach. This is an old slide deck on utPLSQL so my apologies for any inconsistencies with the current utility. Note: while I created the original utPLSQL code base, I am not actively working on utPLSQL at this time. Check out github.com/utplsql for the code and project details.
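For readers unfamiliar with the xUnit style the deck follows, here is the same pattern in Python's built-in unittest (a stand-in for utPLSQL's PL/SQL API, not that API itself): fixtures are classes and each test_* method is one case.

```python
import unittest

def to_upper(s):
    # Trivial function under test
    return s.upper()

# xUnit style: a TestCase class groups related tests; each test_* method
# is discovered and run independently
class ToUpperTest(unittest.TestCase):
    def test_lowercase(self):
        self.assertEqual(to_upper("abc"), "ABC")

    def test_already_upper(self):
        self.assertEqual(to_upper("ABC"), "ABC")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ToUpperTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The loader/runner/result split shown here is the xUnit architecture utPLSQL mirrors for stored procedures.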
This document discusses search-based testing and its applications in software testing. It outlines some key strengths of search-based software testing (SBST) such as being scalable, parallelizable, versatile, and flexible. It also discusses some limitations of search-based approaches for problems that require formal verification to establish properties for all possible usages. The document compares classical optimization approaches, which build solutions incrementally, to stochastic optimization approaches used in SBST, which sample solutions in a randomized way. It notes that while testing can find bugs, it cannot prove their absence. Finally, it discusses how SBST can be combined with other techniques like constraint solving and machine learning.
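A minimal sketch of the stochastic-search idea, assuming a classic branch-distance fitness function (the program and its constants are invented for the example): mutate a candidate input and keep mutations that move it closer to a hard-to-reach branch.

```python
import random

def program_under_test(x):
    # A hard-to-hit branch: only a narrow input range triggers it
    if 4321 <= x <= 4330:
        return "bug"
    return "ok"

def branch_distance(x):
    # Fitness: how far x is from the target branch; 0 means it is taken
    if x < 4321:
        return 4321 - x
    if x > 4330:
        return x - 4330
    return 0

random.seed(0)
x = random.randint(0, 1_000_000)

# Stochastic local search: random mutations, keeping any move that does
# not increase the branch distance
while branch_distance(x) > 0:
    step = random.choice([-1, 1]) * random.randint(1, branch_distance(x))
    if branch_distance(x + step) <= branch_distance(x):
        x += step

found = program_under_test(x)
```

Random testing alone would need about 100,000 tries on average to hit this branch; the fitness gradient finds it in a handful of accepted moves, which is the scalability argument the document makes for SBST.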
This document analyzes different model validation techniques (MVTs) used to estimate the performance of defect prediction models. It finds that out-of-sample bootstrap validation produces the least biased performance estimates while ordinary bootstrap validation produces the most stable estimates. Considering both bias and variance, techniques like ordinary bootstrap and out-of-sample bootstrap perform best by providing a balance of low bias and variance in their performance estimates.
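The out-of-sample bootstrap can be sketched in a few lines. In the toy version below (invented data; the "model" is a fixed rule, so the per-sample training step real studies perform is elided), performance is measured only on the out-of-bag rows, i.e. rows not drawn into the bootstrap sample:

```python
import random

random.seed(0)
# Toy dataset: (feature, label); the label noisily follows feature > 5
data = [(x, (x > 5) != (random.random() < 0.2))
        for x in [random.uniform(0, 10) for _ in range(100)]]

def simple_model(x):
    return x > 5

def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

# Out-of-sample bootstrap: resample with replacement, evaluate on the
# rows that were NOT drawn (the out-of-bag rows)
estimates = []
for _ in range(100):
    sample = [random.choice(data) for _ in data]
    out_of_bag = [row for row in data if row not in sample]
    if out_of_bag:
        estimates.append(accuracy(simple_model, out_of_bag))

estimate = sum(estimates) / len(estimates)
```

Evaluating on held-out rows rather than the full dataset is what gives this scheme the low bias the document reports.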
The document discusses software testing objectives, principles, techniques and processes. It covers black-box and white-box testing, unit and integration testing, and challenges of object-oriented testing. Testing aims to find bugs but can never prove their absence. Exhaustive testing is impossible so testing must be planned and systematic. Frameworks like xUnit can help automate unit testing.
The document discusses various software testing strategies and techniques:
1. Testing is the process of finding errors in a program before delivering it to end users. It shows errors, tests requirements conformance, and is an indication of quality.
2. Testing begins with unit testing individual components, then progresses to integration testing of components working together, validation testing against requirements, and system testing in the full system context.
3. White-box testing aims to ensure all statements and conditions are executed at least once, while black-box testing treats the software as a "black box" without viewing internal logic or code.
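The white-box goal of executing every statement can be checked mechanically. The sketch below (illustrative Python; the triangle classifier is a textbook example, not from the document) uses sys.settrace to record which lines of the function a set of test inputs actually runs:

```python
import sys

def triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def statements_executed(func, cases):
    # Record the line numbers executed inside func while cases run
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        for args in cases:
            func(*args)
    finally:
        sys.settrace(None)
    return lines

partial = statements_executed(triangle, [(1, 1, 1)])
full = statements_executed(triangle, [(1, 1, 1), (1, 1, 2), (1, 2, 3)])
```

One equilateral input executes only two of the five statements; the three-input suite reaches all five, which is exactly the "every statement at least once" criterion made measurable.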
It's Time to Automate Your Exploratory Testing (TechWell)
This document summarizes a presentation about automating exploratory testing. The presentation discusses what exploratory testing is, the drivers for conducting exploratory tests, and different types of exploratory tests that can be automated, including tests involving data injection, navigation, and timings. It also provides a case study example of how a company automated exploratory tests by defining workflows and possible paths through the system being tested and automating tours of the modeled test paths.
Hands-on Experience: Model-Based Testing with Spec Explorer (Rachid Kherrazi)
This document discusses model based testing using Spec Explorer. It begins with an introduction to model based testing and its benefits over traditional testing. It then demonstrates modeling a simple "Say Hello" application in Spec Explorer, including generating test cases from the model and executing them. Key benefits of model based testing include avoiding integration issues and post-release defects by more thoroughly testing interactions between units.
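Spec Explorer is a .NET tool; purely as a language-neutral illustration of the same idea, the Python sketch below (all names invented) derives test sequences from a transition model and replays them against an implementation, checking each step's state against the model's prediction:

```python
# The model: allowed transitions of a tiny login dialog
MODEL = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in", "logout"): "logged_out",
}

# The implementation under test
class Session:
    def __init__(self):
        self.state = "logged_out"
    def apply(self, action):
        if action == "login_ok" and self.state == "logged_out":
            self.state = "logged_in"
        elif action == "logout" and self.state == "logged_in":
            self.state = "logged_out"
        # login_bad leaves the state unchanged

def generate_tests(model, start, depth):
    # Enumerate every legal action sequence up to `depth` steps
    tests, frontier = [[]], [([], start)]
    for _ in range(depth):
        frontier = [(seq + [action], nxt)
                    for seq, state in frontier
                    for (s, action), nxt in model.items() if s == state]
        tests += [seq for seq, _ in frontier]
    return tests

def run_test(actions):
    # Replay against the implementation, checking the model's prediction
    session, state = Session(), "logged_out"
    for a in actions:
        state = MODEL[(state, a)]
        session.apply(a)
        if session.state != state:
            return False
    return True

all_pass = all(run_test(t) for t in generate_tests(MODEL, "logged_out", 3))
```

Because the sequences are generated, not hand-written, the suite systematically exercises interactions between operations, which is the benefit the summary attributes to model-based testing.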
Today, organisations strive to implement efficient CI/CD for their products. However, one aspect is slowing them down: test automation. It is often difficult to justify this investment, and building a convincing business case for automation is not always easy. A comprehensive business case is crucial not only to ensure funding for automated testing, but also to enable business support for a testing transformation. In this session, Eran Kinsbruner, Chief Evangelist and Author at Perfecto, will present how to build a compelling business case for automation and the criteria needed for a successful transformation to automated testing. He will also focus on the key metrics that need to be baked into the ROI calculator, and cost-saving examples for implementing test automation while considering AI and ML capabilities for test creation and analysis.
This document summarizes a seminar presentation on software testing. It discusses:
- The importance of testing in finding errors and making software more reliable
- How testing consumes the largest effort in software development
- The key concepts of testing including test cases, test suites, errors, and failures
- The different levels of testing like unit, integration, system, and acceptance testing
- Techniques for white box, black box, and grey box testing based on knowledge of the internal workings
The Automation Firehose: Be Strategic and Tactical, by Thomas Haver (QA or the Highway)
The document discusses strategies for automating software testing. It emphasizes taking a risk-based approach to determine what to automate based on factors like frequency of use, complexity, and legal risk. The document provides recommendations for test automation best practices like treating automated test code like development code, using frameworks and tools to standardize coding practices, and prioritizing unit and integration testing over UI testing. It also discusses challenges that can arise with test automation like flaky tests, long test execution times, and keeping automation in sync with changing software. Metrics for measuring the effectiveness of test automation are presented, like test coverage, defect findings and trends, and time savings.
The document discusses various software testing techniques including white box testing and black box testing. It provides details on test cases, test suites, and testing conventional applications. Specifically:
- It describes white box and black box testing techniques, and explains that white box tests the implementation while black box tests only the functionality.
- It defines what a test case is and lists typical parameters for a test case like ID, description, test data, expected results. It provides an example test case.
- It explains that a test suite is a container that holds a set of tests and can be in different states. A diagram shows the relationship between test plans, test suites and test cases.
- It discusses unit testing.
PREDICT THE FUTURE: MACHINE LEARNING & BIG DATA (DotNetCampus)
Learn how to use Azure Machine Learning, a cloud service that lets companies, universities, research centres, and developers embed and exploit machine learning and predictive analytics capabilities over huge data sets in their applications. With Azure ML Studio we can create, test, deploy, and manage predictive analytics and machine learning solutions in the cloud from any web browser. The session gives a taste of this through a predictive-analytics example on Flight Delay.
The document provides an overview of Azure Machine Learning and discusses machine learning concepts. It begins with introducing the speaker and providing an agenda. It then defines machine learning and contrasts traditional programming with machine learning. Different types of learning methods like supervised and unsupervised learning are also introduced. Finally, it demonstrates the Azure Machine Learning workflow and some common machine learning algorithms available in Azure.
Enabling Automated Software Testing with Artificial Intelligence (Lionel Briand)
1. The document discusses using artificial intelligence techniques like machine learning and natural language processing to help automate software testing. It focuses on applying these techniques to testing advanced driver assistance systems.
2. A key challenge in software testing is scalability as the input spaces and code bases grow large and complex. Effective automation is needed to address this challenge. The document describes several industrial research projects applying AI to help automate testing of advanced driver assistance systems.
3. One project aims to develop an automated testing technique for emergency braking systems in cars using a physics-based simulation. The goal is to efficiently explore complex test scenarios and identify critical situations like failures to avoid collisions.
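The cited project uses far richer physics simulation and guided search; purely as an illustration of the idea, the sketch below scans a toy braking-scenario space (invented parameters) for combinations where the simulated car fails to stop in time:

```python
def min_gap(initial_gap, car_speed, brake_decel, reaction_time):
    # Simulate emergency braking with a fixed time step; return the
    # smallest gap to the obstacle (negative means a collision)
    gap, speed, t, dt = initial_gap, car_speed, 0.0, 0.01
    smallest = gap
    while speed > 0:
        if t >= reaction_time:
            speed = max(0.0, speed - brake_decel * dt)
        gap -= speed * dt
        smallest = min(smallest, gap)
        t += dt
    return smallest

# Scan the scenario space (gap in metres, speed in m/s) for critical
# situations where the car cannot avoid a collision
critical = [(gap, v)
            for gap in range(5, 50, 5) for v in range(5, 31, 5)
            if min_gap(gap, v, 8.0, 0.5) < 0]
```

A real testing technique would replace this exhaustive grid with guided search over a much larger, continuous scenario space, but the fitness signal (minimum gap) is the same kind of quantity.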
Similar to Towards the Third Wave of Software Testing
- It discusses unit testing and
PREDICT THE FUTURE , MACHINE LEARNING & BIG DATADotNetCampus
Scopri come utilizzare Azure Machine Learning, un servizio cloud che consente alle aziende, università, centri di ricerca e sviluppatori di incorporare e sfrutturare nelle loro applicazioni funzionalità di apprendimento automatico e analisi predittiva su enormi set di dati. Tramite Azure ML Studio possiamo creare, testare, attuare e gestire soluzioni di analisi predittiva e apprendimento automatico nel cloud tramite un qualunque web browser. Durante la sessione si darà un saggio attraverso un esempio di analisi predittiva sul Flight Delay.
The document provides an overview of Azure Machine Learning and discusses machine learning concepts. It begins with introducing the speaker and providing an agenda. It then defines machine learning and contrasts traditional programming with machine learning. Different types of learning methods like supervised and unsupervised learning are also introduced. Finally, it demonstrates the Azure Machine Learning workflow and some common machine learning algorithms available in Azure.
Enabling Automated Software Testing with Artificial IntelligenceLionel Briand
1. The document discusses using artificial intelligence techniques like machine learning and natural language processing to help automate software testing. It focuses on applying these techniques to testing advanced driver assistance systems.
2. A key challenge in software testing is scalability as the input spaces and code bases grow large and complex. Effective automation is needed to address this challenge. The document describes several industrial research projects applying AI to help automate testing of advanced driver assistance systems.
3. One project aims to develop an automated testing technique for emergency braking systems in cars using a physics-based simulation. The goal is to efficiently explore complex test scenarios and identify critical situations like failures to avoid collisions.
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
DevOps Consulting Company | Hire DevOps Servicesseospiralmantra
Spiral Mantra excels in providing comprehensive DevOps services, including Azure and AWS DevOps solutions. As a top DevOps consulting company, we offer controlled services, cloud DevOps, and expert consulting nationwide, including Houston and New York. Our skilled DevOps engineers ensure seamless integration and optimized operations for your business. Choose Spiral Mantra for superior DevOps services.
https://www.spiralmantra.com/devops/
WMF 2024 - Unlocking the Future of Data Powering Next-Gen AI with Vector Data...Luigi Fugaro
Vector databases are transforming how we handle data, allowing us to search through text, images, and audio by converting them into vectors. Today, we'll dive into the basics of this exciting technology and discuss its potential to revolutionize our next-generation AI applications. We'll examine typical uses for these databases and the essential tools
developers need. Plus, we'll zoom in on the advanced capabilities of vector search and semantic caching in Java, showcasing these through a live demo with Redis libraries. Get ready to see how these powerful tools can change the game!
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
🏎️Tech Transformation: DevOps Insights from the Experts 👩💻campbellclarkson
Connect with fellow Trailblazers, learn from industry experts Glenda Thomson (Salesforce, Principal Technical Architect) and Will Dinn (Judo Bank, Salesforce Development Lead), and discover how to harness DevOps tools with Salesforce.
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world’s leading open-source enterprise ERPs. This version has come up with significant improvements explained here in this blog. Also, this new version aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
The Comprehensive Guide to Validating Audio-Visual Performances.pdfkalichargn70th171
Ensuring the optimal performance of your audio-visual (AV) equipment is crucial for delivering exceptional experiences. AV performance validation is a critical process that verifies the quality and functionality of your AV setup. Whether you're a content creator, a business conducting webinars, or a homeowner creating a home theater, validating your AV performance is essential.
A Comprehensive Guide on Implementing Real-World Mobile Testing Strategies fo...kalichargn70th171
In today's fiercely competitive mobile app market, the role of the QA team is pivotal for continuous improvement and sustained success. Effective testing strategies are essential to navigate the challenges confidently and precisely. Ensuring the perfection of mobile apps before they reach end-users requires thoughtful decisions in the testing plan.
Orca: Nocode Graphical Editor for Container OrchestrationPedro J. Molina
Tool demo on CEDI/SISTEDES/JISBD2024 at A Coruña, Spain. 2024.06.18
"Orca: Nocode Graphical Editor for Container Orchestration"
by Pedro J. Molina PhD. from Metadev
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Why Apache Kafka Clusters Are Like Galaxies (And Other Cosmic Kafka Quandarie...Paul Brebner
Closing talk for the Performance Engineering track at Community Over Code EU (Bratislava, Slovakia, June 5 2024) https://eu.communityovercode.org/sessions/2024/why-apache-kafka-clusters-are-like-galaxies-and-other-cosmic-kafka-quandaries-explored/ Instaclustr (now part of NetApp) manages 100s of Apache Kafka clusters of many different sizes, for a variety of use cases and customers. For the last 7 years I’ve been focused outwardly on exploring Kafka application development challenges, but recently I decided to look inward and see what I could discover about the performance, scalability and resource characteristics of the Kafka clusters themselves. Using a suite of Performance Engineering techniques, I will reveal some surprising discoveries about cosmic Kafka mysteries in our data centres, related to: cluster sizes and distribution (using Zipf’s Law), horizontal vs. vertical scalability, and predicting Kafka performance using metrics, modelling and regression techniques. These insights are relevant to Kafka developers and operators.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Manyata Tech Park Bangalore_ Infrastructure, Facilities and Morenarinav14
Located in the bustling city of Bangalore, Manyata Tech Park stands as one of India’s largest and most prominent tech parks, playing a pivotal role in shaping the city’s reputation as the Silicon Valley of India. Established to cater to the burgeoning IT and technology sectors
Manyata Tech Park Bangalore_ Infrastructure, Facilities and More
Towards the Third Wave of Software Testing
Bestoun S. Ahmed (Ph.D. Software Engineering)
Department of Mathematics and Computer Science
Karlstad University, Sweden
Phone: +46 54 700 1861
Room: 21D 413
bestoun@kau.se
Docent Lecture, Karlstad, 8 April 2021
Computer Science
KARLSTAD UNIVERSITY
Department of Computer Science
Bestoun S. Ahmed, Ph.D.
Software Testing
• Formal definition (IEEE): Testing is an activity performed for evaluating product quality, and for improving it, by identifying defects and problems.
• Software failures can lead to disastrous consequences:
- Loss of data
- Loss of fortune
- Loss of lives
From Manual to Automated Testing
• Bringing automation to let the computer test your software.
• Manual testing flow: Requirements → Test Plan → Test Design → Test Cases → Test Execution on the System under Test → Test Results.
• Automated testing flow: the same pipeline, but a Capture/Replay Tool turns the Test Cases into Test Scripts that drive Test Execution on the System under Test.
Moving Forward to Model-Based Testing and AI
The second wave of testing
Automated Test Generation
• We want to try all possible inputs and see what happens, but exhaustive testing is impossible. So we want to be smart and try the important inputs.
• Software testing in principle: feed inputs to the software and measure the outcome against a goal of testing:
- Find faults
- Measure the coverage of paths, lines, structure, etc.
- Cover combinations of inputs
- Exercise the user behaviour
- Measure any quality characteristics
The Search Problem
• With the smallest set of inputs (test suite), we want to:
- Cover all combinations of t inputs (t-wise)
- Cover most of the possibilities of input categories
- Cover most of the code branches
- Confirm that a property works as the customer wants
- etc.
• Measures:
- How many faults did we find?
- How many branches or lines of code did we cover?
- How many combinations did we cover?
• Search and optimise: find a test suite that satisfies the goal of testing according to the measure.
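The "how many combinations we covered" measure can be made concrete with a minimal sketch. The parameter names and the test suite below are hypothetical, chosen only to illustrate counting pairwise (2-wise) combinations:

```python
from itertools import combinations

# Minimal sketch of one testing measure: the set of distinct 2-wise
# (pairwise) parameter-value combinations a given test suite covers.
# Parameter names and tests are hypothetical.

def pairwise_coverage(suite):
    """Return the distinct (param, value) pairs exercised by the suite."""
    covered = set()
    for test in suite:
        for a, b in combinations(sorted(test.items()), 2):
            covered.add((a, b))
    return covered

suite = [
    {"os": "linux", "browser": "firefox", "net": "wifi"},
    {"os": "linux", "browser": "chrome", "net": "4g"},
]
print(len(pairwise_coverage(suite)), "pairwise combinations covered")
```

Each three-parameter test contributes three pairs, so this measure grows with input diversity rather than raw suite size.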
The Measure (Objective Function)
• If we can measure a goal of testing, we can treat it as an objective function to be satisfied while we generate a test suite.
• This is the basis of automated test generation.
• We can use heuristics to search for the best solution automatically.
• The loop: search, then check whether the goal of testing is satisfied; if not, keep searching; if yes, return the best achieved test suite.
How It Works
• Generate a random test suite from the space of all possible test cases.
• Score each test case with the fitness function (i.e., the measure).
• Update the test suite based on the best test suite achieved so far; the update mechanism comes from the metaheuristic being used.
• Iterate until no better test suite can be found.
• Over iterations I, I+1, …, the search generates a random test set of inputs (In1, In2, …, Int), manipulates it according to the best achievement so far, and at the final iteration returns the best achieved test suite.
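The iterative loop above can be sketched as a simple greedy random search. This is a hedged illustration, not the lecture's metaheuristic: the parameter model, the pairwise-coverage objective, and the greedy update strategy are all illustrative assumptions:

```python
import itertools
import random

# Sketch of the search loop: grow a small test suite until the measure
# (pairwise coverage of a hypothetical configurable system) is satisfied.

PARAMS = {  # hypothetical system-under-test parameters
    "os": ["linux", "windows", "mac"],
    "browser": ["firefox", "chrome"],
    "net": ["wifi", "4g"],
}

def pairs_covered(test):
    """All pairwise (param, value) combinations one test case exercises."""
    return {frozenset(p) for p in itertools.combinations(sorted(test.items()), 2)}

def all_pairs():
    """The goal of testing: every pairwise combination to be covered."""
    target = set()
    for k1, k2 in itertools.combinations(sorted(PARAMS), 2):
        for v1, v2 in itertools.product(PARAMS[k1], PARAMS[k2]):
            target.add(frozenset([(k1, v1), (k2, v2)]))
    return target

def random_test():
    return {k: random.choice(v) for k, v in PARAMS.items()}

def generate_suite(candidates_per_step=30, seed=0):
    random.seed(seed)
    remaining = all_pairs()
    suite = []
    while remaining:
        # Fitness: how many not-yet-covered pairs a candidate test hits.
        best = max((random_test() for _ in range(candidates_per_step)),
                   key=lambda t: len(pairs_covered(t) & remaining))
        gain = pairs_covered(best) & remaining
        if gain:  # keep only tests that improve the best achievement so far
            suite.append(best)
            remaining -= gain
    return suite

suite = generate_suite()
print(f"{len(suite)} tests cover all {len(all_pairs())} pairs")
```

Note the contrast with exhaustive testing: the full input space here has 3 × 2 × 2 = 12 configurations, and the search typically finds a smaller suite that still covers every pair.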
New Testing Methodologies (1)
• Using code structure to generate the test cases.
• Mutation testing to generate test cases.
• Process testing to consider path-based test selection.
• Business model testing.
• Avocado: a fully automated test framework.
Automation in GUI Testing
• Automation via model-based testing.
• Model the event interaction between the user and the GUI application.
• In the graph model, we try to capture all executable event sequences of the GUI under test by modelling all possible events using an event flow graph (EFG).
• Events that open or close menus or open windows are included even though they are not actually testing events; they only lead to the actual events of interest for testing.
• To overcome this issue, a refined GUI model called the event interaction graph (EIG) has recently been developed.
Ref. Bestoun S. Ahmed, Mouayad A. Sahib, and Moayad Y. Potrus, "Generating Combinatorial Test Cases Using Simplified Swarm Optimization (SSO) Algorithm for Automated GUI Functional Testing", Engineering Science and Technology, an International Journal, 17 (4), Pages 218-226, (2014), Elsevier.
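The EFG idea can be illustrated with a toy model. This is a minimal sketch with hypothetical GUI events, not the paper's SSO algorithm: each node is an event, an edge e1 → e2 means e2 can be executed immediately after e1, and test sequences are paths through the graph:

```python
# Toy event flow graph (EFG): GUI event -> events executable next.
# Events and structure are hypothetical.
EFG = {
    "open_menu":  ["click_save", "click_load", "close_menu"],
    "click_save": ["open_menu"],
    "click_load": ["open_menu"],
    "close_menu": ["open_menu"],
}

def sequences(efg, start, length):
    """Enumerate all event sequences of the given length starting at `start`."""
    paths = [[start]]
    for _ in range(length - 1):
        paths = [p + [nxt] for p in paths for nxt in efg[p[-1]]]
    return paths

for seq in sequences(EFG, "open_menu", 3):
    print(" -> ".join(seq))
```

Even this tiny graph shows the issue raised above: "open_menu" and "close_menu" appear in every sequence although they only navigate, which is what the EIG refinement aims to factor out.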
Automation in Mobile App Testing
• Using a reverse engineering approach to generate a UI model for mobile apps.
• Most of these tools are based on dynamic approaches, where an application is analysed at run-time to extract information.
• Example tools: Android GUITAR, Android GUI Ripper, MCrawlT, and test automation systems.
• Most of these tools are pure black-box techniques that perform dynamic analysis of applications.
Ref. Ibrahim-Anka Salihu, Rosziati Ibrahim, Bestoun S. Ahmed, Kamal Z. Zamli, and Asmau Usman, "AMOGA: A Static-Dynamic Model Generation Strategy for Mobile Apps Testing", IEEE Access, 2019, Vol. 7, Issue 1, PP. 17158-17173.
Automation in Web App Testing
• Using exploratory testing as a replacement for crawlers to construct a model of the web app.
• An exploration of the parts of the SUT that have not been previously explored and tested.
• Tapir framework: uses model reconstruction of the system under test.
• Tapir Browser Extension: tracks a tester's activity in the SUT and sends the required information to the Tapir headquarters (HQ) component.
• Tapir HQ: implemented as a standalone web application that guides the tester through the SUT and provides navigational test cases.
• Tapir Analytics: enables the visualisation of the current state of the SUT model.
Ref. Miroslav Bures, Karel Frajtak, Bestoun S. Ahmed, "Tapir: Automation Support of Exploratory Testing Using Model Reconstruction of the System Under Test", IEEE Transactions on Reliability, Volume: 67, Issue: 2, Pages: 557-580, (2018).
Automation in Smart TV App Testing
• There is an urgent need for a testing framework for smart TV apps.
• The framework constructs a model for a given smart TV app cumulatively by exercising the user interface and extracting runtime information through remote-control interaction with the app.
• A black-box reverse engineering stage is used to generate a comprehensive model of the app under test.
• Using an FSM and directed graphs as a model to represent the widgets and events on the GUI leads to a graph with many nodes and edges.
Ref. Bestoun S. Ahmed and Miroslav Bures, "EvoCreeper: Automated Black-Box Model Generation for Smart TV Applications", accepted for publication in IEEE Transactions on Consumer Electronics, 2019.
Automated Smart TV App Testing: Mega and Sub-Models
• The mega model is the first model constructed by the automated crawler using the remote-control device.
• Sub-models are generated automatically based on user customisation, separating each sub-model from the mega model.
• This supports automatic model-based prioritisation.
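One way to picture separating a sub-model from the mega model is as extracting the subgraph reachable from a user-selected entry state. This is a hedged sketch with a hypothetical state graph, not the EvoCreeper implementation:

```python
# Toy mega model: directed graph mapping each GUI state of a smart TV app
# to the states reachable via one remote-control event. States are hypothetical.
MEGA = {
    "home":     ["movies", "settings"],
    "movies":   ["player", "home"],
    "player":   ["movies"],
    "settings": ["home"],
    "apps":     ["home"],  # not reachable from "movies", so excluded below
}

def sub_model(mega, entry):
    """Subgraph of all states reachable from `entry` (depth-first search)."""
    seen, stack = set(), [entry]
    while stack:
        state = stack.pop()
        if state not in seen:
            seen.add(state)
            stack.extend(mega.get(state, []))
    # Keep only edges whose endpoints both survive in the sub-model.
    return {s: [t for t in mega.get(s, []) if t in seen] for s in seen}

print(sorted(sub_model(MEGA, "movies")))
```

A sub-model produced this way is smaller than the mega model, which is what makes model-based prioritisation of the customised part tractable.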
Moving Forward from Application to System-Level Testing
Towards the third wave of testing
IoT System Testing
• Testing at scale.
• The need for new metrics to be considered for testing.
• What are the quality attributes to be considered?
• The relation between the software application and the system-level attributes.
98. Computer Science
KARLSTAD UNIVERSITY
Department of Computer Science
Bestoun S. Ahmed, Ph.D.
Data Science and Software Engineering
• Data-centric challenges:
• Incomplete Data Gathering
• Inconsistent Data Storage
• Misleading Decision Making
• Identifying anomality in data
30
Software development lifecycle
99. Computer Science
KARLSTAD UNIVERSITY
Department of Computer Science
Bestoun S. Ahmed, Ph.D.
Data Science and Software Engineering
• Data-centric challenges:
• Incomplete Data Gathering
• Inconsistent Data Storage
• Misleading Decision Making
• Identifying anomalies in data
Figure: software development lifecycle
• ML-centric challenges:
• ML model quality
• Correctness verification.
• Model degradation
• Test generation for ML
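As a toy illustration of the last data-centric challenge (not from the lecture), one simple way to flag anomalies in gathered data is a z-score filter; the threshold below is illustrative and would need tuning for real data.

```python
import statistics

def anomalies(values, threshold=2.0):
    """Return values whose z-score magnitude exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]


readings = [10.1, 9.8, 10.0, 10.2, 9.9, 55.0]  # one corrupted sensor reading
print(anomalies(readings))  # → [55.0]
```

In a data-driven pipeline such a check would sit between data gathering and decision making, so that misleading values are caught before they influence the model.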
100. AI Engineering
• How to engineer the development process?
• Seeing AI systems as software systems in production.
• Following the same engineering approach for development.
Fig. 4. Research agenda for AI engineering
*Graph Source: J. Bosch, I. Crnkovic, H. Holmström Olsson, “Engineering AI Systems: A Research Agenda,” Artificial Intelligence Paradigms for Smart Cyber-Physical Systems
101. AI Engineering and Software Testing
*Graph Source: Machine Learning in Production, https://ckaestne.github.io/seai/
105. AI Engineering and Software Testing
*Graph Source: Machine Learning in Production, https://ckaestne.github.io/seai/
Classical V&V for software systems
108. Our Vision
• Data verification and training-data evolvability.
• ML decision-making correctness and algorithm testing.
• Testing for ML model degradation.
• Training-data evolvability: test the training data against the model in use.
• Quality of the data: insufficient data, irrelevant features, non-representative training data, overfitting, under-fitting, outliers.
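Two of the data-quality issues listed above, insufficient data and non-representative training data, lend themselves to simple automated gates before (re)training. This is a hypothetical sketch with illustrative thresholds, not a method from the lecture:

```python
from collections import Counter

def data_quality_report(labels, min_samples=100, max_class_share=0.9):
    """Return quality warnings for a labelled training set (thresholds illustrative)."""
    warnings = []
    if len(labels) < min_samples:
        warnings.append("insufficient data")
    counts = Counter(labels)
    if counts and max(counts.values()) / len(labels) > max_class_share:
        warnings.append("non-representative class balance")
    return warnings


# 50 samples, 49 of one class: both checks fire.
labels = ["ok"] * 49 + ["faulty"]
print(data_quality_report(labels))  # → ['insufficient data', 'non-representative class balance']
```

Gates like these make data verification a repeatable test step in the pipeline rather than a one-off manual inspection.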
109. Evolution of Test Automation
• Testing tools and technologies are evolving very fast.
• The systems to be tested will change; new techniques are being developed for testing:
- cloud services, edge computing, …
- data-driven systems
- machine-learning systems
• Cloud testing will be used more as a cheap, easily parallelised resource.
• Security testing becomes more critical.
• More use of dashboards for test diagnosis and test reuse.
• Testing at internet scale: building systems of systems and distributed systems.
*Graph Source: https://www.testim.io/blog/ai-transforming-software-testing/