9. What Makes AI Relevant Today? (2/2)
• YouTube has 2 billion monthly users and 800 million daily active users.
• 500 hours of video are uploaded to YouTube every minute.
• Twitter has 145 million daily active users generating 500 million tweets per day.
• Instagram has 500 million daily users, with 1,000 photos uploaded per second.
• Facebook has 1.84 billion daily users, with 184 million uploading photos and text.
• Swiggy clocks 1.4 million food orders daily.
• Netflix has 200 million subscribers watching 164 million hours of content per day.
12. Personality Embeddings (3/3)
1. We can represent people (and things) as vectors of numbers (which is great for machines!).
2. We can easily calculate how similar vectors are to each other.
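A minimal sketch of both ideas, using hypothetical "personality" trait vectors and plain-Python cosine similarity (the names and scores are illustrative, not from any real dataset):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical people, each represented as a vector of trait scores (0-1).
alice = [0.9, 0.1, 0.8]
bob   = [0.8, 0.2, 0.9]
carol = [0.1, 0.9, 0.2]

print(cosine_similarity(alice, bob))    # close to 1: similar profiles
print(cosine_similarity(alice, carol))  # much lower: dissimilar profiles
```

Real embedding models (e.g. those behind GPT) work the same way, just with hundreds or thousands of dimensions instead of three.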
13. What AI can do
Correlation between parameters – how strongly or weakly are parameters related?
Predict ahead in time – what is the next value at time ‘t’ in the future?
Predict a value from other values – this is like solving an equation.
Find problems in the data as a whole – anomalies – for example, fraudulent credit card transactions.
Predict / generate the next word.
Compare words, sentences, paragraphs, and documents.
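The first capability above – "how strongly are parameters related?" – can be sketched with a plain-Python Pearson correlation coefficient (the data here is made up for illustration):

```python
def pearson(xs, ys):
    """Pearson correlation: +1 = strong positive, -1 = strong negative, 0 = none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data: two parameters that rise together.
temperature     = [20, 22, 25, 27, 30]
ice_cream_sales = [110, 130, 160, 175, 200]
print(pearson(temperature, ice_cream_sales))  # near +1: strongly related
```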
14. Generative AI
Text-based – GPT, Llama 2 (stories, movies, etc.)
Image-based – DALL-E, Midjourney
15. What is AI in Testing?
• AI in the context of software testing.
• How AI can mimic human-like testing activities, including test case generation, execution, and analysis.
Test Case Generation
Requirements
Test Analysis
Test Data
Test Execution
Mimic Users
16. Types of AI in Testing
• Different types of AI techniques used in testing, such as machine learning, natural language processing,
and computer vision.
• How each type can be applied in testing scenarios.
Machine Learning (ML) in Testing:
Example: Anomaly Detection in Log Files
Natural Language Processing (NLP) in Testing:
Example: Test Case Natural Language Understanding
Computer Vision in Testing:
Example: UI Testing and Visual Regression Testing
17. Benefits of AI in Testing
• Improved test coverage
• Faster test execution
• Early defect detection
• Reduction in testing costs
• Enhanced test case generation
• Continuous testing capabilities
18. Challenges and Limitations
• Lack of domain knowledge
• Data quality issues
• Initial setup and training efforts
• Ethical concerns
19. Future Trends
• AI for test data generation
• AI-powered test automation
• AI-driven test prioritization
• AI in security testing
20. Best Practices
• Start with a clear strategy
• Select the right AI tools and technologies
• Invest in training and upskilling
• Continuously monitor and adapt your AI testing strategy
21. Ethical Considerations
1. Bias and Fairness:
1. Issue: AI models can inherit biases from training data, leading to unfair treatment of certain groups.
2. Mitigation: Regularly audit and retrain models to minimize biases. Use diverse and representative training data.
2. Privacy:
1. Issue: AI may inadvertently expose sensitive user data during testing.
2. Mitigation: Ensure that AI testing tools comply with data protection regulations (e.g., GDPR). Anonymize or
pseudonymize data whenever possible.
3. Transparency and Accountability:
1. Issue: AI decision-making can be opaque, making it challenging to understand how and why certain testing
outcomes occur.
2. Mitigation: Strive for transparency in AI algorithms and decision-making processes. Maintain clear documentation
of AI testing methodologies.
4. Data Privacy and Security:
1. Issue: Collecting, storing, and managing test data may expose vulnerabilities.
2. Mitigation: Implement robust data security measures, including encryption and access controls, to protect
sensitive testing data.
5. Job Displacement:
1. Issue: Widespread adoption of AI in testing may raise concerns about job displacement for manual testers.
2. Mitigation: Focus on upskilling testers to work alongside AI tools, emphasizing critical thinking, test strategy, and
ethical considerations.
6. Adherence to Regulations:
1. Issue: Failure to comply with industry-specific regulations and standards can lead to legal and financial
repercussions.
2. Mitigation: Stay informed about relevant industry regulations and ensure that AI testing practices align with
compliance requirements.
7. Algorithmic Accountability:
1. Issue: When AI systems make mistakes, it can be challenging to assign responsibility.
22. Conclusion
AI is a transformative force that is reshaping the way we ensure software quality.
Embracing AI in testing is not just a choice; it is a strategic imperative for organizations seeking to deliver high-quality software efficiently and competitively.
Editor's Notes
Definition: AI in test case generation involves using machine learning algorithms to automatically create test cases based on the analysis of code, requirements, and historical testing data.
Explanation: AI can analyze code and identify potential edge cases, boundary conditions, and combinations of inputs that humans might overlook. For instance, consider a banking application. AI can generate test cases that cover scenarios like testing credit card transactions with different currencies, various account types, and complex interest rate calculations.
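One piece of the banking example above – systematically covering combinations of currencies, account types, and boundary amounts – can be sketched without any ML at all, just combinatorial enumeration (all dimension names and values here are illustrative; an ML-driven generator would additionally learn which combinations are worth prioritizing):

```python
import itertools

# Hypothetical input dimensions for a banking transaction test.
currencies    = ["USD", "EUR", "INR"]
account_types = ["savings", "current"]
amounts       = [0.01, 100.00, 999999.99]  # boundary-style values

def generate_test_cases():
    """Enumerate every combination of the input dimensions as one test case each."""
    return [
        {"currency": c, "account": a, "amount": m}
        for c, a, m in itertools.product(currencies, account_types, amounts)
    ]

cases = generate_test_cases()
print(len(cases))  # 3 * 2 * 3 = 18 combinations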
Definition: AI in test execution refers to the use of automation tools enhanced by AI capabilities to execute test scripts and detect defects.
Explanation: AI-powered test automation tools can simulate user interactions with the software, mimicking human clicks, inputs, and navigation. These tools can also adapt to changes in the user interface, making them efficient for continuous testing. For example, an e-commerce website can use AI-driven test automation to simulate user journeys, ensuring that products can be added to the cart, payment processed, and orders tracked seamlessly.
Definition: AI in test analysis involves using machine learning and natural language processing to analyze test results and identify patterns, anomalies, and potential issues.
Explanation: When executing a large number of test cases, AI can quickly sift through the results to pinpoint areas that require attention. For instance, consider a healthcare application. AI can analyze test data and identify patterns of incorrect patient records, suggesting potential data integrity issues that might go unnoticed in a manual review.
Application: ML can be used to analyze log files generated during the testing and operation of software applications. By learning normal patterns and behaviors, ML models can detect anomalies or unusual events that may indicate software defects or security breaches. For instance, ML can identify irregular spikes in server response times, indicating potential performance issues.
Explanation: ML algorithms can learn from historical log data and create a baseline for normal system behavior. When new log data is analyzed, any deviations from the baseline can trigger alerts for further investigation.
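The baseline-and-deviation idea above can be sketched with a simple z-score check: learn the mean and standard deviation of normal response times, then flag new values that fall far outside them (the numbers are illustrative; a production system would use a richer model):

```python
import statistics

def detect_anomalies(baseline, new_values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [v for v in new_values if abs(v - mean) / stdev > threshold]

# Baseline: normal server response times in ms, learned from historical logs.
baseline = [100, 105, 98, 102, 110, 95, 103, 99, 101, 107]
new_log  = [104, 99, 450, 101]  # 450 ms is a suspicious spike

print(detect_anomalies(baseline, new_log))  # → [450]
```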
Application: NLP can be used to interpret and analyze natural language test cases written by testers. Test case descriptions and requirements often contain ambiguities and variations in language. NLP can help standardize and understand these test cases for automated testing.
Explanation: NLP algorithms can parse test case descriptions, extract key information, and convert them into machine-readable formats. This enables test automation tools to execute test cases more accurately based on the natural language input.
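A toy version of the parsing step described above: extracting precondition, action, and expected result from a Given/When/Then style description. Real NLP tooling uses learned language models rather than a regular expression; this pattern is only an illustration of the "natural language in, machine-readable structure out" idea:

```python
import re

def parse_test_case(description):
    """Split a Given/When/Then test case description into structured fields."""
    pattern = re.compile(
        r"Given (?P<precondition>.+?),?\s*when (?P<action>.+?),?\s*then (?P<expected>.+)",
        re.IGNORECASE,
    )
    match = pattern.search(description)
    return match.groupdict() if match else None

case = "Given a logged-in user, when they add an item to the cart, then the cart count increases by 1"
print(parse_test_case(case))
```

The resulting dictionary (precondition / action / expected) is the kind of machine-readable format a test automation tool can act on.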
Application: Computer vision can be applied to user interface (UI) testing, where AI algorithms "see" and interact with the application's graphical elements. It's used for comparing screenshots to detect visual differences (Visual Regression Testing) and automating UI interactions.
Explanation: Computer vision algorithms can locate UI elements such as buttons, text fields, and images on a screen, and then perform actions like clicking, typing, or swiping. Visual Regression Testing compares expected and actual screenshots to identify visual discrepancies that may result from code changes.
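The screenshot-comparison core of Visual Regression Testing can be sketched as a per-pixel diff. Real tools operate on actual screenshot images and use perceptual comparison; here the "screenshots" are tiny 2-D lists of grayscale values purely for illustration:

```python
def visual_diff_ratio(expected, actual):
    """Fraction of pixels that differ between two equally sized grayscale
    'screenshots' (represented here as 2-D lists of 0-255 values)."""
    total = diffs = 0
    for row_e, row_a in zip(expected, actual):
        for px_e, px_a in zip(row_e, row_a):
            total += 1
            if px_e != px_a:
                diffs += 1
    return diffs / total

baseline_shot = [[255, 255], [0, 0]]   # screenshot from the last good build
current_shot  = [[255, 200], [0, 0]]   # one pixel changed after a code change

THRESHOLD = 0.01  # tolerate up to 1% pixel noise (illustrative value)
ratio = visual_diff_ratio(baseline_shot, current_shot)
print(ratio)  # 0.25: 1 of 4 pixels differs
if ratio > THRESHOLD:
    print("Visual regression detected")
```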