This document summarizes a presentation on high-volume automated testing (HiVAT). Cem Kaner and Carol Oliver present techniques for HiVAT, including examples implemented in Ruby. They describe three HiVAT techniques: functional equivalence testing, long-sequence regression testing, and a more flexible HiVAT architecture. The presentation covers the basic ingredients needed for HiVAT, examples of the techniques, and ideas for making HiVAT work in practice.
Static Analysis Techniques for Testing Application Security - Houston Tech Fest (Denim Group)
Static analysis of software refers to examining source code and other software artifacts without executing them. This presentation looks at how these techniques can be used to identify security defects in applications. The approaches examined range from simple keyword searches used to identify calls to banned functions to more sophisticated data-flow analysis used to identify more complicated issues such as injection flaws. In addition, two freely available static analysis tools are demonstrated: FxCop and the beta version of Microsoft’s XSSDetect tool. Finally, some approaches are presented on how organizations can start using static analysis tools as part of their development and quality assurance processes.
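The simplest of these approaches, a keyword search for calls to banned functions, can be sketched in a few lines of Python. This is a toy illustration with a made-up banned list, not how FxCop works internally:

```python
import re

# Hypothetical banned list: classic unbounded-copy C functions.
BANNED = {"strcpy", "strcat", "sprintf", "gets"}

def find_banned_calls(source: str):
    """Return (line_number, function) pairs for calls to banned functions."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func in BANNED:
            # A call looks like the function name followed by "(".
            # \b prevents matching e.g. "sprintf" inside "snprintf".
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func))
    return findings

c_code = """\
char buf[16];
strcpy(buf, user_input);
snprintf(buf, sizeof buf, "%s", user_input);
"""
print(find_banned_calls(c_code))
```

Real tools go far beyond this, but the word-boundary detail already shows why even "simple" keyword search needs care.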
Zuci Systems, with its proprietary testing products ZUBOT and SHABD, provides software quality assurance through lean and agile regression testing and automated software testing, with continuous testing across iterative and waterfall methodologies.
The importance and seriousness of software testing is well known. Much has been discussed and evaluated, and the bottom line is that mistakes generally happen because humans tend to overlook possibilities and probable errors. Software that is not tested with due seriousness can lead to major blunders, putting the clients of the system in a tight spot and resulting in nightmares of sorts. It is therefore only prudent to analyze and predict errors and make timely corrections, to avoid embarrassing situations later and to deliver a stable, reliable system.
As with many other things, there are myths surrounding software testing services, but facts remain facts.
Read More At: http://softwaretestingsolution.com/blog/the-myths-and-facts-surrounding-software-testing/
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters.
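A minimal sketch of the idea, using a naive greedy pair-covering algorithm over made-up parameters (real pairwise tools use more refined strategies):

```python
from itertools import combinations, product

# Hypothetical parameters for a download dialog (made-up example).
params = {
    "os": ["windows", "linux", "macos"],
    "browser": ["chrome", "firefox"],
    "network": ["wifi", "ethernet"],
}
names = list(params)
all_tests = list(product(*params.values()))

def pairs_of(test):
    """All two-parameter value combinations a single test exercises."""
    return {frozenset(p) for p in combinations(zip(names, test), 2)}

# Every value pair that must appear in at least one test.
required = set().union(*(pairs_of(t) for t in all_tests))

# Greedy cover: repeatedly pick the test adding the most uncovered pairs.
covered, suite = set(), []
while covered != required:
    best = max(all_tests, key=lambda t: len(pairs_of(t) - covered))
    suite.append(dict(zip(names, best)))
    covered |= pairs_of(best)

print(len(all_tests), "exhaustive tests,", len(suite), "pairwise tests")
```

Even on this tiny example the pairwise suite is half the size of the exhaustive one; the gap grows rapidly as parameters and values are added.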
The talk focuses on a static code analysis tool, currently under development, aimed specifically at finding vulnerabilities rather than the typical programming errors targeted by the most popular code analysis tools such as Coverity or Klocwork. Along the way it covers all the background needed to understand how these tools work, the difference between tools that look for bugs and tools that look for vulnerabilities, and what the speaker considers a fundamental point: making this kind of tool interactive.
The talk closes with a short demo of the current tool and some bugs/vulnerabilities found with it.
EuroSTAR Software Testing Conference 2009 presentation on Incremental Scenario Testing by Mattias Ratert. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
'Acceptance Test Driven Development Using Robot Framework' by Pekka Klärck & ... (TEST Huddle)
Acceptance test driven development (ATDD) is an important agile practice merging requirement gathering with acceptance testing. At its core are concrete examples, created together with the team, that provide collaborative understanding and, as automated acceptance tests, make sure that the features are implemented correctly. There are many ways to create ATDD examples/tests, and the behavior driven development (BDD) style with the Given-When-Then format is one of the more popular ones.
Robot Framework is an open source test automation framework suitable for ATDD and acceptance testing in general. It has a flexible test data syntax that supports keyword-driven, data-driven, and BDD styles, but is still simple enough that even non-programmers can create and understand test cases. The simple test library API makes extending the framework easy, and there are several ready-made libraries that allow testing generic interfaces such as web, databases, Swing, SWT, Windows GUIs, Flex, and SSH out of the box.
This presentation gives an introduction to both ATDD and Robot Framework. It includes several demonstrations, and all the material will be freely available after the presentation.
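The "simple test library API" mentioned above amounts to writing a plain Python class whose public methods become keywords. A minimal sketch, with a made-up calculator as the system under test:

```python
# A hypothetical system under test.
class Calculator:
    def __init__(self):
        self.result = 0

    def add(self, n):
        self.result += n

# A minimal Robot Framework test library: each public method becomes
# a keyword, e.g. "Add Number" and "Result Should Be".
class CalculatorLibrary:
    def __init__(self):
        self._calc = Calculator()

    def add_number(self, number):
        # Robot Framework passes arguments as strings by default.
        self._calc.add(int(number))

    def result_should_be(self, expected):
        if self._calc.result != int(expected):
            raise AssertionError(
                f"Expected {expected}, got {self._calc.result}")
```

In a `.robot` test-data file these keywords would then appear as plain steps such as `Add Number    2` followed by `Result Should Be    2`.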
Continuous Automated Regression Testing to the Rescue (TechWell)
A major concern when developing new software features is that another part of the code will be affected in unexpected ways. With a typical development process, testers often do not run a full set of product regression tests until late in the release, when it is much more costly to fix and retest the product. Continuous automated regression testing to the rescue! Brenda Kise describes the value and benefits, for the team, the project, and the organization, of continuously performing automated regression tests throughout the development process. Brenda explains how this practice saves time and money in the long run because the team and stakeholders gain an ongoing understanding of the quality of the code base every time a new build becomes available. She describes different approaches for introducing continuous automated regression testing into your organization. Find out how to create an immediate feedback mechanism that highlights new code that creates regression defects.
Testing: Fundamentals of Testing (Mazenet Solution)
For YouTube videos: bit.do/sevents
Topics covered: why testing is necessary, the fundamental test process, the psychology of testing, re-testing and regression testing, expected results, and prioritisation of tests.
Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software reliability is also an important factor affecting system reliability. The high complexity of software is the major contributing factor in software reliability problems.
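This definition is often made concrete with the textbook exponential model R(t) = e^(-λt), where λ is a constant failure rate. This is an illustrative model, not part of the original text:

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """Probability of failure-free operation for duration t,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * t)

# E.g. with 0.001 failures per hour over a 100-hour mission,
# R(100) = e^(-0.1), roughly 0.905.
r = reliability(0.001, 100)
print(f"{r:.3f}")
```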
The ultimate objective of a DevOps approach is to deliver quality products to your customers as efficiently as possible. DevOps shops that have achieved this state point to continuous testing as a key contributor to their success. However, QA and testing have become forgotten orphans in many organizations' DevOps journeys. Even among groups that have incorporated testing, many have a release cadence that resembles waterfall more than anything else. The culprit is often an inability to incorporate stable automation into their testing practices.
In this talk, Lee will discuss how organizations can address these issues, and move towards continuous testing within their DevOps practices. Specifically, the discussion will touch on key practices and methods for implementing effective test automation in a DevOps pipeline including test suite & test scope, automation approach/methods, and test environment and data management.
The Art of Testing Less without Sacrificing Quality @ ICSE 2015 (Kim Herzig)
Testing is a key element of software development processes for the management and assessment of product quality. In most development environments, software engineers are responsible for ensuring the functional correctness of code. However, for large, complex software products, there is an additional need to check that changes do not negatively impact other parts of the software and that they comply with system constraints such as backward compatibility, performance, and security. Ensuring these system constraints may require complex verification infrastructure and test procedures. Although such tests are time-consuming, expensive, and rarely find defects, they act as an insurance process to ensure the software is compliant. However, long-running tests increasingly conflict with strategic aims to shorten release cycles. To decrease production costs and to improve development agility, we created a generic test selection strategy called THEO that accelerates test processes without sacrificing product quality. THEO is based on a cost model that dynamically skips tests when the expected cost of running a test exceeds the expected cost of removing it. We replayed past development periods of three major Microsoft products, resulting in a 50% reduction in test executions and saving millions of dollars per year while maintaining product quality.
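The skip-or-run decision at the heart of such a cost model can be sketched as follows. This is a simplified reading of the abstract, with invented numbers, not Microsoft's actual model:

```python
def should_skip(exec_cost: float, failure_prob: float,
                escape_cost: float) -> bool:
    """Skip a test when the expected cost of running it exceeds the
    expected cost of the defects it would have caught escaping.

    exec_cost    -- cost of one execution (machine time, delay)
    failure_prob -- estimated probability the test finds a real defect
    escape_cost  -- cost if that defect ships undetected
    """
    return exec_cost > failure_prob * escape_cost

# A cheap smoke test that fails often: expected skip cost 0.10 * 100 = 10,
# which exceeds its $1 run cost, so we keep running it.
print(should_skip(exec_cost=1.0, failure_prob=0.10, escape_cost=100.0))   # False

# An expensive compliance test that almost never fails: expected skip
# cost 0.001 * 10000 = 10, far below its $500 run cost, so skip it.
print(should_skip(exec_cost=500.0, failure_prob=0.001, escape_cost=10000.0))  # True
```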
Automated Software Testing Framework Training (Quontra Solutions)
Learn through Experience -- We differentiate our training and development program by delivering Role-Based training instead of Product-based training. Ultimately, our goal is to deliver the best IT Training to our clients.
In this training, attendees learn:
Introduction to Automation
• What is automation
• Advantages and disadvantages of automation
• Different types of Automation Tools
• What to automate in projects
• When to start automation; scope for automation testing in projects
• About open-source automation tools
Introduction to Selenium
• What is Selenium
• Why Selenium
• Advantages and disadvantages of Selenium
Selenium components
• Selenium IDE
• Selenium RC
• Selenium WebDriver
• Selenium Grid
Selenium IDE
• Introduction to IDE
• IDE Installation
• Installation and uses of Firepath, Firebug & Debug bar
• Property & value of elements
• Selenium commands
• Assertions & Verification
• Running, pausing and debugging script
• Disadvantages of selenium IDE
• How to convert Selenium IDE scripts into other languages
Locators
• Tools to identify elements/objects
• Firebug
• IE Developer tools
• Google Chrome Developer tools
• Locating elements by ID
• Locating elements by name
• Locating elements by link text
• Locating elements by XPath
• Locating elements by CSS
• Summary
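All of these locator strategies resolve to finding nodes in the page's document tree. The idea can be illustrated with Python's standard-library XML parser on a toy page; Selenium performs the equivalent lookups in a live browser:

```python
import xml.etree.ElementTree as ET

# A made-up page fragment standing in for a real web page.
page = ET.fromstring("""
<html><body>
  <input id="user" name="username"/>
  <a href="/help">Help Center</a>
</body></html>
""")

# By ID / by name: attribute predicates (ElementTree supports a
# limited XPath subset).
by_id = page.find(".//*[@id='user']")
by_name = page.find(".//input[@name='username']")

# By link text: filter anchor elements on their text content.
by_link_text = next(a for a in page.iter("a") if a.text == "Help Center")

print(by_id.get("name"), by_link_text.get("href"))
```

The trade-offs the course covers (IDs are stable, link text is readable, XPath and CSS are the most expressive) all follow from how these lookups walk the tree.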
Selenium RC
• What is Selenium RC
• Advantages of RC; architecture
• What is Eclipse/IntelliJ; configuring Selenium RC with Eclipse/IntelliJ
• Creating, running & debugging RC scripts
Java Concepts
• Introduction to OOP concepts and Java
• Installation: Java, Eclipse/IntelliJ, Selenium, TestNG/JUnit
• Operators in Java
• Data types in Java
• Conditional statements in Java
• Looping statements in Java
• Output statements in Java
• Classes & Objects
• Collection Framework
• Regular Expressions
• Exception Handling
• Packages, Access Specifiers /Modifiers
• String handling
• Log4J for logging
Selenium Web Driver with Java
• Introduction to WebDriver
• Advantages
• Differences between RC and WebDriver
• Selenium WebDriver commands
• Generating scripts in Eclipse/IntelliJ; running test scripts
• Debugging Test Script
• Database Connections
• Assertions, validations
• Working with Excel
• Pass the data from Excel
• Working with multiple browsers
• Window Handling, Alert/confirm & Popup Handling
• Mouse events
• Wait mechanism
• Rich web handling: calendar handling, auto-suggest, Ajax, browser forward/back navigation, keyboard events, certificate handling, event listeners
TestNG/JUnit Framework
• What is TestNG/JUnit
• Integrating Selenium scripts and running them from TestNG/JUnit
• Reporting results and analysis
• Running scripts from multiple programs
• Parallel runs using TestNG/JUnit
Automation Framework development in Agile testing
• Introduction to Framework development
"To be tested, a system has to be designed to be tested."
Eberhardt Rechtin, The Art of Systems Architecting
Testing is one of the main activities through which we gather data to assess the quality of our software; this makes testability an important attribute of software--not only for development, but also for maintenance and bug fixing.
Design for testability is a term that has its origin in hardware design, where the concept was introduced to make it easier to test circuits while reducing the cost of doing so.
In this talk I'll show how to translate this concept to the software domain, along with its consequences for various aspects of development, both from the technical point of view (e.g., design, code quality, choice of frameworks) and the product management point of view (e.g., management of team dependencies, delivery time, costs). I'll provide examples based on real-world experience, for both the technical and the management aspects.
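A classic software-side example of designing for testability is injecting dependencies instead of hard-wiring them. The sketch below (illustrative, not from the talk) makes a date-dependent class testable by letting tests supply a fake clock:

```python
import datetime

# Hard to test: the clock is baked in, so the header changes every day.
class ReportUntestable:
    def header(self):
        return f"Report for {datetime.date.today()}"

# Designed for testability: the clock is an injected collaborator.
class Report:
    def __init__(self, today=datetime.date.today):
        self._today = today

    def header(self):
        return f"Report for {self._today()}"

# Production code uses the real clock; a test pins the date.
def fixed_clock():
    return datetime.date(2015, 5, 20)

print(Report(today=fixed_clock).header())  # Report for 2015-05-20
```

The same move (parameterize the collaborator, default to the real one) applies to databases, network services, and random number generators.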
Many organizations never achieve the significant benefits that are promised from automated test execution. What are the secrets to test automation success? There are no secrets, but the paths to success are not commonly understood. Dorothy Graham describes the most important automation issues that you must address, both management and technical, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use. If you don’t begin with good objectives for your automation, you will set yourself up for failure later. If you don’t show return on investment (ROI) from automation, your automation efforts may be doomed, no matter how technically good they are. Join Dot to learn how to identify achievable and realistic objectives for automation, show ROI from automation, understand technical issues including testware architecture, pick up useful tips, learn what works in practice, and devise an effective automation strategy.
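The ROI argument can be made concrete with simple arithmetic (illustrative numbers, not from the talk): ROI = (savings - cost) / cost, where cost includes both building and running the automation:

```python
def automation_roi(build_cost: float, run_cost: float,
                   manual_cost: float, runs: int) -> float:
    """ROI of automating a test suite over a number of runs.
    Negative means the automation has not yet paid back its cost."""
    cost = build_cost + run_cost * runs
    savings = manual_cost * runs
    return (savings - cost) / cost

# E.g. $20,000 to build, $50 per automated run, $800 per manual cycle.
# ROI is negative at first and turns positive as the run count grows.
for runs in (10, 30, 100):
    print(runs, round(automation_roi(20_000, 50, 800, runs), 2))
```

This also shows why "how often will this suite actually run?" is the first question to ask when setting automation objectives.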
Organizations worldwide collect data about customers, users, products, and services. Striving to get the most out of collected data, they use it to fuel many day-to-day processes including software testing, development, and personnel training. The majority of this collected data is sensitive and falls under specific government regulations or industry standards that define policies for privacy and generally limit or prohibit using the data for these secondary purposes. Data masking solves this problem. It replaces sensitive information with data that looks real and is structurally similar to the actual information but is useless to anyone trying to obtain the real data. Learn about the process, pros and cons of static and dynamic data masking architectures, subsetting, randomization, generalization, shuffling, and other basic techniques used to set up data masking. Discover how to start data masking and learn about common challenges on data masking projects.
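Two of the basic techniques listed, shuffling and randomization, can be sketched as follows. This is a toy illustration on made-up records; production masking must also preserve referential integrity across tables:

```python
import random

rows = [
    {"name": "Alice Smith", "ssn": "123-45-6789", "balance": 1200},
    {"name": "Bob Jones",   "ssn": "987-65-4321", "balance": 80},
    {"name": "Carol White", "ssn": "555-11-2222", "balance": 430},
]

def mask(rows, seed=0):
    rng = random.Random(seed)
    # Shuffling: the names stay realistic but are detached from
    # their original rows.
    names = [r["name"] for r in rows]
    rng.shuffle(names)
    masked = []
    for r, name in zip(rows, names):
        masked.append({
            "name": name,
            # Randomization: structurally valid SSN, but fabricated.
            "ssn": "%03d-%02d-%04d" % (rng.randrange(1000),
                                       rng.randrange(100),
                                       rng.randrange(10000)),
            # Non-sensitive column kept as-is for realistic testing.
            "balance": r["balance"],
        })
    return masked

for row in mask(rows):
    print(row)
```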
Many organizations struggle with transforming from old-style specialized silos of skills into agile teams of generalizing specialists. Without this pivot, we get sub-optimal agile/Scrum environments. Howard Deiner describes what can go wrong when integrating testers into an agile organization and how to fix it. Without a proper agile mindset, an organization will “revert to form” and return to its old practices after a frustrating failure to adopt agile. Howard examines the real role of testers in the organization and identifies where they truly add value in the production of quality code. He speaks frankly about the skills that agile testers must master and the organizational issues that complicate testers’ lives. Finally, Howard discusses exactly what testers need to do to add value to the software development process and how they fit into the DevOps model, a contemporary solution to an age-old issue.
Zorro Circles: Retrospectives for Excellence (TechWell)
Have you wondered how to progressively harness your agile team’s energy, focus on important goals, and improve outcomes? Woody Zuill said, “If you could adopt only one agile practice, then let it be retrospectives. Everything else will follow.” Retrospectives help individuals and teams adjust to today’s constant change and establish a sustainable pace to deliver complex products. Zorro Circles is a framework for designing retrospectives that employs proven techniques to gather and analyze information required to collectively solve problems. Aakash Srinivasan and Vivek Angiras introduce the four key steps—Pave the Way, Develop a Shared Understanding, Know What to Accomplish, and Executable Actions—to using Zorro Circles to create powerful retrospectives. Discover the essentials of designing retrospectives by understanding the multilayered circles of projects’ and people’s characteristics. Take away new techniques to design moments of inspection and adaptation in your organization’s ecosystem and enable excellence in execution, behaviors, and tactics.
In today's economy, the Creative Economy, businesses face a disrupted, highly competitive, and constantly changing landscape. Robie and Jody Wood say that to thrive in the Creative Economy, team members, managers, and executives need to become and remain agile. Improvisational theater provides a proven model for developing agility skills since the characteristics of “being agile”—engaging people, learning, making decisions in the midst of uncertainty and ambiguity, and adapting—are the very skills that improv artists work to develop with every exercise they perform. This session is about “Being Agile,” developing the mindset and behaviors that grow great abilities in communication, collaboration, inspiring others, building on others’ ideas, learning, adapting, and evolving. This workshop will engage delegates in experiential learning exercises from Improvisational Theater that will have immediate impact in improving agile mindset and behavior. The workshop participants will find the exercises lively, inspiring, fun, life changing—and an experience they will never forget.
STARCANADA 2013 Keynote: Cool! Testing’s Getting Fun Again (TechWell)
The last exciting era in testing was in the late ‘90s when the web turned technology on its ear, the agile movement overthrew our processes, and the rise of open source gave us accessible and innovative tools. However, since then, Jonathan Kohl finds it has been a lot of the same-old, same-old: more web applications, variants of agile, and testing tool debates. After a while, you start to feel like you’ve “Been there, Done that, Got that t-shirt.” However, testing doesn’t have to be a rehash of all the things you’ve done before. It is an exciting time to be a tester! Major changes are happening all at once. Smartphones and tablets have upended computing. Continuous release movements like DevOps are challenging our processes. BigData, the Cloud, and other disruptive technologies are fostering the development of new tools. These innovations are pushing testers in new directions. While this upheaval is challenging, it also offers enormous opportunity for growth. Jonathan challenges you to embrace these changes—and have fun doing it. Discover today’s amazing new opportunities, and how we can learn, adapt, and ride the wave.
What Hollywood Can Teach Us about Software Testing (TechWell)
If we observe the world through the lens of software testing, we discover that there are lessons all around us we can apply on the job, and one venue that’s packed with these tidbits is the movie theater. Bernie Berger gives examples of a few unlikely yet credible lessons from the language of some classic Hollywood movies—and how we can apply them as we test. Learn about the primary root cause of many software bugs from Office Space; see how the characters in Saving Private Ryan enacted ideas of exploratory testing; discover how to apply the skills of situational awareness from Sneakers; learn about the context of discovery from All the President’s Men; and see a great example of inattentional blindness from Star Trek II: The Wrath of Khan. In addition to learning how these concepts are fundamentally related to testing software, learn how to apply lessons from your everyday life to testing challenges.
Many teams have a relatively easy time adopting the tactical aspects of agile methodologies. Usually a few classes, some tools introduction, and a bit of practice lead teams toward a fairly efficient and effective adoption. However, these teams often get “stuck” and begin to regress or simply start going through the motions—neither maximizing their agile performance nor delivering as much value as they could. Borrowing from his experience and lean software development methods, Bob Galen examines essential patterns—the thinking models of mature agile teams—so you can model them within your own teams. Along the way, you’ll examine patterns for large-scale emergent architecture, relentless refactoring, quality on all fronts, pervasive product owners, lean work queues, providing total transparency, saying No, and many more. Bob also explores why there is still the need for active and vocal leadership in defending, motivating, and holding agile teams accountable.
Twelve Heuristics for Solving Tough Problems Faster and Better (TechWell)
As infants, we begin our lives as problem-solving machines, learning to navigate a strange and complex world in which others communicate in ways we don’t understand. Initially, we hone our problem-solving talents. Then many of us find our explorations thwarted and eventually stop using—and then begin losing—our natural problem-solving ability. It doesn’t have to be that way. Psychologists tell us that people can regain lost skills and learn new ones to become better problem solvers. Payson Hall shares techniques and skills that apply to situations in real life. Specifically, learn techniques to better define problems and explore twelve heuristics for generating solutions that can help when you and your team are staring at a blank paper, struggling to find candidate solutions for further consideration. Learn when random search is appropriate, how binary search can help with diagnostics, strategies for identifying and overcoming opposition, and when transferring the problem to someone else just might be the best strategy of all.
User Stories: Across the Seven Product Dimensions (TechWell)
User stories are a powerful technique agile teams use to communicate requirements. Yet all too often, the stories are poorly written or even incomprehensible. Some stories are too big and overlap across delivery cycles. Others are too small and don’t deliver sufficient detail for developers. Join Paul Reed to learn the Seven Product Dimensions—the 7 D’s—which yield “just right” stories that users and product owners can write and developers can understand. Explore and experience the Seven Dimensions: user, interface, action, data, control, quality, and environment. Learn to identify options for high-value business and user needs, and then assemble them into cohesive user stories. As you slice the options, you’ll see ways to leverage analysis models to quickly visualize and discuss options. Practice employing structured conversations about user stories to engage customers while taking business and technology perspectives into consideration. Find out how to establish acceptance criteria based on value considerations to make your stories more valuable. Leave with a practical framework for writing “just right” stories.
Performance Testing Web 2.0 Applications—in an Agile World (TechWell)
Agile methodologies bring new complexities and challenges to traditional performance engineering practices, especially with Web 2.0 technologies that implement more and more functionality on the client side. Mohit Verma presents a Scrum-based performance testing lifecycle for Web 2.0 applications. Mohit explains when performance engineers need to participate in the project, discusses how important it is for the performance engineer to understand the technical architecture, and explores the importance of testing early to identify design issues. Find out how to create the non-functional requirements that are critical for building accurate and robust performance test scenarios. Learn how to implement practices for continuous collaboration between test engineers and developers to help identify performance bottlenecks early. Learn about the tools available today to help you address the testing and tuning of your Web 2.0 applications. Leave with a new appreciation and new approaches to ensure that your Web 2.0-based applications are ready for prime time from day one.
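One concrete building block behind such performance scenarios is turning raw measurements into a pass/fail check against a non-functional requirement. Here is a minimal Python sketch; the sample latencies and the 800 ms p95 budget are invented for illustration:

```python
import statistics

# Turn raw response times into a pass/fail check against a
# non-functional requirement (here: 95th percentile under budget).
def p95(samples_ms):
    ordered = sorted(samples_ms)
    index = max(0, int(len(ordered) * 0.95) - 1)  # simple nearest-rank p95
    return ordered[index]

# Invented sample latencies, in milliseconds.
latencies_ms = [120, 135, 140, 150, 160, 180, 200, 240, 310, 760,
                125, 130, 145, 155, 170, 190, 210, 260, 330, 900]

budget_ms = 800
result = p95(latencies_ms)
print(f"p95={result} ms, mean={statistics.mean(latencies_ms):.0f} ms")
print("PASS" if result <= budget_ms else "FAIL")
```

A check like this can run after every load-test iteration, so performance regressions surface as early as functional ones.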
Model-Based Testing: Concepts, Tools, and Techniques (TechWell)
For decades, software development tools and methods have evolved with an emphasis on modeling. Standards like UML and SysML are now used to develop some of the most complex systems in the world. However, test design remains a largely manual, intuitive process. Now, a significant opportunity exists for testing organizations to realize the benefits of modeling. Adam Richards describes how to leverage model-based testing to dramatically improve both test coverage and efficiency—and lower the overall cost of quality. Adam provides an overview of the basic concepts and process implications of model-based testing, including its role in agile. A survey of model types and techniques shows different model-based solutions for different kinds of testing problems. Explore tool integrations and weigh the pros and cons of model-based test development against a variety of system and project-level factors. Gain a working knowledge of the concepts, tools, and techniques needed to introduce model-based testing to your organization.
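The core idea can be sketched in a few lines: describe behavior as a state model and derive test paths from it mechanically. This minimal Python sketch uses an invented login-dialog model and simple "every transition at least once" coverage, not any particular model-based testing tool:

```python
from collections import deque

# Toy state model of a login dialog (states and transition names are
# invented for illustration).
MODEL = {
    "LoggedOut": {"enter_valid_credentials": "LoggedIn",
                  "enter_bad_credentials": "Error"},
    "Error": {"dismiss": "LoggedOut"},
    "LoggedIn": {"log_out": "LoggedOut"},
}

def shortest_path(model, start, goal):
    """Breadth-first search for the shortest action sequence start -> goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, target in model.get(state, {}).items():
            if target not in seen:
                seen.add(target)
                queue.append((target, actions + [action]))
    return None

def transition_cover(model, start):
    """Yield one test path per transition: reach the source state,
    then take the transition ('every transition once' coverage)."""
    paths = []
    for state, edges in model.items():
        for action in edges:
            prefix = shortest_path(model, start, state)
            if prefix is not None:
                paths.append(prefix + [action])
    return paths

tests = transition_cover(MODEL, "LoggedOut")
for t in tests:
    print(" -> ".join(t))
```

Changing the model regenerates the whole suite, which is the efficiency argument Adam makes: maintenance moves from hand-written cases to the model.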
Accessibility Standards and Testing Techniques: Be Inclusive or Be Left Behind (TechWell)
While Information and Communication Technology (ICT) accessibility for a wider spectrum of users—including the blind—and their interfaces is being required by law across more jurisdictions, testing for it remains limited, naïve, and too late. The consequences of staying ignorant include increased exposure to litigation, penalties, and loss of contracts and revenue. Join David Best, Sandy Feldman, and Rob Harvie to learn why accessibility is now becoming a valued, integral part of the design process and much different from usability of twenty years ago. Ensure compliance for your organization and clients by familiarizing yourself with the regional and international standards and their criteria, and find out what testing tools and inclusive design practices you can use. Take away an understanding of the three core guidelines for accessibility; components of authoring tools, web content, and user agent accessibility for mobile, web browsers, and media players—and understand their impact on assistive technologies.
It seems impossible for a DevOps team to even attempt planning its work. The team deals with customers’ never-ending requests and constantly-changing priorities. And don’t forget those unfriendly infrastructure errors that always seem to show up at the worst possible time. Best to live day-by-day and try to keep our heads above water, right? Maybe not. Through a case study of a department that helps make great Electronic Arts games, Ben Vining illustrates how a metrics-driven DevOps team can become reliable, responsive, and predictable—with happy staff and delighted customers. Ben details a process of short sprints tied into longer milestones and regular communication with partner teams, which allows his team to successfully plan their work across a number of different projects. Along the way, Ben demonstrates how tracking metrics related to his team’s work and the reliability of their systems help drive this process. Ben hopes to inspire you to try these ideas, improve the health of your team, and increase the value you add to your customers.
Combinatorial Black-Box Testing with Classification Trees (TechWell)
A basic problem in software testing often is choosing a subset from the near infinite number of possible test cases. Consider the challenges of testing multiple browsers, multiple mobile devices, mobile applications, or use case paths. Testers must select test cases to design, create, and then execute to obtain sufficient coverage—all while managing the time it takes to test relative to risks. Even though test resources are limited, you still want to select the best possible set of tests. Peter Kruse shares his experiences designing test cases with TESTONA, the most popular tool for systematic test design of classification tree-based tests. Peter shows how to integrate expected test outcomes and how to obtain executable test scripts directly from the test specification or user stories. If you are looking to jumpstart your systematic test design and want to avoid unnecessary tests and overhead, this session is for you!
5 Steps to Jump Start Your Test Automation (Sauce Labs)
With the acceleration of software creation and delivery, test activities must align to the new tempo. Developers need immediate feedback to be efficient and to correct defects as they are introduced. The path to achieving this vision is to build a reliable and scalable continuous test solution.
All beginnings are hard. Having a well-defined plan outlining the approach for your organization to create test automation is key to ensuring long-term success. Join Diego Molina, Senior Software Engineer at Sauce Labs, as he discusses:
• The importance of setting up the team correctly from the start
• Choosing the right testing framework for your organization
• Identifying the right scenarios and workflows to test
• Learning to avoid common pitfalls at the beginning of the transformation journey
Is your company thinking about using Selenium to implement test automation in a joint development and operations environment? If your company has already started using Selenium, have you experienced execution or integration challenges? The path to a well-oiled and successful Selenium test automation program comes down to using the right techniques and development standards that incorporate modularity and flexibility. Jin Reck describes how to design effective web test automation development, and shares common challenges and solutions when implementing an automated testing framework in the real world. Jin shows how to incorporate Selenium with continuous integration platforms and discusses techniques, adjustments, lessons learned, and best practices from successful implementations. Leave with a better understanding of how to design and employ Selenium to create robust and reliable automated tests that increase the efficiency and productivity of test teams and make for a capable and successful testing program.
Software Testing Presentation in Cegonsoft Pvt Ltd... (ChithraCegon)
Software testing is the process of executing a program or system to verify that it meets customer requirements, with the intent of finding errors.
Software Characterization & Performance Testing - Beat Your Software with a S... (Tze Chin Tang)
Traditional software testing is centered around "if it works." Given the move to the cloud and large-scale systems, the cost of failure has gotten high enough that knowing "if it works" is no longer enough; we also need to know "how it fails."
This talk is an introduction to software characterization testing.
I shared this at the Agile Software Testing Summit Malaysia 2016.
Arthur Hicken, Chief Evangelist of Parasoft, @ PSQT 2016 discusses:
• What the shift from automated to continuous means
• How disruption requires changes to how we test software
• Addressing gaps between Dev and Ops
• Technologies that enable Continuous Testing
Manual Testing tutorials and Interview Questions.pptx (Prasanta Sahoo)
Learn manual testing and prepare for interviews.
A collection of software testing interview questions. We cover the SDLC and STLC for beginners.
Definitions and examples of all test methodologies.
Do you ever feel you have lost confidence in your own abilities? Why does this happen? Isabel Evans spends a lot of time painting. Someone once commented, “Why are you doing this, when you are not very good at it?” Gradually she stopped drawing and painting, intimidated by a conventional vision of what good art should look like, and she experienced a parallel loss of confidence in her professional abilities. Creative pursuits like drawing and painting are essential to our cognitive, emotional, and creative capacities, and she began to understand the correlation between her creative activities and her confidence. Making errors, being wrong, failing: that is a generous gift we receive when we practice outside our skill level. By staying in a comfort zone and repeating successes, we stagnate. As Isabel started to create again, she thought, “I don’t feel good at it, but I do feel good doing it.” The difference was that she was learning and having ideas. The act of re-engaging with failure, together with the comradeship of friends and colleagues, including at Women Who Test, has helped Isabel regain confidence in her professional abilities and reboot her career and her joy. Join Isabel to share a journey from self-perceived failure to recovery and renewed learning.
Instill a DevOps Testing Culture in Your Team and Organization (TechWell)
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to assess your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build Architecture (TechWell)
Imagine this … As soon as any developed functionality is submitted into the code repository, it is automatically subjected to the appropriate battery of tests and then released straight into production. Setting up the pipeline capable of doing just that is becoming more and more common and something you need to know about. But most organizations hit the same stumbling block—just what IS the appropriate battery of tests? Automated build architectures don't always lend themselves well to the traditional stages of testing. In this hands-on tutorial, Melissa Benua introduces you to key test design principles—applicable to organizations both large and small—that allow you to take full advantage of the pipeline's capabilities without introducing unnecessary bottlenecks. Learn how to make highly reliable tests that run fast and preserve just enough information to let testers and developers determine exactly what went wrong and how to reproduce the error locally. Explore ways to reduce overlap while still maintaining adequate test coverage. Take back ideas about which test areas could benefit from being combined into a single suite and which areas could benefit most from being broken out altogether.
System-Level Test Automation: Ensuring a Good Start (TechWell)
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
Build Your Mobile App Quality and Test Strategy (TechWell)
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Testing Transformation: The Art and Science for Success (TechWell)
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new technologies. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we put the tests to the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
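The Given/When/Then structure that Cucumber and SpecFlow execute from Gherkin feature files can be illustrated in plain Python, without either tool, using an invented shopping-cart scenario:

```python
# Cucumber (Ruby/Java) and SpecFlow (.NET) map Gherkin steps to code.
# This plain-Python sketch mimics that Given/When/Then structure with
# an invented shopping-cart domain.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# Scenario: an empty cart gains one item
def test_adding_an_item():
    # Given an empty cart
    cart = Cart()
    # When the user adds a book priced 12.50
    cart.add("book", 12.50)
    # Then the cart holds one item and the total is 12.50
    assert len(cart.items) == 1
    assert cart.total() == 12.50

test_adding_an_item()
print("scenario passed")
```

In real Cucumber or SpecFlow the Given/When/Then lines live in a feature file the business can read, and each line binds to a step definition like the code above; that shared artifact is what bridges the communication gap the abstract describes.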
Develop WebDriver Automated Tests—and Keep Your Sanity (TechWell)
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
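One habit that keeps WebDriver suites maintainable is the Page Object pattern. Here is a minimal Python sketch (the tutorial's own demos use C# and Java); `StubDriver` is an invented stand-in for a real Selenium driver so the example is self-contained:

```python
# Page Object pattern, sketched without a real browser. `StubDriver`
# stands in for selenium.webdriver; with Selenium installed you would
# pass a real driver instead.

class StubDriver:
    def __init__(self):
        self.fields = {}
        self.submitted = False

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        if locator == "css=#login":
            self.submitted = True

class LoginPage:
    """Page object: tests call intent-level methods, and only this
    class knows the locators, so a UI change touches one place."""
    USER, PASSWORD, SUBMIT = "css=#user", "css=#password", "css=#login"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = StubDriver()
LoginPage(driver).log_in("alice", "s3cret")
print(driver.submitted)
```

Tests written against `LoginPage.log_in` survive a locator rename unchanged, which is exactly the brittleness problem the session targets.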
Eliminate Cloud Waste with a Holistic DevOps Strategy (TechWell)
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
Transform Test Organizations for the New World of DevOps (TechWell)
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, multidimensional workforce enablement supported by infrastructure changes, redeveloped collaboration models, and more. From his real-world experiences, Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves to lead quality in DevOps.
The Fourth Constraint in Project Delivery—Leadership (TechWell)
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile Teams (TechWell)
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is, people with unique skills. Although teams composed entirely of T-shaped people are ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile Game (TechWell)
Metrics don’t have to be a necessary evil. If done right, metrics can help guide us to make better forward-looking decisions, rather than being used for simply managing or monitoring. They can help us identify trade-offs between options for what to do next versus punitive or worse, purely managerial measures. Steve Martin won’t be giving the Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary for you to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home concepts behind characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take back this activity to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams (TechWell)
A hierarchy is an organizational network that has a top and a bottom, and where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom and where each person’s value derives from his ability, rather than position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps Implementation (TechWell)
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery ProcessTechWell
DevOps is transforming software development with many organizations adopting lean development practices, implementing continuous integration (CI), and performing regular continuous deployment (CD) to their production environments. However, the database is largely ignored and often seen as a bottleneck in the DevOps process. Steve Jones discusses the challenges of database development and why many developers find the database to be an impediment to the CD process. Steve shares the techniques you can use to fit a database into the DevOps process. Learn how to store database code in a version control system, and the differences between that and application code. Steve demonstrates a CI process with SQL code and uses automated testing frameworks to check the code. Steve then shows how automated releases with manual gates can reduce the stress and risk of database deployments while ensuring consistent, reliable, repeatable releases to QA, UAT, and production.
Mobile Testing: What—and What Not—to AutomateTechWell
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dangs says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for SuccessTechWell
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent place with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can be tracked by the minute and packages at every stop, and customers now expect this same customer service model should exist for all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling with gaining traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage in your pursuit. Finally, he communicates how to gain buy-in from business partners who have no idea or concern about agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through the approaches to overcoming agile skepticism.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
1. TQ
PM Tutorial
4/30/13 1:00PM
How to Actually DO High-volume
Automated Testing
Presented by:
Cem Kaner and Carol Oliver
Florida Institute of Technology
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. Cem Kaner
Cem Kaner is a professor of software engineering at Florida Institute of Technology and director of FIT’s
Center for Software Testing Education & Research. Cem teaches and researches in software
engineering—software testing, software metrics, and computer law and ethics. In his career, he has
studied and worked in areas of psychology, law, programming, testing, technical writing, and sales, applying
what he learned to the central problem of software satisfaction. Cem has done substantial work on the
development of the law of software quality. He coauthored Testing Computer Software, Lessons Learned
in Software Testing, and Bad Software: What To Do When Software Fails.
Carol Oliver
Carol Oliver is a doctoral student in the Center and leader of the HiVAT project. She has more than a
decade of experience as a tester, most recently as Lead QA Analyst supporting infrastructure technology
services at Stanford University. For most of her career, Carol has been the lone tester supporting more
than a dozen database-driven middleware and web applications, all of which needed to cooperate with
each other and several different infrastructure services simultaneously. Carol is very interested in testing
techniques that help examine complex question spaces more effectively and in how these techniques can
be applied to maximize human attention to the holistic picture.
3. How to Actually DO High-Volume
Automated Testing
Cem Kaner
Carol Oliver
Mark Fioravanti
Florida Institute of Technology
4. Presentation Abstract
In high volume automated testing (HiVAT), the test tool generates the
test, runs it, evaluates the results, and alerts a human to suspicious
results that need further investigation.
Simple HiVAT approaches use simple oracles—run the program until it
crashes or fails in some other extremely obvious way.
More powerful HiVAT approaches are more sensitive to more types of
errors. They are particularly useful for testing combinations of many
variables and for hunting hard-to-replicate bugs that involve timing or
corruption of memory or data.
Cem Kaner, Carol Oliver, and Mark Fioravanti present a new strategy
for teaching HiVAT testing. We’ve been creating open source examples
of the techniques applied to real (open source) applications. These
examples are written in Ruby, making the code readable and reusable
by snapping in code specific to your own application. Join Cem Kaner,
Carol Oliver, and Mark Fioravanti as they describe three HiVAT
techniques, their associated code, and how you can customize them.
5. Acknowledgments
• Our thanks to JetBrains (jetbrains.com), for supporting this
work through an Open Source Project License to their Ruby
language IDE, RubyMine.
• These notes are partially based on research supported by NSF
Grant CCLI-0717613 “Adaptation & Implementation of an
Activity-Based Online or Hybrid Course in Software Testing.”
Any opinions, findings and conclusions or recommendations
expressed in this material are those of the author(s) and do
not necessarily reflect the views of the National Science
Foundation.
6. What is automated testing?
• There are many testing activities, such as…
– Design a test, create the test data, run the test and fix it if
it’s broken, write the test script, run the script and debug
it, evaluate the test results, write the bug report, do
maintenance on the test and the script
• When we automate a test, we use software to do one or more
of the testing activities
– Common descriptions of test automation emphasize test
execution and first-level interpretation
– But that automates only part of the task
– Automation of any part is automation to some degree
• All software tests are to some degree automated and no
software tests are fully automated.
7. What is “high volume” automated testing?
• Imagine automating so many testing activities
– That the speed of a human is no longer a constraint on
how many tests you could run
– That the number of tests you run is determined more by
how many are worth running than by how much time they
take
• This is the underlying goal of HiVAT
8. Once upon an N+1th time…
• Imagine developing an N+1th version of a program
  – How could we test it?
• We could try several techniques
  1. Long sequence regression testing. Reuse regression tests from previous version. For those tests that the program can pass individually, run them in long random sequences
  2. Program equivalence testing.
     • Start with function equivalence testing. For each function that exists in the old version and the new one, generate random inputs, feed them to both versions and compare results. After a sufficiently extensive series, conclude the functions are equivalent
     • Combine tests of several functions that compare (old versus new) successfully individually.
  3. Large-scale system comparison by analyzing large sets of user data from the old version (satisfied user attests that they are correct). Check that the new version yields comparable results.
     • Before running the tests, run data qualification tests (test the plausibility of the inputs and user-supplied sample outputs). The goal is to avoid garbage-in-garbage-out troubleshooting.
9. Working definitions of HiVAT
• A family of test techniques that use software to generate,
execute and interpret arbitrarily many tests.
• A family of test techniques that empower a tester to focus on
interesting results while using software to design, implement,
execute, and interpret a large suite of tests (thousands to
billions of tests).
10. Breaking out potential benefits of HiVAT
(different techniques, different benefits)
• Find bugs we don’t otherwise know how to find
  • Diagnostics-based
  • Long sequence regression
  • Load-enhanced functional testing
  • Input fuzzing
  • Hostile data stream testing
• Increase test coverage inexpensively
  • Fuzzing
  • Functional equivalence
  • High-volume combination testing
  • Inverse operations
  • State-model based testing
  • High-volume parametric variation
• Qualify large collections of input or output data
  • Constraint checks
  • Functional equivalence testing
11. Examples of the techniques
• Equivalence testing
– Function (e.g. MASPAR)
– Program
– Version-to-version with shared data
• Constraint checking
– Continuing the version-to-version comparison
• LSRT
– Mentsville
• Fuzzing
– Dumb monkeys, stability and security tests
• Diagnostics
– Telenova PBX intermittent failures
12. Oracles Enable HiVAT
• Some specific to the testcase
– Reference Programs: Does the SUT behave like it?
– Regression Tests: Do the previously-expected results still happen?
• Some independent of the testcase
– Fuzzing: Run to crash, run to stack overflow
– OS/System Diagnostics
• Some inform on only a very narrow aspect of the testcase
– You’d never dream that passing the oracle check means passing the
test in all possible ways
– Only that passing the oracle check means the program does not fail in
this specific way
– E.g. Constraint Checks
• The more extensive your oracle, the more valuable your high-volume test
suite
– What potential errors have we covered?
– What potential errors have to be checked in some other way?
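A constraint check, the narrow kind of oracle described above, can be sketched in Ruby (the language the presenters use for their examples). The Ledger class and its invariant below are hypothetical, invented purely for illustration; passing the check only shows that the program did not fail in this one specific way.

```ruby
# A narrow "constraint check" oracle: it cannot prove a test passed in every
# way, only that the program did not fail in this specific way.
# Hypothetical example: a ledger whose cached total must always equal the
# sum of its entries, no matter what sequence of operations ran.

class Ledger
  attr_reader :entries, :cached_total

  def initialize
    @entries = []
    @cached_total = 0
  end

  def add(amount)
    @entries << amount
    @cached_total += amount
  end
end

# The oracle inspects state after a test and reports only this one invariant.
def constraint_check(ledger)
  ledger.cached_total == ledger.entries.sum
end

ledger = Ledger.new
[10, -3, 42].each { |amount| ledger.add(amount) }
```

A HiVAT loop could call constraint_check after every generated test and flag any run where the invariant breaks for human investigation.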
13. Reference Oracles are Useful but Partial
Based on notes from Doug Hoffman
[Diagram: intended inputs, configuration and system resources, program state, system state, and cooperating processes, clients or servers feed both the System under test and the Reference function; for each, the monitored outputs, resulting program state (and uninspected outputs), system state, impacts on connected devices / resources, and outputs to cooperating processes, clients or servers are available for comparison.]
14. Using Oracles
• If you can detect a failure, you can use that oracle in any test,
automated or not
• Many ways to combine oracles, as well as using them one at a
time
• Each has time costs and timing costs
• Heisenbug costs mean you might use just one or two oracles
at a time, rather than all the possible oracles you know, all at
once.
16. Basic Ingredients
• A Large Problem Space
– The payoff should be worth the cost of setting up the computer to do
detailed testing for you
• A Data Generator
– Data generation often requires more human effort than we expect.
This constrains the breadth of testing we can do, because all human
effort takes time.
• A Test Runner
– We have to be able to read the data, run the test, and feed the test to
the oracle without human intervention. Probably we need a
sophisticated logger so we can trace what happened when the
program fails.
• An Oracle
– This might tell us definitively that the program passed or failed the
test or it might look more narrowly and say that the program didn’t fail
in this way. In either case, the strength of the test is driven by your
ability to tell whether it failed. The narrower the oracle, the weaker
the technique.
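As a sketch, the four ingredients can be wired into one loop. This Ruby fragment is illustrative only: hivat_run, the toy absolute-value SUT, and its seeded bug are all invented here, not taken from the presenters' code.

```ruby
# Minimal HiVAT loop wiring together the four ingredients: a data generator,
# a test runner, an oracle, and a logger. Every name here is illustrative.
require 'logger'

def hivat_run(count:, generator:, runner:, oracle:, log: Logger.new($stdout))
  suspicious = []
  count.times do |i|
    input  = generator.call               # data generator
    actual = runner.call(input)           # test runner exercises the SUT
    next if oracle.call(input, actual)    # oracle: "did not fail in this way"

    log.warn("test #{i}: suspicious result for input #{input.inspect}")
    suspicious << [input, actual]
  end
  suspicious
end

# Toy SUT: an absolute-value routine with a deliberately planted bug at zero.
buggy_abs = ->(x) { x.zero? ? -1 : x.abs }

# Deterministic generator: cycle through -50..50 so the run is reproducible.
sequence = (-50..50).cycle
failures = hivat_run(
  count:     1_000,
  generator: -> { sequence.next },
  runner:    buggy_abs,
  oracle:    ->(input, actual) { actual == input.abs }
)
```

Because the generator is a deterministic cycle, every run flags the same suspicious inputs; swapping in a random generator trades that reproducibility for breadth.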
17. Two Examples and a Future
• Function equivalence tests (testing against a reference
program)
• Long-sequence regression (repurposing already-generated
data with new oracles)
• A more flexible HiVAT architecture
18. Functional Equivalence
• Oracle: A Reference Program
• Method:
– The same testcases are sent to the SUT and to the Reference
Program
– Matched behaviors are considered a Pass
– Discrepant behaviors are considered a Fail
• Limits:
– The reference program might be wrong
– We can only test comparable functions
– Generating semantically meaningful inputs can be hard,
especially as we create nontrivial ones
• Benefits: For every comparable function or combination of
functions, you can establish that your SUT is at least as good as the
Reference Program
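As an illustration of the method, the Ruby sketch below compares a hand-written Euclidean gcd (standing in for the SUT) against Ruby's built-in Integer#gcd (standing in for the reference program) over randomly generated inputs. Both roles are stand-ins chosen for this example.

```ruby
# Function equivalence testing: feed the same generated inputs to the system
# under test and to a reference implementation; any discrepancy is a Fail.

# Reference program: Ruby's built-in Integer#gcd.
reference_gcd = ->(a, b) { a.gcd(b) }

# "SUT": an independently written Euclidean algorithm.
sut_gcd = lambda do |a, b|
  a, b = a.abs, b.abs
  a, b = b, a % b until b.zero?
  a
end

# Returns the test cases where the two implementations disagree.
def equivalence_test(cases, sut:, reference:)
  cases.reject { |args| sut.call(*args) == reference.call(*args) }
end

rng = Random.new(2013)                    # fixed seed: reproducible run
cases = Array.new(500) { [rng.rand(1..10_000), rng.rand(1..10_000)] }
discrepancies = equivalence_test(cases, sut: sut_gcd, reference: reference_gcd)
```

Snapping in a different reference, say an adapter around another math or spreadsheet library, only requires replacing the reference: lambda.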
19. Functional Equivalence Examples
• Square Root calculated one way vs. Square Root calculated
another way (Hoffman)
• Open Office Calc vs. a reference implementation
– Comparing
• Individual functions
• Combinations of functions
– Reference
• Currently reference formulas programmed in Ruby
• Coming (maybe by STAR) compare to MS-Excel
21. Functional Equivalence: Key Ideas
• You provide your Domain Knowledge
– How to invoke meaningful operations for your system
– Which data to use during the tests
– How to compare the resulting data states
• Test Runner sends the same test operations to both the
Reference Oracle and the System Under Test (SUT)
– Collects both answers
– Compares the answers
– Logs the result
22. Function Equivalence: Architecture
• Component independence allows reusability for other
applications and for later versions of this application.
– We can snap in a different reference function
– We can snap in new versions of the software under test
– The test generator has to know about the category of
application (this is a spreadsheet) but not which particular
application is under test.
– In the ideal architecture, the test runner passes test
specifications to the SUT and oracle without knowing
anything about those applications. To the maximum extent
possible, we want to be able to reuse the test runner code
across different applications
– The log interpreter probably has to know a lot about what
makes a pass or a fail, which is probably domain-specific.
24. Long-Sequence Regression
• Oracles:
– OS/System diagnostics
– Checks built into each regression test
• Method:
– Reconfigure some known-passing regression tests to
ensure they can build upon one another (i.e. never reset
the system to a clean state)
– Run a subset of those regression tests randomly,
interspersed with diagnostics
– When a test fails on the N+1st time that we run it, after
passing N times during the run, analyze the diagnostics to
figure out what happened and when in the sequence
things first went wrong
25. Long-Sequence Regression
• Limits:
– Failures will be indirect pointers to problems
– You will have to analyze the logs of diagnostic results
– You may need to rerun the sequence with different
diagnostics
– Heisenbug problem limits the number of diagnostics that
you can run between tests
• Benefits:
– This approach is proven good for finding timing errors and
memory misuse
28. Test Runner/Result Checker Operation
• Diagnostics
• A Regression Test
• Diagnostics
• A Regression Test
• Diagnostics
• A Regression Test
• Diagnostics
• ……..
• Final Diagnostics
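The alternating sequence above can be sketched as a loop. Everything here is a toy stand-in: the shared state, the two regression tests, and the single diagnostic are invented for illustration, and the tests deliberately never reset the state.

```ruby
# Long-sequence regression sketch: run randomly chosen known-passing
# regression tests without ever resetting system state, interleaving
# diagnostics so a later failure can be traced back through the log.

def long_sequence_run(tests:, diagnostics:, length:, rng: Random.new(42))
  log = []
  run_diagnostics = lambda do
    diagnostics.each { |name, probe| log << [:diagnostic, name, probe.call] }
  end

  run_diagnostics.call                      # opening diagnostics
  length.times do
    name, test = tests.to_a.sample(random: rng)
    log << [:test, name, test.call ? :pass : :fail]
    run_diagnostics.call                    # diagnostics between tests
  end
  log
end

# Toy shared state that the tests mutate but never reset.
state = { items: [] }
tests = {
  add:    -> { state[:items] << 1; true },
  remove: -> { state[:items].pop; state[:items].size >= 0 }
}
diagnostics = { item_count: -> { state[:items].size } }

log = long_sequence_run(tests: tests, diagnostics: diagnostics, length: 20)
```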
29. Long-Sequence Regression: Key Ideas
• Pool of Long-Sequence-Compatible Regression Tests
– Each of these is Known to Pass, individually, on this build
– Each of these is Known to NOT reset system state
(therefore it contributes to simulating real use of the
system in the field)
– Each of these is Known to Pass when run 100 times in a
row, with just itself altering the system (if not, you’ve
isolated a bug)
• Pool of Diagnostics
– Each of these collects something interesting about the
system or program: memory state, CPU usage, thread
count, etc.
30. Long-Sequence Regression: Key Ideas
• Selection of Regression Tests
– Need to run a small enough set of regression tests that
they will repeat frequently during the elapsed time of the
Long Sequence
– Need to run a large enough set of regression tests that
there’s real opportunity for interactions between functions
of the system
• Selection of Diagnostics
– Need to run a fixed set of diagnostics per Long Sequence,
so that you collect data to compare Regression Test failures
against
– Need to run a small enough set of diagnostics that the act
of running the diagnostics doesn’t significantly change the
system state
31. Long-Sequence Regression: Reference Code
• Work in Progress (not currently available)
– http://testingeducation.org/
– See HiVAT section of site
33. Maadi HiVAT Architecture
• Maadi is a generalized HiVAT architecture
– Open Source and written in Ruby
– Developed to support multiple HiVAT techniques
– Developed to support multiple applications and workflows
– Based on demonstrations from previous implementations
• Architecture is divided into Core and Custom components
– Custom components are where the specific
implementations reside
• Maadi, one of the earliest Egyptian mines
– Implemented in Ruby, used RubyMine IDE, etc.
35. Maadi Core Components
• Application – Ruby Interface to SUT
• Collector – Logging Interface
• Monitor – Diagnostic Utility Interface
• Tasker – Resource Consumption Interface
• Organizer – Implements the Test Technique
• Expert – Domain Expert which builds “skeleton” Procedures
• Generator – Mediates interaction between Organizer and Expert to assemble a series of Tests
• Scheduler – Collects and orders the completed Procedures
• Controller – Mediates interaction between Scheduler and the Applications, Monitors, and Taskers to conduct tests
• Analyzer – Performs analysis of collected Results
• Manager – Overall Test Conductor
36. HiVAT Test Process
• Configure Environment
– CLI enables scripted runs via profiles *
• Prepare Components
– Validate component compatibility
– Generate Test Cases, Schedule Test Cases, Log Test Cases
– Log Configuration
• Execute Test Plan
– Record results
• Generate Test Report
* Entire process can be scripted via profiles; start to finish
37. Language Restrictions
• Applications, Organizer and Expert must all understand the
same “procedural” language in order to communicate the
selection, construction, and execution of a test case
– Using a SQL Domain Expert to generate Test Cases for
Microsoft Solitaire is not useful
– Controller will validate that all Applications and the
Organizer can accept the Domain Expert
• Applications and Experts must also understand a second
“result” language in order to capture the results
38. Test Plan Complexities
• Functional Equivalence exposed a challenge:
– How to flexibly specify test details that required random
choice on-the-fly
• Early Attempt: Try to Pre-Specify all the details you want to
have vary:
– Plan 1: log (RNG_1)
• CONSTRAINT RNG_1: Real, >0
– Plan 2: Sum (0...RNG_1) of CellValue (RNG_2)
• CONSTRAINT RNG_1: Integer, >0
• CONSTRAINT RNG_2: Real
– Plan 3: Sum (0...RNG_1) of CellValue (log (RNG_2))
• CONSTRAINT RNG_1: Integer, >0
• CONSTRAINT RNG_2: Real, >0 Delta= 0
39. Test Plan Complexities (2)
• This gets awkward quickly:
– Plan 4: (Sum (0...RNG_1) of CellValue (log(RNG_2)))
/RNG_1
• CONSTRAINT RNG_1: Integer, >0
• CONSTRAINT RNG_2: Real, >0
• Really want to just specify:
– The individual elements with their individual constraints
– The rules for combining the elements together
• Then dynamically assemble the pieces in sensible but
infinitely variable ways at test generation time
• New Approach: Mediate a Conversation with the Expert
during test generation time
40. Generator Mediation
• Generator is responsible for generating a test case
– Interaction between the Organizer and the Expert
produces a test procedure
– Generator supervises and terminates as necessary
• Test Case construction process overview:
1. Organizer selects a Test to build
2. Expert builds a skeleton, adds configurable parameters
3. Organizer populates parameters
4. Expert inspects parameter selection, expert may
a) add additional parameters (return to step 2)
b) flag the procedure as complete or failed
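The four-step construction process can be sketched as a mediation loop. All class and parameter names below (PickListExpert, FirstChoiceOrganizer, TABLE-NAME) are hypothetical; they only illustrate the shape of the conversation in which the Expert adds parameters and the Organizer fills them in until the Expert flags the procedure complete.

```ruby
# Generator mediation sketch: Expert builds a skeleton with constrained
# parameters, Organizer populates them, Expert inspects and may add more
# parameters or flag the procedure complete. All names are illustrative.

Procedure = Struct.new(:id, :steps, :parameters, :status)

class PickListExpert
  # Toy Expert for a CREATE statement: first ask for the object type,
  # then (for a TABLE) add a follow-up parameter for the table name.
  def skeleton
    Procedure.new('CREATE-NEW-NEXT', ['CREATE [TYPE]'],
                  { 'CREATE-TYPE' => { constraint: %w[TABLE DATABASE], value: nil } },
                  :incomplete)
  end

  def inspect_selection(procedure)
    if procedure.parameters['CREATE-TYPE'][:value] == 'TABLE' &&
       !procedure.parameters.key?('TABLE-NAME')
      procedure.parameters['TABLE-NAME'] =
        { constraint: %w[tblStudents tblMajors], value: nil }   # step 4a
    elsif procedure.parameters.values.all? { |p| p[:value] }
      procedure.status = :complete                              # step 4b
    end
    procedure
  end
end

class FirstChoiceOrganizer
  # Trivial Organizer: always picks the first value the constraint allows.
  def populate(procedure)
    procedure.parameters.each_value { |p| p[:value] ||= p[:constraint].first }
    procedure
  end
end

# The Generator supervises the conversation and terminates as necessary.
def generate(expert, organizer, max_rounds: 10)
  procedure = expert.skeleton
  max_rounds.times do
    procedure = expert.inspect_selection(organizer.populate(procedure))
    return procedure if procedure.status == :complete
  end
  procedure.status = :failed
  procedure
end

procedure = generate(PickListExpert.new, FirstChoiceOrganizer.new)
```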
42. Organizer: One Example Test Plan
• Organizer is responsible for constructing a set of test
procedures according to the Test Plan
– The Test Plan is really the resultant set of Test Procedures
– Organizers could be as simple as a Fuzzer
• Test Plan could be satisfied by randomly selecting a value for any configurable parameter
– Organizers could be as complex as desired and could be as
specific as required
• Test Plan could only be satisfied by successfully
constructing a specific database
43. Organizer: One Example Test Plan (2)
• Example of a Test Plan
– CREATE a TABLE tblStudents in dbUniversity with 5
columns
• s_ID as a PRIMARY KEY, AUTO INCREMENT, INT
• s_FamilyName as VARCHAR(255), NOT NULL
• s_Program as VARCHAR(32), NOT NULL
• s_Major as INTEGER, FOREIGN KEY( tblMajors, m_ID)
• s_Enrolled as DATETIME
– CREATE a TABLE tblStudentClasses in dbUniversity with 3
columns
–…
• The following example illustrates the process of building a procedure according to the Test Plan
44. Example Procedure Construction
• Participants
– Expert: Structured Query Language (SQL) Domain Expert
– Organizer: Construct University Database
• Beginning Test Selection
• Expert provides a list of possible tests
– the SQL Expert provides:
• CREATE, SELECT, INSERT, UPDATE, DELETE, DROP
• Organizer selects test, requests procedure
– the Organizer would choose:
• CREATE
• After the Test has been chosen, the Organizer can request
that the Expert provide a Test Procedure
42
45. Example Procedure Construction (2)
• Expert’s Initial response to choosing CREATE
id: CREATE-NEW-NEXT
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] =
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
• Expert returns procedure with 1 parameter: [CREATE-TYPE]
– Organizer has 2 options; choose TABLE or DATABASE
• Each parameter has a constraint which the Organizer should
follow
– Many different types of constraints possible
– Expert may or may not enforce the constraint; if not
followed the procedure may be flagged as invalid
• Organizer returns procedure with option selected
43
46. Example Procedure Construction (3)
• Organizer’s Test Plan requires that the Procedure for a CREATE
TABLE be built
– Organizer selects TABLE for the [CREATE-TYPE] parameter
• Organizer returns the following procedure to the Expert
id: CREATE-NEW-NEXT
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
• Use of Step and Parameters is part of the agreed-upon
Procedural language
– In the case of the SQL example, the parameters will be
substituted inline to construct the desired SQL command
44
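The inline substitution described above can be sketched as a small Ruby helper: each [NAME] token in a step is replaced by its selected parameter value. The `render_step` name and token syntax are assumptions modeled on the slides, not the reference implementation.

```ruby
# Substitute selected parameter values inline into a procedure step.
# An unset parameter means the step is not yet renderable as SQL.
def render_step(step, params)
  step.gsub(/\[([A-Z-]+)\]/) do
    name = Regexp.last_match(1)
    value = params[name]
    raise ArgumentError, "unset parameter [#{name}]" if value.nil?
    value.to_s
  end
end
```

With the selections from the later slides, `render_step("CREATE TABLE [TABLE-NAME]", { "TABLE-NAME" => "tblStudents" })` produces the SQL fragment `CREATE TABLE tblStudents`.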
47. Example Procedure Construction (4)
• Expert’s response to [CREATE-TYPE] selection
id: CREATE-TABLE-PICK-DATABASE
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
• Expert returns a procedure with no new parameters
– Procedure ID has changed
• Expert utilizes Procedure for determining next actions
• Organizer has no selections to make but the procedure
is not flagged as being complete, therefore it should
return the procedure to the Expert for the next update
45
48. Example Procedure Construction (5)
• Expert’s response to returned procedure
id: CREATE-TABLE-NEW
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] =
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)
• Expert returns the procedure, which has 1 NEW parameter:
[DATABASE-NAME]
– [DATABASE-NAME] has a constraint in which the only two
valid values are either HiVAT or dbUniversity
– Procedure ID has changed (again)
• Expert has not modified or removed the existing [CREATE-TYPE]
parameter
46
49. Example Procedure Construction (6)
• Organizer selects value according to Test Plan
– Organizer selects dbUniversity for the [DATABASE-NAME] parameter
• Organizer returns the following procedure to the Expert
id: CREATE-TABLE-NEW
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] = dbUniversity
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)
• Organizer has only selected a parameter which was previously
empty
– If a previously selected parameter is changed, it may cause the
Expert to flag the Procedure as Failed (e.g. setting
[CREATE-TYPE] to DATABASE)
47
50. Example Procedure Construction (7)
• Expert updates procedure and returns it to the Organizer
id: CREATE-TABLE-PARAMS
step: CREATE TABLE [TABLE-NAME]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] = dbUniversity
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)
* parameter: [TABLE-NAME] =
constraint: ALPHANUMERIC WORD (LENGTH MIN: 5, MAX: 25)
* parameter: [TABLE-COLUMNS] =
constraint: RANGED-INTEGER (MIN: 1, MAX: 25)
• The procedure has 2 NEW parameters
– [TABLE-NAME] has an ALPHANUMERIC constraint
– [TABLE-COLUMNS] has a RANGED INTEGER constraint
– Procedure ID and Step name have also changed
48
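The three constraint kinds seen so far (PICK-LIST, ALPHANUMERIC WORD, RANGED-INTEGER) can each be sketched as a small checker. The table of lambdas and the keyword options (:choices, :min, :max) are assumptions about how the constraint strings would parse, purely for illustration.

```ruby
# One checker per constraint kind.
CONSTRAINT_CHECKS = {
  "PICK-LIST" => lambda { |value, opts|
    opts[:choices].include?(value)
  },
  "ALPHANUMERIC WORD" => lambda { |value, opts|
    !!(value =~ /\A[A-Za-z0-9]+\z/) && (opts[:min]..opts[:max]).cover?(value.length)
  },
  "RANGED-INTEGER" => lambda { |value, opts|
    value.is_a?(Integer) && (opts[:min]..opts[:max]).cover?(value)
  }
}

# The Expert may or may not enforce these; an Organizer that ignores a
# constraint risks having the procedure flagged as invalid.
def satisfies?(kind, value, **opts)
  CONSTRAINT_CHECKS.fetch(kind).call(value, opts)
end
```

Under these assumptions, the example selections check out: tblStudents is an alphanumeric word of length 11 (within 5..25), and 5 falls in the 1..25 column range.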
51. Example Procedure Construction (8)
• Organizer selects values according to Test Plan
– Organizer selects
• tblStudents for the [TABLE-NAME] parameter
• 5 for the [TABLE-COLUMNS] parameter
• Organizer returns the following procedure to the Expert
id: CREATE-TABLE-PARAMS
step: CREATE TABLE [TABLE-NAME]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] = dbUniversity
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)
* parameter: [TABLE-NAME] = tblStudents
constraint: ALPHANUMERIC WORD (LENGTH MIN: 5, MAX: 25)
* parameter: [TABLE-COLUMNS] = 5
constraint: RANGED-INTEGER (MIN: 1, MAX: 25)
49
52. Example Procedure Construction (9)
• Exchange between the Expert and Organizer continues until
– Expert flags the Procedure as Complete
– Expert flags the Procedure as Failed
– Generator determines that the maximum number of exchanges
has occurred *
• Organizer and Expert may not be able to agree on an
acceptable test
• All procedures (including Failed procedures) are returned to the
Controller
– Controller is configured to discard failed procedures
– If too many failed procedures are detected, the user is notified
and the test is terminated *
* User configurable values
50
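The Controller's bookkeeping described above can be sketched as follows. The class shape is hypothetical; `max_failed` stands in for the user-configurable threshold marked with * in the slide.

```ruby
# Collects procedures from the Generator, discarding failures and
# terminating the test once too many failures accumulate.
class Controller
  attr_reader :kept, :failed_count

  def initialize(max_failed: 5)
    @max_failed = max_failed # user-configurable value
    @kept = []
    @failed_count = 0
  end

  # Returns :continue normally, or :terminate once too many failed
  # procedures have been seen (at which point the user is notified).
  def receive(procedure)
    if procedure[:status] == :failed
      @failed_count += 1     # failed procedures are discarded
      return :terminate if @failed_count >= @max_failed
    else
      @kept << procedure
    end
    :continue
  end
end
```

A Controller built with `max_failed: 2` keeps its first complete procedure, tolerates one failure, and signals termination on the second.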
53. Maadi: Key Ideas
• Exploit the commonalities across each HiVAT technique
• Enable a conversation with the Knowledge Experts to
empower the tests to be more dynamic, yet still sensible
51
54. Maadi: Reference Code
• Work in Progress
– http://testingeducation.org/
– See HiVAT section of site
52
55. Is this the future of testing?
• Maybe, but within limits
• Expensive way to find simple bugs
– Significant troubleshooting costs: shoot a fly with an
elephant gun and discover you have to spend enormous
time wading through the debris to find the fly
• Ineffective way to hunt for design bugs
• Emphasizes the families of tests that we know how to code
rather than the families of bugs that we need to hunt
As with all techniques:
these are useful under some circumstances,
probably invaluable under some circumstances,
but ineffective for some bugs and inefficient for others
53