This is the abstract published in the IEEE Software Engineering Conference proceedings, in the industry experience sharing section of the International Workshop on Combinatorial Testing. We share the experience and serendipitous findings of how CT can be used for early requirements validation.
2. The hybrid approach enabled coverage prioritization, i.e., a higher level of coverage on the attributes that are critical to testing and lower coverage on those that are less critical yet still need to be covered in the tests, as sketched below. The manually designed tests and the CTD models were shared with the client for a collaborative review and final sign-off.
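For illustration, the following is a minimal sketch of the coverage-prioritization idea, assuming hypothetical attribute names and a simple split into "critical" and "other" attributes; a real CTD tool would express this through per-attribute interaction strengths rather than hand-rolled code.

    # A minimal sketch of coverage prioritization (hypothetical attributes):
    # full pairwise coverage over the critical attributes, but only
    # "each value appears at least once" coverage for the remaining ones.
    from itertools import product

    critical = {"payment_method": ["card", "wallet", "bank"],
                "order_type": ["new", "renewal"]}
    other = {"locale": ["en", "de", "fr"],
             "notification": ["email", "sms"]}

    # With two critical attributes, their cartesian product is exactly
    # the set of value pairs that pairwise coverage must exercise.
    critical_combos = list(product(*critical.values()))

    tests = []
    for combo in critical_combos:
        test = dict(zip(critical, combo))
        # Rotate through the non-critical values so each value is still
        # exercised at least once across the suite.
        for name, values in other.items():
            test[name] = values[len(tests) % len(values)]
        tests.append(test)

    for t in tests:
        print(t)

Running the sketch yields six tests: the critical attribute pairs are fully covered, while the non-critical values are merely cycled through, which is the kind of weighted coverage the hybrid approach aimed for.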
III. FINDINGS FROM INTERACTIVE REVIEW SESSIONS
In the CTD review session, we observed that the clients and business analysts were able to understand the CTD model construct with more ease than the manual test cases. During the very first functional module review, in which we reviewed two CTD models covering 122 test cases and 61 manual test cases, the clients were able to provide insights on the attributes and interactions. They also shared inputs on the levels of interaction they would require. We were able to modify the model, add attributes, change the interaction levels, and share the coverage impact.
The first model originally had six attributes and 16 restrictions (business rules). It resulted in 49 good path tests and one bad path test. Based on the client review, we added two more attributes to the original six, attributes that were understood from the requirements and system design specifications. This was done in real time in the tool during the review, and the test combinations were regenerated instantaneously in the CTD tool for client approval and sign-off. This greatly strengthened their confidence in the test solution and coverage.
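To make the model construct concrete, the sketch below shows, in plain Python rather than the CTD tool used on the project, how a small model of attributes and restrictions yields "good path" combinations, how restricted combinations become candidate "bad path" tests, and how a greedy pass reduces the good-path set to pairwise coverage. All attribute names, values, and the restriction are hypothetical.

    # A minimal sketch of a CTD-style model: attributes, one restriction
    # (business rule), good-path vs bad-path combinations, and a greedy
    # reduction to pairwise (2-way) coverage. All names are hypothetical.
    from itertools import product, combinations

    attributes = {
        "account_type": ["savings", "current", "loan"],
        "channel": ["branch", "online", "mobile"],
        "segment": ["retail", "corporate"],
    }

    # Hypothetical restriction: loan accounts are not serviced via mobile.
    def allowed(test):
        return not (test["account_type"] == "loan" and test["channel"] == "mobile")

    names = list(attributes)
    all_combos = [dict(zip(names, vals)) for vals in product(*attributes.values())]
    good_path = [t for t in all_combos if allowed(t)]
    bad_path = [t for t in all_combos if not allowed(t)]  # candidate negative tests

    # Pairwise coverage: every pair of attribute values seen in some valid test.
    def pairs(test):
        return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

    uncovered = set().union(*(pairs(t) for t in good_path))
    selected = []
    while uncovered:
        # Greedily pick the test covering the most still-uncovered pairs.
        best = max(good_path, key=lambda t: len(pairs(t) & uncovered))
        selected.append(best)
        uncovered -= pairs(best)

    print(f"{len(good_path)} good-path combinations, {len(selected)} pairwise tests, "
          f"{len(bad_path)} bad-path candidates")

In the actual engagement, the CTD tool performed this generation instantaneously, which is what allowed attributes and interaction levels to be changed live during the client review.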
The client review of the single-click verification steps, for which we had adopted a parallel manual design process, was not as smooth. Interestingly, the client was not able to easily visualize the steps and the test actions and found it difficult to distinguish between precursor steps and final validations. They wanted the test cases to be decoupled or broken down to provide more clarity. The manual tests also tended to follow screen actions instead of functionality. The tests required elaboration as well, increasing the number of test cases needed to provide the required structure, clarity, and coverage. Even though the content was present, it required rewriting because the flow was not evident.
We observed that the CTD model, on the other hand, could clearly articulate at a glance the approach taken to testing the products and brands, irrespective of how the data and responses were provisioned and irrespective of the technical landscape. Clients were able to follow the flow of the tests as a business process, viewed as a customer's record journey.
The business analysts / clients could quickly
• Comprehend the combinations from a business standpoint
• Relate the CTD model output to the requirements and business behavior
• Directly add value in terms of commonly expected actions from end-customers
• Provide review inputs and recommend changes in an online review mode
• Establish areas of increased test weightage to skew the tests towards functionality that would typically be used by end customers.
Overall, using CTD resulted in rapid consensus on the features and functionality. It helped in rapidly building client confidence in the test content and approach. The necessary test coverage was established, and sign-off on the Test Readiness Review was obtained in time for the SIT schedule.
IV. CONCLUSION
Using a tool-based approach to test design that leverages Combinatorial Testing ensured active participation and collaboration between the client, business analysts, and testers to build the right tests. Functional gaps in the requirements were naturally highlighted without referencing back to inadequate requirements, open questions, uncertain application behavior, interface specifications, or related queries that had gone unanswered in the past. This helped to create a strong, interactive, and collaborative environment.
Given the rapidity with which functional queries and clarifications were addressed, it becomes obvious not only that CT models should be the default review and Test Readiness sign-off criterion, but, more importantly, that CT can be used as an early shift-left lever to not just clarify requirements but also to define requirements and functional actions upfront. Based on this experience, we expect that CT can also become a powerful tool for driving Agile development. CT tools can be used to craft and elaborate user stories and to establish requirements interaction and complexity ranking. CT will enable the business to achieve early quality and speed to market by enabling Test-Driven Business Definition and Requirements Elaboration.
ACKNOWLEDGMENT
We thank the team, the managers, and the account leaders of the project experience shared: Shaji K Namath and Sanjiv Gupta. Special thanks to Reenav Gala and Dalibor Stavek, business analysts from the IBM team, who helped with the model design and owned explaining the functionality to the clients. We thank Roopa Vasan for her planning and her team for developing the CTD models. Our deepest gratitude to Phil Stead for driving this agenda with the client and for being the bridge that made it succeed; he defined and set the context upfront. Thanks to Andrew Williams of the IBM IGNITE Leadership team for being encouraging as always. Finally, without Amitava Sharma, who gave us the great freedom to explore and do things differently, we would not have experienced this great step change in how CT modelling can be used.