Value-Inspired Testing:
    Renovating Risk-Based Testing, and
       Innovating with Emergence v1.0
Neil Thompson
Thompson information Systems Consulting Ltd
23 Oast House Crescent, Farnham, Surrey, GU9 0NP, England, UK
NeilT@TiSCL.com | www.TiSCL.com | @neilttweet | @neiltskype | +44 (0)7000 NeilTh (634584)



Abstract
Is testing “dead”? Some parts are declining, but
evolution can inspire survival.

To renovate the use of risk:
       collate current variants, eg “Risk-Based”,
       “Risk-Driven”;
       use a context-driven mix of principles;
       prioritise testing from high to low (not zero);
       consider value as benefits minus risks;
       remember risk applies throughout testing,
       from static testing through execution, bug-
       fixing and beyond;
       integrate risk into Value Flow ScoreCards to
       manage across complementary views of
       quality.

To innovate:
       consider evolution in nature: periods of
       ecosystems in stability, punctuated by
       innovative disturbances;
       as genes evolve in biology, “memes” evolve
       in thinking;
       testing’s history suggests some specific
       memeplexes;
       natural innovation seems to emerge on a
       path between “excess order” and “excess
       chaos”;
       could testing evolve similarly? Try Johnson’s
       “Where good ideas come from”.

So: VIVVAT Value-Inspired Verification, Validation
And Testing! Please join me in exploring our future.


Contents
Abstract
0. Introduction
1. Renovating the use of Risk in testing
   1.1 Current variants of Risk-Based Testing etc
   1.2 Context-driven mix of available principles
   1.3 Risk-Graded Testing
   1.4 Value-Graded Testing
   1.5 Value-Inspired Testing
   1.6 Value Flow ScoreCards
       Through the lifecycle
       Integrating risk
2. Innovating in testing, using Emergence concepts
   2.1 Evolution in Nature
       Biology
       Relationship with other sciences
   2.2 Evolution of Software Testing
       The view ahead
       The story so far
   2.3 Genes to Memes
   2.4 Memeplexes in the History of Testing
   2.5 Emergence between “Too Much Chaos” and “Too Much Order”
   2.6 Innovation and Ideas for Testing
3. VIVVAT Value-Inspired Verification, Validation And Testing
References and Acknowledgements




0. Introduction
The theme of this EuroSTAR 2012 conference is "Innovate & Renovate: Evolving Testing". In his call for submissions,
programme chair Zeger van Hese included a quotation from William Edwards Deming: "Learning is not compulsory...
neither is survival." This is presumably a veiled threat – if we don’t learn, we may not survive. But is it already too
late? Several speakers have recently alleged that testing is dead, or delivered some very similar message:
         Tim Rosenblatt (Cloudspace blog 22 Jun 2011) “Testing Is Dead – A Continuous Integration Story For Business
         People”;
         James Whittaker (STARWest 05 Oct 2011) “All That Testing Is Getting In The Way Of Quality”; and
         Alberto Savoia (Google Test Automation Conference 26 Oct 2011) “Test Is Dead”.

There may be others. But I suggest that at least some of these commentators seem to be talking mainly about “the
testing phase”, with an emphasis on functional testing, “independent” of the developers. In particular, they mean
purveyors of standard, manual testing, which is increasingly offshored or automated. No-one seems to think that
performance, security or privacy testing is dead. And no-one seems to be suggesting that developers have stopped
testing, nor that everyone else should. It is more a question of who does the testing, and how.

So in this paper, when I talk about testing, I mean all of testing. I include:
          not just dynamic testing (executing software), but various kinds of static testing, eg reviews;
          not just functional testing but all the non-functional (or para-functional) types – and this list itself may evolve.

I consider what we can learn from the history of testing and its place in the “ecosystems” of IT products and projects.
Testing has been called many things: an art, a craft, and more recently some people (including myself) have been
trying to make it more of a science – even if that means it is a social science (as Cem Kaner argues).




I think of testing as “value flow management” – we should be facilitating / assisting / monitoring / measuring /
improving / optimising (according to your taste, context and role) the flow of value all the way from ideas in people’s
heads (initial requirements) through to not only implemented but also service-managed, supported and maintained
systems and services, in their human context. To do this, in today’s environment of increasingly-rapid, innovative and
pervasive change, we do need to renovate and innovate. When holistic and evolving, testing will not die (and must not
be allowed to die). I choose to focus on:

         renovating the increasingly fragmented and apparently-neglected subject of risk-based testing; and
         using analogies from science and evolution to inspire ideas for innovation in testing generally.

1. Renovating the use of Risk in testing
1.1 Current variants of Risk-Based Testing etc
The first step in renovation is to collate what variants of “Risk-Based Testing” (or related terms) are around, and how
we arrived at this situation. The diagram below shows a simplified flow over time, from left to right.




The early books by Hetzel, Myers and Beizer all contained some notions of testing as depending on principles of risk,
but this was mostly implicit. Then in the later 1980s and through the 1990s, basing testing on risk became explicit as
a statement of theory. But you wait ages for guidance on how to do risk-based testing in practice, then in 2002 three
books came along at once!
         Paul Gerrard, drawing on the earlier work of James Bach and others, published Risk-Based E-Business Testing,
         the theme of which was imagining what could go wrong with a system, then designing tests to address those
         risks. I was co-author of that book.
         Craig & Jaskiel described in Systematic Software Testing a somewhat different view of risk-based testing,
         which prioritised software features and attributes according to risk (its current version is called risk-driven
         testing, and has no doubt evolved since then);
         Kaner, Bach & Pettichord published Lessons Learned in Software Testing, which included context-driven
         versions of both of the above variants, but distinguished them as risk-based test design and risk-based test
         management respectively.

Since then, I have seen a variety of approaches, published in books, papers or as proprietary methods. I meet many
people who tell me they know what risk-based testing is, it’s quite easy to do, and it’s “not that stuff over there, that’s
not risk-based testing”. I think these are all useful to some degree, but I believe they are all partial views (either
focussing on the prioritisation side or on the risks-as-test-entities side), some seem to be too prescriptive / too
simplistic / too complex, and I do not believe that risk-based testing is easy. Not good risk-based testing, anyway.

The field seems to be fragmented; and it no longer seems to receive the attention it used to. Fashion has moved on to
other subjects. Are some people just paying lip-service to risk-based testing? How many people are doing it well? How
does it relate to / merge into safety-critical methods? In 2007 I integrated the two main aspects of risk-based testing
into my Holistic Test Analysis & Design method, but that is only part of the story (and does not yet have tool support).

I think it is time for a broad re-appraisal of the whole subject – away from one-size-fits-all, to be more inclusive of
various approaches, more responsive to context.
1.2 Context-driven mix of available principles
I would like to see more cross-fertilisation and unification between the “upper and lower halves”, sometimes called
risk-based test management and risk-based test design. On some projects these are done by different people of
course, but not always. And anyhow, the two halves should fit together. One way (and it is only one choice) is to
mirror-image James Bach’s Heuristic Test Strategy Model (HTSM), as illustrated below.




The lower half is borrowed straight from the HTSM, and the upper half is modified to show similar usage for
prioritisation of work. I do not mean simply “do this first, then that...” – decisions need to be made on what to
prioritise, and how. The message here is that we should be ready to mix and match methods and techniques from the
variety available, depending on context factors.

1.3 Risk-Graded Testing
One thing I feel compromises the respectability of risk-based testing in some situations is the notion that having
prioritised things, we can set a cut-off threshold below which things are not tested. A better way, I believe, is to
“grade” coverage and/or effort, from low (not zero) to high, according to the selected risk factors.
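
As a minimal sketch of this grading idea (the scoring formula, thresholds and example features below are my own
illustrative assumptions, not part of any published method):

```python
# Illustrative sketch only: grade test coverage/effort by risk,
# with a floor of "low" rather than a cut-off to zero.

def risk_score(probability: float, consequence: float) -> float:
    """Combine the two risk components into a single score in [0, 1]."""
    return probability * consequence

def graded_effort(score: float) -> str:
    """Map a risk score to an effort grade. Note there is no 'zero' grade:
    even the lowest-risk items receive some minimal testing."""
    if score >= 0.6:
        return "high"    # e.g. thorough technique coverage plus exploration
    if score >= 0.3:
        return "medium"  # e.g. main paths plus key negative tests
    return "low"         # minimal but never zero, e.g. smoke-level checks

features = {"payments": (0.7, 0.9), "help text": (0.2, 0.1)}
for name, (probability, consequence) in features.items():
    print(name, "->", graded_effort(risk_score(probability, consequence)))
```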




I think “Risk-Graded Testing” might be a better term here than “Risk-Based Testing”. One reason is that Risk-Based
Testing aligns with the term Test Basis, often used to mean a document or other oracle against which tests are
designed. Another reason is that it distances itself from cruder notions of prioritisation, and from cut-off thresholds.

1.4 Value-Graded Testing
Taking this a step further: we should grade testing coverage / effort not only by risk, but also by the varying benefits of
the features being tested. There is a partial correlation, because features which have high benefits will also tend to
have high business impact if they go wrong, but it is worth making the distinction because considering the benefits
may generate specific test ideas and inform the selection of test techniques. Particularly in agile methods, if a feature
is exhibiting serious bugs in testing and is not of critical benefit, it is more likely to be descoped from a release.




We may think of value in terms of expected benefits minus residual risk after an amount of testing.
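
A toy calculation may make this concrete. This is a sketch under strong assumptions: the numbers, and the
exponential model of how testing effort mitigates residual risk, are hypothetical.

```python
# Hypothetical illustration of value = expected benefits - residual risk.

def residual_risk(initial_risk: float, test_effort: int,
                  mitigation_rate: float = 0.5) -> float:
    """Assume each unit of test effort mitigates half the remaining risk."""
    return initial_risk * (1 - mitigation_rate) ** test_effort

def value(benefit: float, initial_risk: float, test_effort: int) -> float:
    return benefit - residual_risk(initial_risk, test_effort)

# A feature with expected benefit 100 and initial risk exposure 40:
for effort in range(4):
    print(f"effort {effort}: value {value(100, 40, effort):.1f}")
```

On this toy model, value rises with testing effort but with diminishing returns, which is one way of seeing why
grading effort by risk and benefit is attractive.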

1.5 Value-Inspired Testing
Risk is relevant at all levels of testing, but the risks differ by level. The diagram below illustrates several principles:
          all the way through the lifecycle, different risks accumulate;
          the quality information a test provides depends on comparing the software’s behaviour with the test model
          and the development model (verification testing), and also with real-world desired behaviour (validation testing).




Although this is shown in the format of a V-model, it is not necessarily advocating “the” V-model in its traditional
sense. I argue that all lifecycles have some kind of levels of stakeholders & participants, levels of specifications / other
oracles and levels of integration of the developed system. Iterative lifecycles can be considered as repeatedly
descending then ascending through some or all of these levels in various ways.

Looking at this in more detail: requirements are necessarily a simplification of the way the software will behave in use;
no requirements can be perfect. When functional and non-functional specifications are written, there are risks of
distorting / omitting requirements, or adding functionality that is not really wanted. And so on through design and
coding – all of these are different risks with their own set of risk factors (each with their probability and consequence
components). This chain (or rather, network) of risks corresponds to the various definitions of mistake, defect, fault,
failure etc.
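
A minimal sketch of this accumulation follows; the stage names echo the text, but the risk numbers are invented for
illustration, and each risk is reduced to its probability multiplied by its consequence.

```python
# Risks accumulating through lifecycle stages; each risk has
# probability and consequence components (all numbers invented).
stages = {
    "requirements":  [(0.30, 8)],            # (probability, consequence)
    "specification": [(0.20, 6), (0.10, 9)],
    "design":        [(0.25, 5)],
    "coding":        [(0.40, 4)],
}

accumulated = 0.0
for stage, risks in stages.items():
    exposure = sum(p * c for p, c in risks)
    accumulated += exposure
    print(f"{stage}: exposure {exposure:.2f}, accumulated {accumulated:.2f}")
```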




To manage these various risks, we need a variety of techniques. The traditional view is that the earlier we mitigate
risks, the less the knock-on effect (diagram below), although in agile methods some more tactical risk management is
used, eg making some decisions as late as possible, allowing technical debt to build, then refactoring at suitable times.




Looking more closely at validation: it includes all the decisions that cannot be made by simply “checking” behaviour
against a specification:
         Even if good specifications exist, are they 100% up to date? Are they still what is wanted, or is a change
         request needed?
         No specification is perfectly detailed or specifies every possible thing which the software should do and
         should not do (expressible as risks), therefore some behaviour will be implicit / assumed, and judgement will
         be needed;
         in some contexts, traditional specifications may not exist at all;
         testers may therefore need “oracles” other than specifications – for example:
              o consistency with product / system purpose, history, image, claims, comparable products / systems etc;
              o familiar failure patterns.

So in summary, risk-related principles apply throughout testing, from reviews to test specification through execution
to retesting, regression testing, go-live and beyond.

1.6 Value Flow ScoreCards
Now, how can we manage risk throughout the system development lifecycle and throughout testing? I propose in this
paper a framework to do this, but in order to get there, for a few moments let us take a step back from risk.

Through the lifecycle
In the introduction I suggested we think of testing as value flow management. One approach to this is to start with the
concept of a balanced scorecard. On the left half of the diagram below is a version of Kaplan & Norton’s original. On
the right side is a modified version, tailored for software quality after a variety of authors.




The basic principle is that for each different view of quality, we may set a structure of objectives, measures, targets
and initiatives. Kaplan & Norton’s original purpose was “translating strategy into action”. In IT project terms, we may
ask:
          what are our objectives? (for example, we may want to adhere to a particular process standard, or achieve a
          certain degree of product quality, or a degree of customer satisfaction);
          by what measures will we gauge success – in colloquial terms, “what does good look like?”;
          what targets shall we set for a particular stage, eg the next software release? This could be in terms of bug
          frequencies and severities after go-live, but measures and targets need not be quantitative, for example
          rubrics could be used for customer satisfaction surveys.
          Then what initiatives shall we take to make this happen?

Four of the quality viewpoints may be thought of as applying to the current project; the fifth is about improvement,
for future projects.

In the following diagram, I develop this structure to fit conveniently within the software lifecycle. First I add two more
viewpoints, supplier and infrastructure. Then I arrange the viewpoints in a kind of “value flow unit”.




To use this practically, the scorecard becomes a table of seven columns and four rows. There is a rough logical flow from
left to right. In earlier papers I have outlined several applications in and around testing, but there is not space here to
describe those.
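
A minimal sketch of that table as a data structure follows. This is an assumed representation (the paper presents it
as a diagram), so apart from “supplier”, “infrastructure” and the rightmost “improvement” column, which are named in
the text, the viewpoint names below are placeholders.

```python
# Value Flow ScoreCard as a 7-column x 4-row table (sketch).
ROWS = ["objectives", "measures", "targets", "initiatives"]
VIEWPOINTS = ["supplier", "process", "people", "product", "user",
              "infrastructure", "improvement"]  # mostly placeholder names

scorecard = {vp: {row: [] for row in ROWS} for vp in VIEWPOINTS}

# Populating one cell, e.g. a (hypothetical) target for the product viewpoint:
scorecard["product"]["targets"].append(
    "no severity-1 bugs in the first month after go-live")
```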




In the following diagram (next page), I illustrate how the value flow items which can be defined for an individual team
or role can be cascaded to control value flow through the whole lifecycle, both down and up the levels and from left
to right (corresponding to static then dynamic testing).




Integrating risk
Now, we are ready to integrate risk into the scorecard. Risks may be seen as threats to the success of the objectives
for each view of quality, so we can insert a new row between objectives and the way we measure, target and define
the way forward. When we know the risks, we can build in appropriate management measures and tactics.
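
Continuing the earlier sketch, under the same assumptions, the risks become a new row of the table:

```python
# Scorecard rows with risks inserted between the objectives and the
# measures/targets/initiatives that respond to them (sketch).
ROWS = ["objectives", "risks", "measures", "targets", "initiatives"]
VIEWPOINTS = ["supplier", "process", "people", "product", "user",
              "infrastructure", "improvement"]  # placeholders as before

scorecard = {vp: {row: [] for row in ROWS} for vp in VIEWPOINTS}
scorecard["product"]["objectives"].append("achieve the agreed product quality")
scorecard["product"]["risks"].append(
    "complex integration may cause regression failures")  # hypothetical risk
```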




Next, let’s look at different types of risk. Many authors distinguish:
          product risks, ie threats to the quality of software; from
          project risks, ie threats to the conduct of project activities.

Some authors also distinguish a third type, process risk, which is a kind of specialism of project risk connected with
methodology.
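
These three types can be captured directly; in this small sketch the descriptions paraphrase the text above:

```python
from enum import Enum

class RiskType(Enum):
    PRODUCT = "threat to the quality of the software"
    PROJECT = "threat to the conduct of project activities"
    PROCESS = "project risk connected with methodology"
```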

The following diagram (next page) illustrates these, some examples, and relationships between the risks.




Finally, we can now be more specific about the risks in the scorecard – because there is a strong correlation between
the quality viewpoints and the risk types.




So to summarise up to now: we have arrived at a structure for setting out, balancing and measuring the full range of
quality viewpoints, and for associating with them the risks which threaten them. This is a complete, integrated quality
and risk management framework. To continue the renovation, future work should now bring together, using this
framework:

        a more holistic context-driven approach to risk, putting together the “two halves” of test design and test
        management and refining guidance on how to mix and match methods and techniques from the fragmented
        variety on offer;
        firming up practical advice on how to balance benefits against risks; and
        clarifying how risk management activities can be pragmatically controlled throughout the software lifecycle
        and throughout the testing process.

The challenge is to achieve an appropriate balance between a robust approach which risks being too complex, and an
achievable approach which risks being too simplistic to be useful; this balance varies of course with context.


Now to move towards the second half of this paper, which focuses on the rightmost column of the Value Flow
ScoreCard, ie improvement for future projects.




The above diagram illustrates the relationship between the Value Flow ScoreCard and a “toolbox” structure I
developed recently to fit around it, to embrace scientific thinking and a structure for thinking about innovation.



2. Innovating in testing, using Emergence concepts
This toolbox structure is not a primary focus of this paper, but the diagram below positions the risk-renovation and
testing-innovation parts of the paper within that structure, for reference:




The second part of this paper considers innovation in testing, via analogies with how innovation occurs in
nature.




2.1 Evolution in Nature
The outer layer of the toolbox consists of this triangle:




There is evidence that innovation in nature includes a phenomenon called emergence, which is associated with the
concepts of systems thinking and complexity theory. One way of looking at emergence is to see how different sciences
build progressively on top of each other, according to scale:




When human society is established, the resulting further innovation no longer depends on scale but becomes
explosive in its information content.

The explosion of human innovation is shown in more detail in the diagram on the next page (which also takes the
opportunity to invert the image to a more satisfying view).




The reference to Kurzweil epochs may not be appreciated by all readers. This is a rather extreme view of how
explosive human innovation may continue in the surprisingly-near future. Many people are very sceptical of these
predictions, but I would argue that bearing in mind the effects of Moore’s Law and the exponential innovation we
have seen in recent years, even if progress is not as fast as Kurzweil expects, software is headed for some big new
territory, and testing should be ready to boldly go there.

Biology
Leaving aside the particular technicalities of physics and chemistry, the most obvious part of the evolutionary saga is
the biological.




A way of appreciating evolution (admittedly not shared by everyone) is to consider it in two related dimensions:
        over time, diversity has increased (though not regularly, as we will see); and
        also, broadly, the sophistication of organisms has increased (with humankind being a spectacular recent
        example).

This concept is illustrated in the following diagram (next page).



But it seems that evolution has not been smooth. Instead, there seem to be long periods of relative stability,
interrupted by sudden upheavals such as mass extinctions or explosions of new species:




It is outside the scope of this paper to go into details, but there are examples in other sciences (eg physics, chemistry)
of sudden emergences, eg those transformations known as phase changes.

The diagram on the next page illustrates this idea. The point of mentioning this in a paper about software testing is
that many people (including myself) see this kind of behaviour as a universal phenomenon. We could, and maybe
should, learn from it.




Relationship with other sciences




The theory of such sudden advances was likened by Per Bak to the avalanches that occur unpredictably when a pile of
sand is continually added to from above – suddenly a stable or metastable state gives way to widespread change.
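
Per Bak’s model is simple enough to simulate. In the sketch below (grid size, threshold and reporting cut-off are
arbitrary choices of mine), grains are added steadily, yet occasional avalanches of very different sizes emerge with
no external trigger distinguishing the large ones:

```python
# Toy simulation of Per Bak's sandpile (the Abelian sandpile model).
import random

SIZE, THRESHOLD = 10, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple() -> int:
    """Resolve all over-threshold cells; return the avalanche size."""
    count = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(SIZE):
            for j in range(SIZE):
                if grid[i][j] >= THRESHOLD:
                    grid[i][j] -= 4
                    count += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < SIZE and 0 <= nj < SIZE:
                            grid[ni][nj] += 1  # grains off the edge are lost
    return count

random.seed(1)
for grain in range(2000):
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1       # add one grain at a random position
    size = topple()
    if size > 30:         # report only the larger avalanches
        print(f"grain {grain}: avalanche of {size} topplings")
```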

2.2 Evolution of Software Testing

The view ahead
Again you may ask: what has this to do with software testing? Well, if you accept the idea of software testing as a
social science, you should be aware that the social sciences (covering much of human history) are, like other sciences, subject to
punctuated equilibria. Another way of looking at the (Per Bak) avalanches is in terms of Gladwell’s “tipping points”.




Software testing has admittedly failed to keep up with advances in IT generally, and there are various ways out of this
situation. It could, as some have claimed, “die” – but what would that do for the quality of life of all those people who
depend on software? I would prefer to see us rise to the challenge, and help make the world not only a more complex
place but really a better place.




As IT has innovated explosively, it is worth the testing discipline taking a look ahead. For example, are we ready to test
artificial intelligence? (admittedly some lower forms of AI have been around and in use for a while, but when did you
last hear about them at a testing conference?).




The story so far
The table below represents my extrapolation of Gelperin & Hetzel’s historical analysis plus my recent interpretation of
the “schools of software testing” situation.




But what can my proposed analogies with science and nature contribute to this picture?




2.3 Genes to Memes
One way of understanding the explosive transition from slow biological evolution to rapid human cultural evolution is
to consider replicating units of human knowledge and habits as analogous to the genes of DNA. These cultural units
were named “memes” by Richard Dawkins, and many authors since have argued about the accuracy and usefulness
(or not) of this analogy. The illustration immediately below is of genes as media of biological evolution.




The next diagram illustrates the analogy with memes. Memes are not so well-defined, but like genes they replicate
(though not as precisely) and they mutate (more often and more extravagantly?).
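
As a caricature in code (entirely my own toy, not a model from the meme literature): replication is mostly faithful,
but mutation substitutes elements more freely than genetic copying would:

```python
# Toy illustration of meme replication with occasional mutation.
import random

VOCABULARY = ["testing", "checking", "quality", "risk", "value"]

def replicate(meme: str, mutation_rate: float = 0.1) -> str:
    """Copy a meme word-for-word, occasionally substituting a word."""
    return " ".join(
        random.choice(VOCABULARY) if random.random() < mutation_rate else word
        for word in meme.split()
    )

random.seed(7)
meme = "risk based testing adds value"
for generation in range(5):
    meme = replicate(meme)
    print(f"generation {generation}: {meme}")
```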




2.4 Memeplexes in the History of Testing
I am not the first author to claim a role for memes in software testing; the idea is already widespread on the internet.
But in the meme literature there is a concept termed a “memeplex” – being a collection of related and readily-
coexisting memes. It seems to me that memeplexes are a useful concept for understanding software development
ecosystems and schools of software testing.

Below (next page) are two examples of what might be called software testing memeplexes.


The first is an old attempt by myself to represent what was then known as “software testing best practice”:




The second is an entirely different representation (though also by myself) – and this attempts to represent the
antithesis of software testing “best practices”, namely a context-driven thought structure:




So, do memeplexes really help in understanding the evolution of software testing overall? I think they do, but even
more illuminating I believe are the ideas of platforms, cranes and tipping points. A memeplex codifies an ecosystem
which has become established on a platform. The driving questions are arguably:
         what are the cranes that get us to a new level, and the tipping points that make that lift respectable and
         respected?
         is this a single stream of evolution or are there multiple streams?

In the following diagram I take the Gelperin-Hetzel-based view of software testing history and attempt to express it in
the language of platforms, cranes and tipping points.




And another worry... here is a different view of the history (so far) of software testing.




Over the most recent few years, has innovation really almost stopped, or is there another explanation?

The diagram below (next page) shows a different view of testing innovation: cause-effect-chained rather than mere
reportage. The bullet points on the right of the picture are closely related to the material I am about to present
regarding innovation. But how do those factors and aids really operate?




2.5 Emergence between “Too Much Chaos” and “Too Much Order”
Now here is a new perspective on the initial ideas about evolution and emergence I expressed above. There are some
suggestions from the scientific literature that life evolves best on “the edge of chaos”:




2.6 Innovation and Ideas for Testing
A way of looking at testing (bearing in mind things I have said above) is to consider that it is part of an ecosystem with
development, but it lags slightly behind (or far behind, depending on your experience / opinion).

Development continually carves a path towards the “chaotic” end of the spectrum, because of market forces and the
typical personality mixes and cultures of programming groups. Conversely, testing tries to keep in step but is drawn
towards the “ordered” end of the spectrum by its typical tester psyche and the conservatism and risk-aversion of its
management.

I have tried to project the suspected tipping points I described above (psychology to method, method to art, art to
engineering etc) onto a swerving path between too much chaos and too much order.
There is communication between development and testing/quality disciplines, though development is in the lead.

In the platforms, cranes and tipping points illustration a few pages above, I questioned whether anything was wrong
with that picture. Hmm... I think there may be. My perception is that there have been essentially “two cultures” at
work here so far, not understanding each other well enough (see CP Snow, 1956, 1959 etc). The idea of “schools” of
software testing was introduced and publicised as part of the foundation of the Context-Driven School.

I suggest that, rather like testing lagging behind development, traditional testing has been lagging behind context-
driven. But I think that is at least partly due to the client business communities in finance and other traditional
markets having lagged behind the more modern business sectors. The main point however is that the two factions do
not communicate enough – more often they do not understand each other, agree to differ, or argue violently and
non-productively.




So, have I any suggestions to address this concern? Well, maybe...

Author Steven Johnson tells numerous stories of creativity and other innovation in some areas of commonality he has
identified (see diagram next page).


Johnson’s innovations are expressed as seven themes, introduced by the “reef-city-web” concepts and wrapped up by
a survey of the most significant human inventions in recent centuries.




The next diagram shows the specific facilitators that aid innovation from platform to platform.




The conclusion of the book is that over recent centuries the pattern of innovative environments has changed
markedly (as illustrated below).




So, what are the lessons from all this for software testing? The table below gives some examples.




3. VIVVAT Value-Inspired Verification, Validation And Testing
To renovate the Latin for “long may it live”: VIVVAT, a Value-Inspired evolution of Verification, Validation And Testing.
We still need all three: if we go to the trouble of writing specifications and developing them from higher-level
documents, we need verification. And in this increasingly agile world, we need validation more and more. Testing
suffers from a “two cultures” difficulty, but I hope that science can turn out to be a unifying factor to enable us all to
work most effectively in our various contexts.




References and Acknowledgements
The sources below have been the primary inputs to this work. This is not a full bibliography, and may be expanded in
future versions of this paper.




I am particularly grateful to colleagues with whom some of these ideas have been developed, both within and outside
client project work – in particular:
          Chris Comey of Testing Solutions Group, whose structure for risk-based testing made a useful and
          complementary counterpart to the method which Paul Gerrard and I published in the 2002 book Risk-Based
          E-Business Testing.
          The Software Testing Retreat – a small informal semi-regular gathering started in the UK by EuroSTAR
          regulars. In recent years this has grown to include some international friends. The original stimulus for the
          Value Flow ScoreCards idea came from Mike Smith who was interested in testing’s role in IT projects’
          “governance”, and the governance of testing itself. Isabel Evans was a major inspiration for my subsequent
          scorecard ideas which integrated well with her views of quality. My joint presentation with Mike Smith
          “Holistic Test Analysis & Design” at STARWest 2007 laid the foundations for the ScoreCard idea.
          Stuart Reid has published material on Risk-Based Testing and on innovation in software testing which
          contains some similar messages to those in this paper, and to which I have referred:
              o The Five Major Challenges to Risk-Based Testing; and
              o Lines of Innovation in Software Testing;
          Scott Barber blogged some persuasive material in response to the “testing is dead” blogs, and now has a
          scheme of mission-driven measurements which are aligned to value and risk (similar themes to this paper);
          and
          thanks to the Association for Software Testing, its members and the authors and teachers of the Black Box
          Software Testing series of courses, with whom I have had many fruitful conversations. These have given me a
          deeper insight into the principles and practices of the Context-Driven school of testing, and how those may
          be used (where context demands) to more thoughtfully interpret and selectively apply various testing
          methodologies of various degrees of formality and ceremony.









UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
DianaGray10
 
HCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAUHCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAU
panagenda
 
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...
Jeffrey Haguewood
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
Matthew Sinclair
 
GenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizationsGenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizations
kumardaparthi1024
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
innovationoecd
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Malak Abu Hammad
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
tolgahangng
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 
Choosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptxChoosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptx
Brandon Minnick, MBA
 
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development ProvidersYour One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
akankshawande
 
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Speck&Tech
 

Recently uploaded (20)

Fueling AI with Great Data with Airbyte Webinar
Fueling AI with Great Data with Airbyte WebinarFueling AI with Great Data with Airbyte Webinar
Fueling AI with Great Data with Airbyte Webinar
 
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptxNordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptx
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfHow to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf
 
Mariano G Tinti - Decoding SpaceX
Mariano G Tinti - Decoding SpaceXMariano G Tinti - Decoding SpaceX
Mariano G Tinti - Decoding SpaceX
 
Introduction of Cybersecurity with OSS at Code Europe 2024
Introduction of Cybersecurity with OSS  at Code Europe 2024Introduction of Cybersecurity with OSS  at Code Europe 2024
Introduction of Cybersecurity with OSS at Code Europe 2024
 
Generating privacy-protected synthetic data using Secludy and Milvus
Generating privacy-protected synthetic data using Secludy and MilvusGenerating privacy-protected synthetic data using Secludy and Milvus
Generating privacy-protected synthetic data using Secludy and Milvus
 
UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
 
HCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAUHCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAU
 
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
 
GenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizationsGenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizations
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 
Choosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptxChoosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptx
 
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development ProvidersYour One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
 
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
 

Value-Inspired Testing - renovating Risk-Based Testing, & innovating with Emergence (2012 paper)

0. Introduction

The theme of this EuroSTAR 2012 conference is "Innovate & Renovate: Evolving Testing". In his call for submissions, programme chair Zeger van Hese included a quotation from William Edwards Deming: "Learning is not compulsory... neither is survival." This is presumably a veiled threat – if we don't learn, we may not survive.

But is it already too late? Several speakers have recently alleged that testing is dead, or delivered some very similar message: Tim Rosenblatt (Cloudspace blog, 22 Jun 2011), "Testing Is Dead – A Continuous Integration Story For Business People"; James Whittaker (STARWest, 05 Oct 2011), "All That Testing Is Getting In The Way Of Quality"; and Alberto Savoia (Google Test Automation Conference, 26 Oct 2011), "Test Is Dead". There may be others.

But I suggest that at least some of these commentators seem to be talking mainly about "the testing phase", with an emphasis on functional testing "independent" of the developers. They mean in particular purveyors of standard, manual testing, which is increasingly offshored or automated. No-one seems to think that performance, security or privacy testing is dead. No-one seems to be suggesting that developers have stopped testing and so should everyone else. It is more a question of who, and how.

So in this paper, when I talk about testing, I mean all of testing. I include: not just dynamic testing (executing software), but various kinds of static testing, eg reviews; not just functional testing but all the non-functional (or para-functional) types – and this list itself may evolve. I consider what we can learn from the history of testing and its place in the "ecosystems" of IT products and projects.

Testing has been called many things: an art, a craft, and more recently some people (including myself) have been trying to make it more of a science – even if that means a social science (as Cem Kaner argues). I think of testing as "value flow management" – we should be facilitating / assisting / monitoring / measuring / improving / optimising (according to your taste, context and role) the flow of value all the way from ideas in people's heads (initial requirements) through to not only implemented but also service-managed, supported and maintained systems and services, in their human context.

To do this, in today's environment of increasingly rapid, innovative and pervasive change, we do need to renovate and innovate. When holistic and evolving, testing will not die (and must not be allowed to die). I choose to focus on: renovating the increasingly fragmented and apparently neglected subject of risk-based testing; and using analogies from science and evolution to inspire ideas for innovation in testing generally.
1. Renovating the use of Risk in testing

1.1 Current variants of Risk-Based Testing etc

The first step in renovation is to collate what variants of "Risk-Based Testing" (or related terms) are around, and how we arrived at this situation. The diagram below shows a simplified flow over time, from left to right.

The early books by Hetzel, Myers and Beizer all contained some notions of testing as depending on principles of risk, but this was mostly implicit. Then in the later 1980s and through the 1990s, basing testing on risk became explicit as a statement of theory. But you wait ages for guidance on how to do risk-based testing practically, then in 2002 three books came along at once! Paul Gerrard, drawing on the earlier work of James Bach and others, published Risk-Based E-Business Testing, the theme of which was imagining what could go wrong with a system, then designing tests to address those risks. I was co-author of that book. Craig & Jaskiel described in Systematic Software Testing a somewhat different view of risk-based testing, which prioritised software features and attributes according to risk (its current version is called risk-driven testing, and has no doubt evolved since then). Kaner, Bach & Pettichord published Lessons Learned in Software Testing, which included context-driven versions of both of the above variants, but distinguished them as risk-based test design and risk-based test management respectively.

Since then, I have seen a variety of approaches, published in books, papers or as proprietary methods. I meet many people who tell me they know what risk-based testing is, that it's quite easy to do, and that it's "not that stuff over there – that's not risk-based testing". I think these are all useful to some degree, but I believe they are all partial views (focussing either on the prioritisation side or on the risks-as-test-entities side), some seem too prescriptive / too simplistic / too complex, and I do not believe that risk-based testing is easy. Not good risk-based testing, anyway.

The field seems to be fragmented, and it no longer seems to receive the attention it used to; fashion has moved on to other subjects. Are some people just paying lip-service to risk-based testing? How many people are doing it well? How does it relate to / merge into safety-critical methods? In 2007 I integrated the two main aspects of risk-based testing into my Holistic Test Analysis & Design method, but that is only part of the story (and does not yet have tool support). I think it is time for a broad re-appraisal of the whole subject – away from one-size-fits-all, to be more inclusive of various approaches and more responsive to context.
1.2 Context-driven mix of available principles

I would like to see more cross-fertilisation and unification between the "upper and lower halves", sometimes called risk-based test management and risk-based test design. On some projects these are done by different people of course, but not always; and anyhow, the two halves should fit together. One way (and it is only one choice) is to mirror-image James Bach's Heuristic Test Strategy Model (HTSM), as illustrated below. The lower half is borrowed straight from the HTSM, and the upper half is modified to show similar usage for prioritisation of work. I do not mean simply "do this first, then that..." – decisions need to be made on what to prioritise, and how. The message here is that we should be ready to mix and match methods and techniques from the variety available, depending on context factors.

1.3 Risk-Graded Testing

One thing I feel compromises the respectability of risk-based testing in some situations is the notion that, having prioritised things, we can set a cut-off threshold below which things are not tested. A better way, I believe, is to "grade" coverage and/or effort, from low (not zero) to high, according to the selected risk factors, as sketched below.
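As a minimal illustrative sketch of the grading idea (not from the original paper): the function below maps a composite risk score onto a coverage grade that never falls to zero. The particular risk factors, weights and grade bands are my assumptions, chosen only to make the principle concrete.

```python
# Hypothetical sketch of Risk-Graded Testing: coverage/effort is graded
# from low (never zero) to high according to selected risk factors.
# The factors, weights and grade bands are illustrative assumptions,
# not part of the original method.

RISK_FACTORS = {"probability": 0.5, "business_impact": 0.5}  # assumed weights

def risk_score(feature: dict) -> float:
    """Weighted composite of risk factors, each rated 0.0-1.0."""
    return sum(weight * feature[factor]
               for factor, weight in RISK_FACTORS.items())

def coverage_grade(score: float) -> str:
    """Grade coverage from low to high -- note there is no 'zero' band."""
    if score >= 0.7:
        return "high"    # e.g. full technique-based coverage plus exploratory
    if score >= 0.4:
        return "medium"  # e.g. main paths plus key negative tests
    return "low"         # still tested, e.g. smoke/sanity checks only

features = [
    {"name": "payments", "probability": 0.6, "business_impact": 0.9},
    {"name": "help pages", "probability": 0.3, "business_impact": 0.2},
]
for f in features:
    print(f["name"], coverage_grade(risk_score(f)))  # payments -> high, help pages -> low
```

The point of the sketch is the absence of a cut-off: even the lowest-scoring feature still receives some testing.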
I think "Risk-Graded Testing" might be a better term here than "Risk-Based Testing". One reason is that "Risk-Based Testing" aligns with the term "test basis", often used to mean a document or other oracle against which tests are designed. Another reason is that it distances itself from cruder notions of prioritisation, and from cut-off thresholds.

1.4 Value-Graded Testing

Taking this a step further: we should grade testing coverage / effort not only by risk, but also by the varying benefits of the features being tested. There is a partial correlation, because features which have high benefits will also tend to have high business impact if they go wrong, but it is worth making the distinction, because considering the benefits may generate specific test ideas and inform the selection of test techniques. Particularly in agile methods, if a feature is exhibiting serious bugs in testing and is not of critical benefit, it is more likely to be descoped from a release. We may think of value in terms of expected benefits minus residual risk after an amount of testing; a simple calculation of this kind is sketched below.
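To make the benefits-minus-risk idea concrete, here is a minimal hypothetical sketch. The linear model of residual risk shrinking with test effort, and all the numbers, are my illustrative assumptions rather than anything prescribed by the paper.

```python
# Hypothetical sketch of Value-Graded Testing: value = expected benefits
# minus residual risk after a given amount of testing. The exposure model
# (risk reduced linearly by test effort) is an illustrative assumption.

def residual_risk(initial_exposure: float, mitigation_per_unit: float,
                  test_effort_units: float) -> float:
    """Risk exposure left after testing; floored at zero, never negative."""
    return max(0.0, initial_exposure - mitigation_per_unit * test_effort_units)

def value(expected_benefit: float, initial_exposure: float,
          mitigation_per_unit: float, test_effort_units: float) -> float:
    """Expected benefit minus residual risk after the given test effort."""
    return expected_benefit - residual_risk(
        initial_exposure, mitigation_per_unit, test_effort_units)

# A high-benefit feature can carry more value than a low-benefit one
# even while it still has residual risk left:
print(value(expected_benefit=100.0, initial_exposure=60.0,
            mitigation_per_unit=5.0, test_effort_units=8))   # 80.0
print(value(expected_benefit=20.0, initial_exposure=10.0,
            mitigation_per_unit=5.0, test_effort_units=2))   # 20.0
```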
1.5 Value-Inspired Testing

Risk is relevant at all levels of testing, but the risks differ by level. The diagram below illustrates several principles: all the way through the lifecycle, different risks accumulate; and the quality information a test provides depends on comparison of the software's behaviour with the test model, with the development model (verification testing) and also with real-world desired behaviour (validation testing).

Although this is shown in the format of a V-model, it is not necessarily advocating "the" V-model in its traditional sense. I argue that all lifecycles have some kind of levels of stakeholders & participants, levels of specifications / other oracles, and levels of integration of the developed system. Iterative lifecycles can be considered as repeatedly descending then ascending through some or all of these levels in various ways.

Looking at this in more detail: requirements are necessarily a simplification of the way the software will behave in use; no requirements can be perfect. When functional and non-functional specifications are written, there are risks of distorting / omitting requirements, or of adding functionality that is not really wanted. And so on through design and coding – all of these are different risks with their own sets of risk factors (each with probability and consequence components). This chain (or rather, network) of risks corresponds to the various definitions of mistake, defect, fault, failure etc.

To manage these various risks, we need a variety of techniques. The traditional view is that the earlier we mitigate risks, the less the knock-on effect (diagram below), although agile methods use some more tactical risk management, eg making some decisions as late as possible, allowing technical debt to build, then refactoring at suitable times.

Looking more closely at validation: it includes all the decisions that cannot be made by simply "checking" behaviour against a specification:
Even if good specifications exist, are they 100% up to date? Are they still what is wanted, or is a change request needed? No specification is perfectly detailed, nor specifies every possible thing which the software should and should not do (each expressible as risks); therefore some behaviour will be implicit / assumed, and judgement will be needed. In some contexts, traditional specifications may not exist at all; testers may therefore need "oracles" other than specifications – for example:
o consistency with product / system purpose, history, image, claims, comparable products / systems etc;
o familiar failure patterns.

So in summary, risk-related principles apply throughout testing: from reviews to test specification, through execution, to retesting, regression testing, go-live and beyond.

1.6 Value Flow ScoreCards

Now, how can we manage risk throughout the system development lifecycle and throughout testing? I propose in this paper a framework to do this, but in order to get there, let us for a few moments take a step back from risk.

Through the lifecycle

In the introduction I suggested we think of testing as value flow management. One approach to this is to start with the concept of a balanced scorecard. On the left half of the diagram below is a version of Kaplan & Norton's original; on the right side is a modified version, tailored for software quality after a variety of authors. The basic principle is that for each different view of quality, we may set a structure of objectives, measures, targets and initiatives.

Kaplan & Norton's original purpose was "translating strategy into action". In IT project terms, we may ask: What are our objectives? (For example, we may want to adhere to a particular process standard, or achieve a certain degree of product quality, or a degree of customer satisfaction.) By what measures will we gauge success – in colloquial terms, "what does good look like?" What targets shall we set for a particular stage, eg the next software release? (This could be in terms of bug frequencies and severities after go-live, but measures and targets need not be quantitative; for example, rubrics could be used for customer satisfaction surveys.) Then what initiatives shall we take to make this happen?

Four of the quality viewpoints may be thought of as applying to the current project; the fifth is about improvement, for future projects.
In the following diagram, I develop this structure to fit conveniently within the software lifecycle. First I add two more viewpoints, supplier and infrastructure. Then I arrange the viewpoints in a kind of "value flow unit". To use this practically, the scorecard becomes a table of seven columns and four rows, with a rough logical flow from left to right. In earlier papers I have outlined several applications in and around testing, but there is not space here to describe those.

In the following diagram (next page), I illustrate how the value flow items which can be defined for an individual team or role can be cascaded to control value flow through the whole lifecycle, both down and up the levels and from left to right (corresponding to static then dynamic testing); a minimal data-structure sketch of such a scorecard follows below.
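As a hypothetical sketch of the seven-column, four-row table: the supplier and infrastructure viewpoints and the four row names follow the paper's description, but the remaining viewpoint names and all example cell contents are my assumptions for illustration only.

```python
# Hypothetical sketch of a Value Flow ScoreCard as a 7-column x 4-row table.
# Columns are quality viewpoints (supplier and infrastructure added per the
# paper; the other viewpoint names here are assumed); rows are the four
# balanced-scorecard elements. Example cell contents are also assumptions.

VIEWPOINTS = ["supplier", "financial", "customer", "internal process",
              "infrastructure", "learning & growth", "improvement"]
ROWS = ["objectives", "measures", "targets", "initiatives"]

def empty_scorecard() -> dict:
    """One cell per (viewpoint, row), read in a rough left-to-right flow."""
    return {vp: {row: None for row in ROWS} for vp in VIEWPOINTS}

scorecard = empty_scorecard()
scorecard["customer"]["objectives"] = "achieve agreed degree of product quality"
scorecard["customer"]["measures"] = "post-go-live bug frequency and severity"
scorecard["customer"]["targets"] = "no severity-1 bugs in first month live"
scorecard["customer"]["initiatives"] = "risk-graded test coverage per feature"
```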
Integrating risk

Now we are ready to integrate risk into the scorecard. Risks may be seen as threats to the success of the objectives for each view of quality, so we can insert a new row between the objectives and the way we measure, target and define the way forward. When we know the risks, we can build in appropriate management measures and tactics.

Next, let's look at different types of risk. Many authors distinguish product risks, ie threats to the quality of software, from project risks, ie threats to the conduct of project activities. Some authors also distinguish a third type, process risk, which is a kind of specialism of project risk connected with methodology. The following diagram (next page) illustrates these, some examples, and relationships between the risks.
Finally, we can now be more specific about the risks in the scorecard – because there is a strong correlation between the quality viewpoints and the risk types. So, to summarise up to now: we have arrived at a structure for setting out, balancing and measuring the full range of quality viewpoints, and for associating with them the risks which threaten them. This is a complete, integrated quality and risk management framework; a sketch of the risk row added to the scorecard follows below.

To continue the renovation, future work should now build together, using this framework:
o a more holistic context-driven approach to risk, putting together the "two halves" of test design and test management, and refining guidance on how to mix and match methods and techniques from the fragmented variety on offer;
o firming up into practical advice how to balance benefits against risks; and
o clarifying how risk management activities can be pragmatically controlled throughout the software lifecycle and throughout the testing process.

The challenge is to achieve an appropriate balance between a robust approach which is too complex and an achievable approach which is too simplistic to be useful; this balance varies, of course, with context.
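Extending the earlier hypothetical scorecard sketch: the risk row sits between objectives and measures, and each risk is tagged with a type. The paper states a strong correlation between viewpoints and risk types; the specific pairings below are my illustrative assumptions.

```python
# Hypothetical extension of the scorecard sketch: a "risks" row inserted
# between objectives and measures, each risk tagged by type. The
# viewpoint-to-risk-type pairings are illustrative assumptions.

ROWS_WITH_RISK = ["objectives", "risks", "measures", "targets", "initiatives"]
RISK_TYPES = ("product", "project", "process")

TYPICAL_RISK_TYPE = {           # assumed examples of the correlation
    "customer": "product",          # threats to the quality of the software
    "supplier": "project",          # threats to the conduct of the project
    "internal process": "process",  # methodology-related threats
}

def add_risk(scorecard: dict, viewpoint: str, description: str,
             risk_type: str) -> None:
    """Attach a typed risk that threatens the viewpoint's objectives."""
    assert risk_type in RISK_TYPES
    scorecard.setdefault(viewpoint, {}).setdefault("risks", []).append(
        {"description": description, "type": risk_type})

scorecard: dict = {}
add_risk(scorecard, "customer",
         "critical payment defects escape to production",
         TYPICAL_RISK_TYPE["customer"])
```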
Now to move towards the second half of this paper, which focuses on the rightmost column of the Value Flow ScoreCard, ie improvement for future projects. The above diagram illustrates the relationship between the Value Flow ScoreCard and a "toolbox" structure I developed recently to fit around it, to embrace scientific thinking and a structure for thinking about innovation.

2. Innovating in testing, using Emergence concepts

This toolbox structure is not a primary focus of this paper, but is shown just to position the risk-renovation and testing-innovation parts of this paper within that structure, for reference. This second part of the paper moves on to consider innovation in testing, via analogies with how innovation occurs in nature.
2.1 Evolution in Nature

The outer layer of the toolbox consists of this triangle:

There is evidence that innovation in nature includes a phenomenon called emergence, which is associated with the concepts of systems thinking and complexity theory. One way of looking at emergence is to see how different sciences build progressively on top of each other, according to scale.

When human society is established, the resulting further innovation no longer depends on scale but becomes explosive in its information content. The explosion of human innovation is shown in more detail in the diagram on the next page (which also takes the opportunity to invert the image to a more satisfying view).
The reference to Kurzweil epochs may not be appreciated by all readers: this is a rather extreme view of how explosive human innovation may continue in the surprisingly near future. Many people are very sceptical of these predictions, but I would argue that, bearing in mind the effects of Moore's Law and the exponential innovation we have seen in recent years, even if progress is not as fast as Kurzweil expects, software is headed for some big new territory, and testing should be ready to boldly go there.

Biology

Leaving aside the particular technicalities of physics and chemistry, the most obvious part of the evolutionary saga is the biological. A way of appreciating evolution (admittedly not shared by everyone) is to consider it in two related dimensions: over time, diversity has increased (though not regularly, as we will see); and also, broadly, the sophistication of organisms has increased (with humankind being a spectacular recent example). This concept is illustrated in the following diagram (next page).
But it seems that evolution has not been smooth. Instead, there seem to be long periods of relative stability, interrupted by sudden upheavals such as mass extinctions or explosions of new species.

It is outside the scope of this paper to go into details, but there are examples in other sciences (eg physics, chemistry) of sudden emergences, eg those transformations known as phase changes. The diagram on the next page illustrates this idea. The point of mentioning this in a paper about software testing is that many people (including myself) see this kind of behaviour as a universal phenomenon. We could, and maybe should, learn from it.

Relationship with other sciences

The theory of such sudden advances was likened by Per Bak to the avalanches that occur unpredictably when a pile of sand is continually added to from above – suddenly a stable or metastable state gives way to widespread change, as the simulation sketch below illustrates.
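To make the avalanche metaphor concrete, here is a minimal simulation in the spirit of the Bak-Tang-Wiesenfeld sandpile model (the model Per Bak's work is based on). The grid size and toppling threshold are standard textbook choices; this sketch is illustrative and is not part of the original paper.

```python
# Minimal illustrative sketch of a Bak-Tang-Wiesenfeld sandpile:
# long quiet periods punctuated by avalanches of unpredictable size.
import random

SIZE, THRESHOLD = 10, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple() -> int:
    """Relax the grid; return the number of topplings (avalanche size)."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for r in range(SIZE):
            for c in range(SIZE):
                if grid[r][c] >= THRESHOLD:
                    grid[r][c] -= 4  # four grains topple onto the neighbours
                    avalanche += 1
                    unstable = True
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                            grid[r + dr][c + dc] += 1
                        # grains toppled off the edge are simply lost

    return avalanche

for step in range(5000):
    grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1  # drop one grain
    size = topple()
    if size > 50:  # most steps change little; occasionally a large cascade
        print(f"step {step}: avalanche of size {size}")
```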
2.2 Evolution of Software Testing

The view ahead

Again you may ask: what has this to do with software testing? Well, if you accept the idea of software testing as a social science, you should be aware that the social sciences (and much of human history) are, like other sciences, subject to punctuated equilibria. Another way of looking at the (Per Bak) avalanches is in terms of Gladwell's "tipping points".

Software testing has admittedly failed to keep up with advances in IT generally, and there are various ways out of this situation. It could, as some have claimed, "die" – but what would that do for the quality of life of all those people who depend on software? I would prefer to see us rise to the challenge, and help make the world not only a more complex place but really a better place.
As IT has innovated explosively, it is worth the testing discipline taking a look ahead. For example, are we ready to test artificial intelligence? (Admittedly some lower forms of AI have been around and in use for a while, but when did you last hear about them at a testing conference?)

The story so far

The table below represents my extrapolation of Gelperin & Hetzel's historical analysis, plus my recent interpretation of the "schools of software testing" situation. But what can my proposed analogies with science and nature contribute to this picture?
2.3 Genes to Memes

One way of understanding the explosive transition from slow biological evolution to rapid human cultural evolution is to consider replicating units of human knowledge and habits as analogous to the genes of DNA. These cultural units were named "memes" by Richard Dawkins, and many authors since have argued about the accuracy and usefulness (or not) of this analogy. The illustration immediately below is of genes as the media of biological evolution; the next diagram illustrates the analogy with memes. Memes are not so well-defined, but like genes they replicate (though not as precisely) and they mutate (more often, and more extravagantly?).

2.4 Memeplexes in the History of Testing

I am not the first author to claim a role for memes in software testing; the idea is already widespread on the internet. But in the meme literature there is a concept termed a "memeplex" – a collection of related and readily-coexisting memes. It seems to me that memeplexes are a useful concept for understanding software development ecosystems and schools of software testing. Below (next page) are two examples of what might be called software testing memeplexes.
The first is an old attempt by myself to represent what was then known as "software testing best practice":

The second is an entirely different representation (though also by myself) – this attempts to represent the antithesis of software testing "best practices", namely a context-driven thought structure:

So, do memeplexes really help in understanding the evolution of software testing overall? I think they do, but even more illuminating, I believe, are the ideas of platforms, cranes and tipping points. A memeplex codifies an ecosystem which has become established on a platform. The driving forces are arguably these: what are the cranes that get us to a new level, and the tipping points that make that lift respectable and respected? And is this a single stream of evolution, or are there multiple streams? In the following diagram I take the Gelperin-Hetzel-based view of software testing history and attempt to express it in the language of platforms, cranes and tipping points.
And another worry... here is a different view of the history (so far) of software testing. Over the most recent few years, has innovation really almost stopped, or is there another explanation?

The diagram below (next page) shows a different view of testing innovation: cause-effect-chained rather than mere reportage. The bullet points on the right of the picture are closely related to the material I am about to present regarding innovation. But how do those factors and aids really operate?
2.5 Emergence between "Too Much Chaos" and "Too Much Order"

Now here is a new perspective on the initial ideas about evolution and emergence I expressed above. There are some suggestions from the scientific literature that life evolves best on "the edge of chaos":

2.6 Innovation and Ideas for Testing

A way of looking at testing (bearing in mind things I have said above) is to consider that it is part of an ecosystem with development, but that it lags slightly behind (or far behind, depending on your experience / opinion). Development continually carves a path towards the "chaotic" end of the spectrum, because of market forces and the typical personality mixes and cultures of programming groups. Conversely, testing tries to keep in step but is drawn towards the "ordered" end of the spectrum by the typical tester psyche and the conservatism and risk-aversion of its management. I have tried to project the suspected tipping points I described above (psychology to method, method to art, art to engineering etc) onto a swerving path between too much chaos and too much order.
There is communication between the development and testing/quality disciplines, though development is in the lead. In the platforms, cranes and tipping points illustration a few pages above, I questioned whether anything was wrong with that picture. Hmm... I think there may be. My perception is that there have been essentially "two cultures" at work here so far, not understanding each other well enough (see C. P. Snow, 1956, 1959 etc).

The idea of "schools" of software testing was introduced and publicised as part of the foundation of the Context-Driven School. I suggest that, rather like testing lagging behind development, traditional testing has been lagging behind context-driven. But I think that is at least partly because the client business communities in finance and other traditional markets have lagged behind the more modern business sectors. The main point, however, is that the two factions do not communicate enough – more often they do not understand each other, agree to differ, or argue violently and non-productively.

So, have I any suggestions to address this concern? Well, maybe... Author Steven Johnson tells numerous stories of creativity and other innovation, in some areas of commonality he has identified (see diagram, next page).
Johnson's innovations are expressed as seven themes, introduced by the "reef-city-web" concepts and wrapped up by a survey of the most significant human inventions in recent centuries. The next diagram shows the specific facilitators that aid innovation from platform to platform.
The conclusion of the book is that over recent centuries the pattern of innovative environments has changed markedly (as illustrated below). So, what are the lessons of all this for software testing? The table below gives some examples.
3. VIVVAT Value-Inspired Verification, Validation And Testing

To renovate the Latin for "long may it live": VIVVAT – a Value-Inspired evolution of Verification, Validation And Testing. We still need all three: if we go to the trouble of writing specifications and developing them from higher-level documents, we need verification; and in this increasingly agile world, we need validation more and more. Testing suffers from a "two cultures" difficulty, but I hope that science can turn out to be a unifying factor, enabling us all to work most effectively in our various contexts.
References and Acknowledgements

The sources below have been the primary inputs to this work. This is not a full bibliography, and may be expanded in future versions of this paper. I am particularly grateful to colleagues with whom some of these ideas have been developed, both within and outside client project work – in particular:

o Chris Comey of Testing Solutions Group, whose structure for risk-based testing made a useful and complementary counterpart to the method which Paul Gerrard and I published in the 2002 book Risk-Based E-Business Testing.
o The Software Testing Retreat – a small, informal, semi-regular gathering started in the UK by EuroSTAR regulars; in recent years this has grown to include some international friends.
o The original stimulus for the Value Flow ScoreCards idea came from Mike Smith, who was interested in testing's role in IT projects' "governance", and in the governance of testing itself. Isabel Evans was a major inspiration for my subsequent scorecard ideas, which integrated well with her views of quality. My joint presentation with Mike Smith, "Holistic Test Analysis & Design" (STARWest 2007), laid the foundations for the ScoreCard idea.
o Stuart Reid has published material on Risk-Based Testing and on innovation in software testing which contains some messages similar to those in this paper, and to which I have referred:
  o The Five Major Challenges to Risk-Based Testing; and
  o Lines of Innovation in Software Testing.
o Scott Barber blogged some persuasive material in response to the "testing is dead" blogs, and now has a scheme of mission-driven measurements which are aligned to value and risk (similar themes to this paper).
o And thanks to the Association for Software Testing, its members, and the authors and teachers of the Black Box Software Testing series of courses, with whom I have had many fruitful conversations. These have given me a deeper insight into the principles and practices of the Context-Driven school of testing, and how those may be used (where context demands) to more thoughtfully interpret and selectively apply testing methodologies of various degrees of formality and ceremony.