Value-Inspired Testing - renovating Risk-Based Testing, & innovating with Emergence (2012 paper)


Published at the EuroSTAR conference, November 2012, Amsterdam

Value-Inspired Testing: Renovating Risk-Based Testing, and Innovating with Emergence (v1.0)

Neil Thompson
NeilT@TiSCL.com @neilttweet @neiltskype
Thompson information Systems Consulting Ltd, www.TiSCL.com, +44 (0)7000 NeilTh (634584)
23 Oast House Crescent, Farnham, Surrey, GU9 0NP, England, UK

Abstract

Is testing "dead"? Some parts are declining, but evolution can inspire survival.

To renovate the use of risk: collate current variants, eg "Risk-Based", "Risk-Driven"; use a context-driven mix of principles; prioritise testing from high to low (not zero); consider value as benefits minus risks; remember risk applies throughout testing, from static testing through execution, bug-fixing and beyond; integrate risk into Value Flow ScoreCards to manage across complementary views of quality.

To innovate: consider evolution in nature: periods of ecosystems in stability, punctuated by innovative disturbances; as genes evolve in biology, "memes" evolve in thinking; testing's history suggests some specific memeplexes; natural innovation seems to emerge on a path between "excess order" and "excess chaos"; could testing evolve similarly? Try Johnson's "Where good ideas come from".

So: VIVVAT Value-Inspired Verification, Validation And Testing! Please join me in exploring our future.
Contents

Abstract
0. Introduction
1. Renovating the use of Risk in testing
   1.1 Current variants of Risk-Based Testing etc
   1.2 Context-driven mix of available principles
   1.3 Risk-Graded Testing
   1.4 Value-Graded Testing
   1.5 Value-Inspired Testing
   1.6 Value Flow ScoreCards
      Through the lifecycle
      Integrating risk
2. Innovating in testing, using Emergence concepts
   2.1 Evolution in Nature
      Biology
      Relationship with other sciences
   2.2 Evolution of Software Testing
      The view ahead
      The story so far
   2.3 Genes to Memes
   2.4 Memeplexes in the History of Testing
   2.5 Emergence between "Too Much Chaos" and "Too Much Order"
   2.6 Innovation and Ideas for Testing
3. VIVVAT Value-Inspired Verification, Validation And Testing
References and Acknowledgements
0. Introduction

The theme of this EuroSTAR 2012 conference is "Innovate & Renovate: Evolving Testing". In his call for submissions, programme chair Zeger van Hese included a quotation from William Edwards Deming: "Learning is not compulsory... neither is survival." This is presumably a veiled threat – if we don't learn, we may not survive. But is it already too late? Several speakers have recently alleged that testing is dead, or some very similar message:
- Tim Rosenblatt (Cloudspace blog, 22 Jun 2011): "Testing Is Dead – A Continuous Integration Story For Business People";
- James Whittaker (STARWest, 05 Oct 2011): "All That Testing Is Getting In The Way Of Quality";
- Alberto Savoia (Google Test Automation Conference, 26 Oct 2011): "Test Is Dead".

There may be others. But I suggest that at least some of these commentators seem to be talking mainly about "the testing phase", with an emphasis on functional testing, "independent" of the developers. They mean in particular purveyors of standard, manual testing, which is increasingly offshored or automated. No-one seems to think that performance, security or privacy testing is dead. No-one seems to be suggesting that developers have stopped testing and so should everyone else. It is more a question of who and how.

So in this paper, when I talk about testing, I mean all of testing.
I include not just dynamic testing (executing software) but various kinds of static testing, eg reviews; and not just functional testing but all the non-functional (or para-functional) types – and this list itself may evolve.

I consider what we can learn from the history of testing and its place in the "ecosystems" of IT products and projects. Testing has been called many things: an art, a craft, and more recently some people (including myself) have been trying to make it more of a science – even if that means it is a social science (as Cem Kaner argues).

I think of testing as "value flow management" – we should be facilitating / assisting / monitoring / measuring / improving / optimising (according to your taste, context and role) the flow of value all the way from ideas in people's heads (initial requirements) through to not only implemented but also service-managed, supported and maintained systems and services, in their human context. To do this, in today's environment of increasingly rapid, innovative and pervasive change, we do need to renovate and innovate. When holistic and evolving, testing will not die (and must not be allowed to die). I choose to focus on:
- renovating the increasingly fragmented and apparently-neglected subject of risk-based testing; and
- using analogies from science and evolution to inspire ideas for innovation in testing generally.
1. Renovating the use of Risk in testing

1.1 Current variants of Risk-Based Testing etc

The first step in renovation is to collate what variants of "Risk-Based Testing" (or related terms) are around, and how we arrived at this situation. The diagram below shows a simplified flow over time, from left to right.

The early books by Hetzel, Myers and Beizer all contained some notions of testing as depending on principles of risk, but this was mostly implicit. Then in the later 1980s and through the 1990s, basing testing on risk became explicit as a statement of theory. But you wait ages for guidance on how to practically do risk-based testing, then in 2002 three books came along at once!
- Paul Gerrard, drawing on the earlier work of James Bach and others, published Risk-Based E-Business Testing, the theme of which was imagining what could go wrong with a system, then designing tests to address those risks. I was co-author of that book.
- Craig & Jaskiel described in Systematic Software Testing a somewhat different view of risk-based testing, which prioritised software features and attributes according to risk (its current version is called risk-driven testing, and has no doubt evolved since then).
- Kaner, Bach & Pettichord published Lessons Learned in Software Testing, which included context-driven versions of both of the above variants, but distinguished them as risk-based test design and risk-based test management respectively.

Since then, I have seen a variety of approaches, published in books, papers or as proprietary methods. I meet many people who tell me they know what risk-based testing is, it's quite easy to do, and it's "not that stuff over there, that's not risk-based testing". I think these are all useful to some degree, but I believe they are all partial views (either focussing on the prioritisation side or on the risks-as-test-entities side), some seem to be too prescriptive / too simplistic / too complex, and I do not believe that risk-based testing is easy.
Not good risk-based testing, anyway.

The field seems to be fragmented, and it no longer seems to receive the attention it used to. Fashion has moved on to other subjects. Are some people just paying lip-service to risk-based testing? How many people are doing it well? How does it relate to, or merge into, safety-critical methods? In 2007 I integrated the two main aspects of risk-based testing into my Holistic Test Analysis & Design method, but that is only part of the story (and does not yet have tool support). I think it is time for a broad re-appraisal of the whole subject – away from one-size-fits-all, to be more inclusive of various approaches, more responsive to context.
1.2 Context-driven mix of available principles

I would like to see more cross-fertilisation and unification between the "upper and lower halves", sometimes called risk-based test management and risk-based test design. On some projects these are done by different people of course, but not always. And anyhow, the two halves should fit together. One way (and it is just one choice) is to mirror-image James Bach's Heuristic Test Strategy Model (HTSM), as illustrated below.

The lower half is borrowed straight from the HTSM, and the upper half is modified to show similar usage for prioritisation of work. I do not mean simply "do this first, then that..." – decisions need to be made on what to prioritise, and how. The message here is that we should be ready to mix and match methods and techniques from the variety available, depending on context factors.

1.3 Risk-Graded Testing

One thing I feel compromises the respectability of risk-based testing in some situations is the notion that, having prioritised things, we can set a cut-off threshold below which things are not tested. A better way, I believe, is to "grade" coverage and/or effort, from low (not zero) to high, according to the selected risk factors.
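The grading idea can be sketched in code. The scoring scheme, thresholds and feature names below are illustrative assumptions of mine, not from the paper; the point is simply that every item maps to a non-zero coverage grade rather than falling below a cut-off:

```python
# Illustrative sketch of risk-graded testing: every feature gets a
# non-zero test-effort grade, scaled by its risk score, rather than
# being dropped below a threshold. The scoring scheme is hypothetical.

def risk_score(probability, consequence):
    """Combine the two classic risk components (each rated 1-5 here)."""
    return probability * consequence

def coverage_grade(score, max_score=25):
    """Map a risk score onto a grade from 'low' (not zero) to 'high'."""
    ratio = score / max_score
    if ratio >= 0.6:
        return "high"
    elif ratio >= 0.3:
        return "medium"
    return "low"  # low, but never "none": everything gets some testing

# Hypothetical features with (probability, consequence) ratings
features = {
    "payment processing": (4, 5),
    "report formatting": (2, 2),
    "audit logging": (3, 4),
}

for name, (prob, cons) in features.items():
    print(f"{name}: {coverage_grade(risk_score(prob, cons))} coverage")
```

Under this sketch, "payment processing" grades high, "audit logging" medium, and "report formatting" low – but even the low-risk item retains some coverage.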
I think "Risk-Graded Testing" might be a better term here than Risk-Based Testing. One reason is that Risk-Based Testing aligns with the term Test Basis, often used to mean a document or other oracle against which tests are designed. Another reason is that it distances itself from cruder notions of prioritisation, and from cut-off thresholds.

1.4 Value-Graded Testing

Taking this a step further: we should grade testing coverage / effort not only by risk, but also by the varying benefits of the features being tested. There is a partial correlation, because features which have high benefits will also tend to have high business impact if they go wrong, but it is worth making the distinction because considering the benefits may generate specific test ideas and inform the selection of test techniques. Particularly in agile methods, if a feature is exhibiting serious bugs in testing and is not of critical benefit, it is more likely to be descoped from a release.

We may think of value in terms of expected benefits minus residual risk after an amount of testing.

1.5 Value-Inspired Testing

Risk is relevant at all levels of testing, but the risks differ by level. The diagram below illustrates several principles:
- all the way through the lifecycle, different risks accumulate;
- the quality information a test provides depends on comparison of software's behaviour with the test model, the development model (verification testing) and also real-world desired behaviour (validation testing).
Although this is shown in the format of a V-model, it is not necessarily advocating "the" V-model in its traditional sense. I argue that all lifecycles have some kind of levels of stakeholders & participants, levels of specifications / other oracles, and levels of integration of the developed system. Iterative lifecycles can be considered as repeatedly descending then ascending through some or all of these levels in various ways.

Looking at this in more detail: requirements are necessarily a simplification of the way the software will behave in use; no requirements can be perfect. When functional and non-functional specifications are written, there are risks of distorting / omitting requirements, or adding functionality that is not really wanted. And so on through design and coding – all of these are different risks with their own set of risk factors (each with their probability and consequence components). This chain (or rather, network) of risks corresponds to the various definitions of mistake, defect, fault, failure etc.

To manage these various risks, we need a variety of techniques. The traditional view is that the earlier we mitigate risks, the less the knock-on effect (diagram below), although in agile methods some more tactical risk management is used, eg making some decisions as late as possible, allowing technical debt to build, then refactoring at suitable times.

Looking more closely at validation: it includes all the decisions that cannot be made by simply "checking" behaviour against a specification:
- Even if good specifications exist, are they 100% up to date? Are they still what is wanted, or is a change request needed?
- No specification is perfectly detailed or specifies every possible thing which the software should do and should not do (expressible as risks), therefore some behaviour will be implicit / assumed, and judgement will be needed;
- in some contexts, traditional specifications may not exist at all;
- testers may therefore need "oracles" other than specifications – for example:
  o consistency with product / system purpose, history, image, claims, comparable products/systems etc;
  o familiar failure patterns.

So in summary, risk-related principles apply throughout testing, from reviews to test specification, through execution to retesting, regression testing, go-live and beyond.

1.6 Value Flow ScoreCards

Now, how can we manage risk throughout the system development lifecycle and throughout testing? I propose in this paper a framework to do this, but in order to get there, for a few moments let us take a step back from risk.

Through the lifecycle

In the introduction I suggested we think of testing as value flow management. One approach to this is to start with the concept of a balanced scorecard. On the left half of the diagram below is a version of Kaplan & Norton's original. On the right side is a modified version, tailored for software quality after a variety of authors.

The basic principle is that for each different view of quality, we may set a structure of objectives, measures, targets and initiatives. Kaplan & Norton's original purpose was "translating strategy into action". In IT project terms, we may ask:
- what are our objectives? (for example, we may want to adhere to a particular process standard, or achieve a certain degree of product quality, or a degree of customer satisfaction);
- by what measures will we gauge success – in colloquial terms, "what does good look like?";
- what targets shall we set for a particular stage, eg the next software release?
This could be in terms of bug frequencies and severities after go-live, but measures and targets need not be quantitative – for example, rubrics could be used for customer satisfaction surveys.
- Then, what initiatives shall we take to make this happen?

Four of the quality viewpoints may be thought of as applying to the current project; the fifth is about improvement, for future projects.
In the following diagram, I develop this structure to fit conveniently within the software lifecycle. First I add two more viewpoints, supplier and infrastructure. Then I arrange the viewpoints in a kind of "value flow unit".

To use this practically, the scorecard becomes a table of seven columns and four rows. There is a rough logical flow from left to right. In earlier papers I have outlined several applications in and around testing, but there is not space here to describe those.

In the following diagram (next page), I illustrate how the value flow items which can be defined for an individual team or role can be cascaded to control value flow through the whole lifecycle, both down and up the levels and from left to right (corresponding to static then dynamic testing).
Integrating risk

Now we are ready to integrate risk into the scorecard. Risks may be seen as threats to the success of the objectives for each view of quality, so we can insert a new row between the objectives and the way we measure, target and define the way forward. When we know the risks, we can build in appropriate management measures and tactics.

Next, let's look at different types of risk. Many authors distinguish:
- product risks, ie threats to the quality of software; from
- project risks, ie threats to the conduct of project activities.

Some authors also distinguish a third type, process risk, which is a kind of specialism of project risk connected with methodology.

The following diagram (next page) illustrates these, some examples, and relationships between the risks.
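Alongside the diagram, the scorecard-with-risk structure just described can be sketched as a data structure: one quality-viewpoint column holding objectives, a newly inserted risks row, then the measures, targets and initiatives that address them. The viewpoint name and example entries below are my own illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of one column of a Value Flow ScoreCard:
# a "risks" row sits between the objectives and the measures /
# targets / initiatives. Viewpoint names and entries are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ViewpointColumn:
    viewpoint: str                                   # one of the seven quality viewpoints
    objectives: list = field(default_factory=list)
    risks: list = field(default_factory=list)        # threats to the objectives
    measures: list = field(default_factory=list)     # "what does good look like?"
    targets: list = field(default_factory=list)      # eg for the next release
    initiatives: list = field(default_factory=list)  # actions to make it happen

col = ViewpointColumn(viewpoint="product quality")
col.objectives.append("achieve the agreed degree of product quality")
col.risks.append("product risk: critical defects reach production")
col.measures.append("bug frequency and severity after go-live")
col.initiatives.append("risk-graded regression testing before each release")
```

The full scorecard would then be seven such columns, read roughly left to right through the lifecycle.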
Finally, we can now be more specific about the risks in the scorecard – because there is a strong correlation between the quality viewpoints and the risk types.

So to summarise up to now: we have arrived at a structure for setting out, balancing and measuring the full range of quality viewpoints, and for associating with them the risks which threaten them. This is a complete, integrated quality and risk management framework. To continue the renovation, future work should now build together, using this framework:
- a more holistic, context-driven approach to risk, putting together the "two halves" of test design and test management and refining guidance on how to mix and match methods and techniques from the fragmented variety on offer;
- firming up into practical advice how to balance benefits against risks; and
- clarifying how risk management activities can be pragmatically controlled throughout the software lifecycle and throughout the testing process.

The challenge is to achieve an appropriate balance between a robust approach which is too complex, and an achievable approach which is too simplistic to be useful; this balance varies of course with context.
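One ingredient of such "benefits versus risks" advice could be section 1.4's notion of value as expected benefits minus residual risk after testing. Here is a hypothetical sketch of that calculation; the linear mitigation model and the numbers are my assumptions, not the paper's:

```python
# Hypothetical sketch of value-graded prioritisation: the expected value
# of a feature = its expected benefit minus the risk exposure remaining
# after a given amount of testing. All figures are illustrative.

def residual_risk(initial_exposure, mitigation):
    """Risk exposure left after testing mitigates some fraction (0..1)."""
    return initial_exposure * (1.0 - mitigation)

def expected_value(benefit, initial_exposure, mitigation):
    """Value = expected benefits minus residual risk after testing."""
    return benefit - residual_risk(initial_exposure, mitigation)

# A high-benefit feature may still carry positive value with moderate
# residual risk, while a low-benefit, buggy feature may be descoped.
print(expected_value(benefit=100, initial_exposure=40, mitigation=0.75))  # 90.0
print(expected_value(benefit=10, initial_exposure=40, mitigation=0.25))   # -20.0
```

In this toy model the second feature has negative expected value, which matches the paper's observation that a seriously buggy, non-critical feature is a descoping candidate.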
Now to move towards the second half of this paper, which focuses on the rightmost column of the Value Flow ScoreCard, ie improvement for future projects.

The above diagram illustrates the relationship between the Value Flow ScoreCard and a "toolbox" structure I developed recently to fit around it, to embrace scientific thinking and a structure for thinking about innovation.

2. Innovating in testing, using Emergence concepts

This toolbox structure is not a primary focus of this paper, but just to position the risk-renovation and testing-innovation parts of this paper within that structure for reference:

The second part of this paper moves on to consider innovation in testing, via analogies with how innovation occurs in nature.
2.1 Evolution in Nature

The outer layer of the toolbox consists of this triangle:

There is evidence that innovation in nature includes a phenomenon called emergence, which is associated with the concepts of systems thinking and complexity theory. One way of looking at emergence is to see how different sciences build progressively on top of each other, according to scale.

When human society is established, the resulting further innovation no longer depends on scale but becomes explosive in its information content.

The explosion of human innovation is shown in more detail in the diagram on the next page (which also takes the opportunity to invert the image to a more satisfying view).
The reference to Kurzweil epochs may not be appreciated by all readers. This is a rather extreme view of how explosive human innovation may continue in the surprisingly near future. Many people are very sceptical of these predictions, but I would argue that, bearing in mind the effects of Moore's Law and the exponential innovation we have seen in recent years, even if progress is not as fast as Kurzweil expects, software is headed for some big new territory, and testing should be ready to boldly go there.

Biology

Leaving aside the particular technicalities of physics and chemistry, the most obvious part of the evolutionary saga is the biological.

A way of appreciating evolution (admittedly not shared by everyone) is to consider it in two related dimensions:
- over time, diversity has increased (though not regularly, as we will see); and
- also, broadly, the sophistication of organisms has increased (with humankind being a spectacular recent example).

This concept is illustrated in the following diagram (next page).
But it seems that evolution has not been smooth. Instead, there seem to be long periods of relative stability, interrupted by sudden upheavals such as mass extinctions or explosions of new species.

It is outside the scope of this paper to go into details, but there are examples in other sciences (eg physics, chemistry) of sudden emergences, eg those transformations known as phase changes.

The diagram on the next page illustrates this idea. The point of mentioning this in a paper about software testing is that many people (including myself) see this kind of behaviour as a universal phenomenon. We could, and maybe should, learn from it.
Relationship with other sciences

The theory of such sudden advances was likened by Per Bak to the avalanches that occur unpredictably when a pile of sand is continually added to from above – suddenly a stable or metastable state gives way to widespread change.

2.2 Evolution of Software Testing

The view ahead

Again you may ask: what has this to do with software testing? Well, if you accept the idea of software testing as a social science, you should be aware that the social sciences (and much of human history) are, like other sciences, subject to punctuated equilibria. Another way of looking at the (Per Bak) avalanches is in terms of Gladwell's "tipping points".

Software testing has admittedly failed to keep up with advances in IT generally, and there are various ways out of this situation. It could, as some have claimed, "die" – but what would that do for the quality of life of all those people who depend on software? I would prefer to see us rise to the challenge, and help make the world not only a more complex place but really a better place.
As IT has innovated explosively, it is worth the testing discipline taking a look ahead. For example, are we ready to test artificial intelligence? (Admittedly some lower forms of AI have been around and in use for a while, but when did you last hear about them at a testing conference?)

The story so far

The table below represents my extrapolation of Gelperin & Hetzel's historical analysis, plus my recent interpretation of the "schools of software testing" situation.

But what can my proposed analogies with science and nature contribute to this picture?
2.3 Genes to Memes

One way of understanding the explosive transition from slow biological evolution to rapid human cultural evolution is to consider replicating units of human knowledge and habits as analogous to the genes of DNA. These cultural units were named "memes" by Richard Dawkins, and many authors since have argued about the accuracy and usefulness (or not) of this analogy. The illustration immediately below is of genes as media of biological evolution.

The next diagram illustrates the analogy with memes. Memes are not so well-defined, but like genes they replicate (though not as precisely) and they mutate (more often and more extravagantly?).

2.4 Memeplexes in the History of Testing

I am not the first author to claim a role for memes in software testing; the idea is already widespread on the internet. But in the meme literature there is a concept termed a "memeplex" – a collection of related and readily-coexisting memes. It seems to me that memeplexes are a useful concept for understanding software development ecosystems and schools of software testing.

Below (next page) are two examples of what might be called software testing memeplexes.
The first is an old attempt by myself to represent what was then known as "software testing best practice".

The second is an entirely different representation (though also by myself) – this attempts to represent the antithesis of software testing "best practices", namely a context-driven thought structure.

So, do memeplexes really help in understanding the evolution of software testing overall? I think they do, but even more illuminating, I believe, are the ideas of platforms, cranes and tipping points. A memeplex codifies an ecosystem which has become established on a platform. The driving forces are arguably:
- what are the cranes that get us to a new level, and the tipping points that make that lift respectable and respected?
- is this a single stream of evolution or are there multiple streams?

In the following diagram I take the Gelperin-Hetzel-based view of software testing history and attempt to express it in the language of platforms, cranes and tipping points.
And another worry... here is a different view of the history (so far) of software testing. Over the most recent few years, has innovation really almost stopped, or is there another explanation?

The diagram below (next page) shows a different view of testing innovation: cause-effect-chained rather than mere reportage. The bullet points on the right of the picture are closely related to the material I am about to present regarding innovation. But how do those factors and aids really operate?
2.5 Emergence between "Too Much Chaos" and "Too Much Order"

Now here is a new perspective on the initial ideas about evolution and emergence I expressed above. There are some suggestions from the scientific literature that life evolves best on "the edge of chaos".

2.6 Innovation and Ideas for Testing

A way of looking at testing (bearing in mind things I have said above) is to consider that it is part of an ecosystem with development, but it lags slightly behind (or far behind, depending on your experience / opinion).

Development continually carves a path towards the "chaotic" end of the spectrum, because of market forces and the typical personality mixes and cultures of programming groups. Conversely, testing tries to keep in step but is drawn towards the "ordered" end of the spectrum by its typical tester psyche and the conservatism and risk-aversion of its management.

I have tried to project the suspected tipping points I described above (psychology to method, method to art, art to engineering etc) onto a swerving path between too much chaos and too much order.
There is communication between development and testing/quality disciplines, though development is in the lead. In the platforms, cranes and tipping points illustration a few pages above, I questioned whether anything was wrong with that picture. Hmm... I think there may be. My perception is that there have been essentially "two cultures" at work here so far, not understanding each other well enough (see CP Snow, 1956, 1959 etc). The idea of "schools" of software testing was introduced and publicised as part of the foundation of the Context-Driven School.

I suggest that, rather like testing lagging behind development, traditional testing has been lagging behind context-driven. But I think that is at least partly due to the client business communities in finance and other traditional markets having lagged behind the more modern business sectors. The main point, however, is that the two factions do not communicate enough – more often they do not understand each other, agree to differ, or argue violently and non-productively.

So, have I any suggestions to address this concern? Well, maybe...

Author Steven Johnson tells numerous stories of creativity and other innovation in some areas of commonality he has identified (see diagram next page).
Johnson’s innovations are expressed as seven themes, introduced by the "reef-city-web" concepts and wrapped up by a survey of the most significant human inventions in recent centuries. The next diagram shows the specific facilitators that aid innovation from platform to platform.
The conclusion of the book is that over recent centuries the pattern of innovative environments has changed markedly (as illustrated below). So, what are the lessons from all this for software testing? The table below gives some examples.
3. VIVVAT: Value-Inspired Verification, Validation And Testing

To renovate the Latin for "long may it live" – VIVVAT, a Value-Inspired evolution of Verification, Validation and Testing. We still need all three: if we go to the trouble of writing specifications and developing them from higher-level documents, we need verification. And in this increasingly agile world, we need validation more and more. Testing suffers from a "two cultures" difficulty, but I hope that science can turn out to be a unifying factor enabling us all to work most effectively in our various contexts.
References and Acknowledgements

The sources below have been the primary inputs to this work. This is not a full bibliography, and may be expanded in future versions of this paper.

I am particularly grateful to colleagues with whom some of these ideas have been developed, both within and outside client project work – in particular:
- Chris Comey of Testing Solutions Group, whose structure for risk-based testing made a useful and complementary counterpart to the method which Paul Gerrard and I published in the 2002 book Risk-Based E-Business Testing.
- The Software Testing Retreat – a small, informal, semi-regular gathering started in the UK by EuroSTAR regulars. In recent years this has grown to include some international friends.
- The original stimulus for the Value Flow ScoreCards idea came from Mike Smith, who was interested in testing's role in IT projects' "governance", and the governance of testing itself. Isabel Evans was a major inspiration for my subsequent scorecard ideas, which integrated well with her views of quality. My joint presentation with Mike Smith, "Holistic Test Analysis & Design" at STARWest 2007, laid the foundations for the ScoreCard idea.
- Stuart Reid has published material on Risk-Based Testing and on innovation in software testing which contains some messages similar to those in this paper, and to which I have referred:
  o The Five Major Challenges to Risk-Based Testing; and
  o Lines of Innovation in Software Testing.
- Scott Barber blogged some persuasive material in response to the "testing is dead" blogs, and now has a scheme of mission-driven measurements which are aligned to value and risk (similar themes to this paper).
- Thanks also to the Association for Software Testing, its members, and the authors and teachers of the Black Box Software Testing series of courses, with whom I have had many fruitful conversations.
These have given me a deeper insight into the principles and practices of the Context-Driven school of testing, and how those may be used (where context demands) to interpret more thoughtfully, and apply more selectively, testing methodologies of various degrees of formality and ceremony.

EuroSTAR 2012 T6 Neil Thompson Value-Inspired Testing v1_0.docx
