Using the Machine to predict Testability

928 views


How can we use machine learning techniques to predict software testability?

Published in: Technology

  1. Using the Machine to predict Testability
     Miguel Lopez - ALGORISMI
  2. Agenda
     Who are we?
     Goal of the Presentation
     Definition of Testability
     Some Machine Learning Concepts
     Predict Testability
     Conclusion
     Questions / Answers
  3. Who are we?
     Algorismi is the Innovative Software Quality Company.
     Funded by IRSIB for R&D projects.
     Experienced team: operations, development, and deployment.
     Innovation and efficiency (cost / time).
     Our mission is to help you to:
     • Deliver Better Software
     • Achieve Sustainable Growth
     • Be prepared to change
     • By Innovating
  7. Some Aphorisms Displayed on our Walls
     “The world is changing very fast. Big will not beat small anymore. It will be the fast beating the slow.”
     Rupert Murdoch, Chairman and CEO, News Corporation
     Change is inevitable; stability and security are a myth.
     Be prepared to change, to anticipate, to provoke, to participate... but mostly, do not avoid the subject.
  8. Software Quality Offer
     X-TRAX: our test management tool.
     Scale: our code auditing tool.
     SQA Service
  9. Goal of the Presentation
     Show how to use machine learning algorithms in test management.
     Based on a toy example, we show how to proceed.
     A very small example of what we do in our R&D team.
     We want you to try this in your company.
  10. Testability
      Testability as a set of factors.
      ISO defines testability as “attributes of software that bear on the effort needed to validate the software product” [ISO/IEC 9126].
      Various factors can contribute to testability (obviously).
      Which are the factors?
  11. Testability Fish-Bone by [Binder]
  12. Testability Factors [Binder]
      Documentation: With regard to testing, requirements and specifications are of prime importance.
      Implementation: The implementation is the target of all testing, and thus the extent to which the implementation allows itself to be tested is a key factor of the testing effort.
  13. Testability Factors [Binder]
      Test Suite: Factors of the test suite itself also determine the effort required to test. Desirable features of test suites are correctness, automated execution, and reuse of test cases.
      Test Tools: The presence of appropriate test tools can alleviate many problems that originate in other parts of the ‘fish bone’ figure.
      Process Capability: The organizational structure, staff, and resources supporting a certain activity are typically referred to collectively as a (business) process. Properties of the testing process obviously have great influence on the effort required to perform testing.
  14. Focus on the Source Code Factor
  15. Some Heuristics for Testability
      Heuristic #1: Reuse
      Favor modularity over reuse. It is better to have code duplicates than to delay testing of a component because required changes to a superclass or library class it depends on are pending.
      Heuristic: Give higher priority to the modularity of a system than to the reuse of components.
  16. Some Heuristics for Testability
      Heuristic #2: Loose Coupling
      Loosely coupled software is software in which each component has, or makes use of, little or no knowledge of the definitions of other separate components.
      Heuristic: Reduce the number of used classes.
  17. Some Heuristics for Testability
      Heuristics and Object-Oriented Metrics
  18. Some Heuristics for Testability
      Reuse: Number of Interfaces
      • An interface in the Java programming language is an abstract type.
      • Interfaces are declared using the interface keyword, and may only contain method signatures and constant declarations.
      • Interfaces cannot be instantiated. A class that implements an interface must implement all of the methods described in the interface.
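To make those bullet points concrete, here is a minimal Java sketch; the `Shape` and `Circle` names are invented for illustration:

```java
// A Java interface: an abstract type containing only method
// signatures and constant declarations.
interface Shape {
    double PI = 3.141592653589793; // constant (implicitly public static final)
    double area();                 // method signature (implicitly public abstract)
}

// A class implementing the interface must implement ALL of its methods.
class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override
    public double area() { return PI * radius * radius; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        // Shape s = new Shape(); // would not compile: interfaces cannot be instantiated
        Shape s = new Circle(2.0);
        System.out.println(s.area()); // PI * 2 * 2
    }
}
```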
  21. Some Heuristics for Testability
      Reuse: Abstractness
      Abstractness = Na / Nc
      where
      Na = number of abstract classes in a package
      Nc = total number of classes (abstract and concrete) in the package
      • Abstractness = 0 means a completely concrete package. Easy to test.
      • Abstractness = 1 means a completely abstract package.
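The ratio can be computed directly. This sketch assumes Nc is the total number of classes in the package, which keeps the metric in the 0..1 range described above:

```java
public class AbstractnessMetric {
    /**
     * Abstractness A = Na / Nc, where Na is the number of abstract
     * classes (and interfaces) in a package and Nc the total number
     * of classes. 0 = completely concrete, 1 = completely abstract.
     */
    static double abstractness(int abstractClasses, int totalClasses) {
        if (totalClasses == 0) return 0.0; // empty package: treat as concrete
        return (double) abstractClasses / totalClasses;
    }

    public static void main(String[] args) {
        System.out.println(abstractness(0, 4)); // completely concrete, easy to test
        System.out.println(abstractness(1, 4)); // mostly concrete
        System.out.println(abstractness(4, 4)); // completely abstract
    }
}
```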
  24. Some Heuristics for Testability
      Loose Coupling: Afferent Coupling between Packages
      • Afferent coupling between packages (Ca) measures the total number of external classes coupled to classes of a package due to incoming coupling (coupling from classes external to the package; uses the CBO definition of coupling).
      • Each class counts only once. Zero if the package does not contain any classes or if external classes do not use the package's classes.
  26. Some Heuristics for Testability
      Loose Coupling: Efferent Coupling between Packages
      • Efferent coupling between packages (Ce) measures the total number of external classes coupled to classes of a package due to outgoing coupling (coupling to classes external to the package; uses the CBO definition of coupling).
      • Each class counts only once. Zero if the package does not contain any classes or if external classes are not used by the package's classes.
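The two package-coupling counts can be sketched from a class-level dependency map. The data structure and names below are invented for illustration; real tools derive the dependencies from bytecode or source analysis:

```java
import java.util.*;

public class PackageCoupling {
    /** Afferent coupling Ca: number of distinct external classes that
     *  depend on at least one class of the package. */
    static int afferent(Map<String, Set<String>> deps, Set<String> pkg) {
        int count = 0;
        for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
            if (pkg.contains(e.getKey())) continue; // only external classes
            for (String target : e.getValue()) {
                if (pkg.contains(target)) { count++; break; } // each class counts once
            }
        }
        return count;
    }

    /** Efferent coupling Ce: number of distinct external classes that
     *  classes of the package depend on. */
    static int efferent(Map<String, Set<String>> deps, Set<String> pkg) {
        Set<String> external = new HashSet<>();
        for (String c : pkg) {
            for (String target : deps.getOrDefault(c, Set.of())) {
                if (!pkg.contains(target)) external.add(target); // dedup across package
            }
        }
        return external.size();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of(
            "a.Foo", Set.of("b.Util"),
            "a.Bar", Set.of("b.Util", "c.Log"),
            "x.Client", Set.of("a.Foo"));
        Set<String> pkgA = Set.of("a.Foo", "a.Bar");
        System.out.println(afferent(deps, pkgA)); // x.Client depends on the package
        System.out.println(efferent(deps, pkgA)); // package depends on b.Util, c.Log
    }
}
```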
  28. Metric for Testability
      • We will use a qualitative metric for testability: Testability Level.
      • Ordinal scale:
        High: highly testable (easy to test)
        Medium: normal effort to test
        Low: lowly testable (difficult to test)
      • Testability is related to unit testing effort.
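As an illustration, the ordinal scale can be sketched as a Java enum. The rule of thumb and the coupling cut-offs (5 and 15) below are invented purely for this sketch and are not from the talk:

```java
public class TestabilityLevel {
    enum Level { HIGH, MEDIUM, LOW } // the ordinal scale from the slide

    // Hypothetical rule of thumb: higher efferent coupling means more
    // dependencies to stub out when unit testing. Thresholds invented.
    static Level fromEfferentCoupling(int ce) {
        if (ce <= 5)  return Level.HIGH;   // easy to test
        if (ce <= 15) return Level.MEDIUM; // normal effort
        return Level.LOW;                  // difficult to test
    }

    public static void main(String[] args) {
        System.out.println(fromEfferentCoupling(3));
        System.out.println(fromEfferentCoupling(12));
        System.out.println(fromEfferentCoupling(40));
    }
}
```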
  34. What is Machine Learning?
      Machine learning, a branch of artificial intelligence, is a scientific discipline concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data.
      Data can be seen as examples that illustrate relations between observed variables.
      A major focus of machine learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data.
  35. What is Machine Learning?
      Many approaches exist in the machine learning world:
      neural networks, Bayesian networks, clustering, ...
  36. Some ML Approaches: Decision Tree
      Decision tree learning uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value.
  37. Some ML Approaches: Association Rule
      Association rule learning is a method for discovering interesting relations between variables in large databases.
      A typical and widely used example of association rule mining is Market Basket Analysis.
      Example: the association rule “If A and B are purchased, then C is purchased on the same trip.”
  38. Some ML Approaches: Neural Network
      An artificial neural network (ANN), usually called a “neural network” (NN), is a mathematical or computational model that tries to simulate the structural and/or functional aspects of biological neural networks.
      It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation.
      Neural networks are usually used to model complex relationships between inputs and outputs or to find patterns in data.
      Example: the Milan Lab (AC Milan soccer club) predicts injuries of soccer players.
  39. Some ML Approaches: Clustering
      Cluster analysis, or clustering, is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning and a common technique for statistical data analysis.
  40. Some ML Approaches: Bayesian Network
      A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG).
      For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms.
  41. Machine Learning & Statistics
      Statistics: focus on understanding data in terms of models.
      Statistics: interpretability, hypothesis testing.
      Machine Learning: greater focus on prediction.
      Machine Learning: focus on the analysis of learning algorithms, not just large datasets.
  42. Simple Example: Weather
      In the weather example, we want to predict whether the weather conditions allow playing outside.
      Based on 4 variables (outlook, temperature, humidity, windy), we ask the following question:
      Can the children play in the garden?
  43. Simple Example: Weather
  44. WEKA – ML Software
      • For the machine learning part, we use the software Weka.
      • Weka is an open source project of the Machine Learning Group at the University of Waikato.
      • They have incorporated several standard ML techniques into a software “workbench” called WEKA, for Waikato Environment for Knowledge Analysis.
      • But what is a weka?
  48. WEKA – ML Software
      • Found only on the islands of New Zealand, the weka is an endangered flightless bird.
  49. Simple Example: Weather
      Two-step process:
      1. Train the machine: build the decision tree from a data set.
      2. Predict on new data: load the decision tree and query it with a new data set.
      Question: can the children play outside with the following weather conditions?
      Outlook: rainy
      Temperature: 59 °F
      Humidity: 89
      Windy: false
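As an illustration of the prediction step, the tree that Weka's J48 classifier typically learns on the classic weather dataset can be hard-coded. This is a hand-written sketch, not actual Weka output; note the learned tree does not use temperature at all:

```java
public class WeatherTree {
    // Hand-coded version of the decision tree J48 typically learns
    // on the classic "weather" dataset.
    static boolean play(String outlook, double temperature,
                        double humidity, boolean windy) {
        switch (outlook) {
            case "sunny":    return humidity <= 75; // dry enough to play
            case "overcast": return true;           // always play
            case "rainy":    return !windy;         // play unless windy
            default: throw new IllegalArgumentException("unknown outlook: " + outlook);
        }
    }

    public static void main(String[] args) {
        // The question from the slide: rainy, 59 °F, humidity 89, not windy.
        System.out.println(play("rainy", 59, 89, false));
    }
}
```

With these inputs the tree follows the rainy branch, and since it is not windy, the answer is yes.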
  50. Simple Example: Weather
      See the Weka demo.
  51. Metrics for Testability
  52. Metrics for Testability
      • We will proceed the same way as in the weather example.
      • The OO metrics are the attributes used to predict (playing the role of outlook, temperature, humidity, and windy).
      • The attribute to predict is testability (playing the role of “play outside”).
  57. Data for Testability
      • We analyzed 9 packages of a Java application (Scale).
      • For each package, we computed the different metrics with the Eclipse plugin Metrics:
        Number of Interfaces
        Abstractness
        Afferent Coupling
        Efferent Coupling
      • For each package, we assessed the testability level of the package.
  64. Data for Testability: the ARFF Format
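An ARFF file for this toy dataset might look like the sketch assembled below. The attribute names follow the four metrics listed on the previous slide, but the data rows are invented placeholders, not the talk's actual measurements:

```java
public class TestabilityArff {
    // Builds a minimal ARFF document (Weka's text-based data format):
    // a relation name, typed attribute declarations, then data rows.
    static String arff() {
        return String.join("\n",
            "@relation testability",
            "",
            "@attribute interfaces numeric",
            "@attribute abstractness numeric",
            "@attribute afferent_coupling numeric",
            "@attribute efferent_coupling numeric",
            "@attribute testability {high, medium, low}",
            "",
            "@data",
            "% the rows below are invented placeholders",
            "2, 0.25, 3, 4, high",
            "0, 0.00, 12, 18, low",
            "1, 0.10, 6, 9, medium");
    }

    public static void main(String[] args) {
        System.out.println(arff());
    }
}
```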
  65. Simplistic Model
      • The metrics model is too simplistic; other factors, such as inheritance and cyclomatic complexity, should be included.
      • The whole testability fish-bone must be measured.
      • Qualitative & quantitative measures.
  71. Automate the ML Process
      • The measurement process must be automated.
      • Many metrics tools can be automated.
      • The learning & prediction processes must be automated (e.g. via the WEKA API).
  75. Further Information
      • miguel.lopez@algorismi.com
      • www.algorismi.com
      • http://twitter.com/x_trax
  78. Bibliography
      [Binder] R. Binder. Design for Testability in Object-Oriented Systems. Communications of the ACM, 37(9):87–101, 1994.
      M. Bruntink and A. van Deursen. Predicting Class Testability using Object-Oriented Metrics.
      I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques.
