Data What Type Of Data Do You Have V2.1

Data for Statistics - A discussion about Data Types not found in the CMMI


1. DATA
2. Data
  • Data – input for analysis and interpretation.
  • Data are generally collected as a basis for action.
  • You must always use some method of analysis to extract and interpret the information that lies in the data.
  • The type of data that has been collected will determine the type of statistics or analysis that can be performed.
  • Making sense of the data is a process in itself.
  • Always provide a “context” for data – data have no meaning apart from their context.
  • Data should always be presented in a way that preserves the evidence in the data for all the predictions that might be made from them.
3. Data - 2
  • Data should be completely and fully described:
    • Who collected the data?
    • How were the data collected?
    • When were the data collected?
    • Where were the data collected?
    • What do these values represent?
    • If the data are computed values, how were the values computed from the raw inputs?
4. Data - 3
  • Variation exists in all data and consists of both noise (random or common cause variation) and signal (nonrandom or special cause variation).
  • Without formal and standardized approaches for analyzing data, you may have difficulty interpreting and using your measurement results.
  • When you interpret and act on measurement results, you are presuming that the measurements represent reality.
5. Data - 4
  • To use data safely, you must have simple and effective methods not only for detecting signals that are surrounded by noise, but also for recognizing and dealing with normal process variation when there are no signals present.
  • Drawing conclusions and predictions from data depends not only on using appropriate analytical methods and tools, but also on understanding the underlying nature of the data and the appropriateness of assumptions about the conditions and environments in which the data were obtained.
6. Data Definitions
  • Categorical vs. quantitative variables – variables can be classified as categorical (aka qualitative) or quantitative (aka numerical).
  • Categorical – categorical variables take on values that are names or labels. The color of a ball (e.g., red, green, blue) or the breed of a dog (e.g., collie, shepherd, terrier) would be examples of categorical variables.
  • Quantitative – quantitative variables are numerical; they represent a measurable quantity. For example, when we speak of the population of a city, we are talking about the number of people in the city – a measurable attribute of the city. Therefore, population would be a quantitative variable.
7. Data Definitions - 2
  • Discrete vs. continuous variables – quantitative variables can be further classified as discrete or continuous.
  • If a variable can take on any value between two specified values, it is called a continuous variable; otherwise, it is called a discrete variable.
  • Examples to clarify the difference:
    • Suppose the fire department mandates that all fire fighters must weigh between 150 and 250 pounds. The weight of a fire fighter would be an example of a continuous variable, since a fire fighter's weight could take on any value between 150 and 250 pounds.
    • Suppose we flip a coin and count the number of heads. The number of heads could be any integer value between 0 and plus infinity. However, it could not be just any number in that range – we could not, for example, get 2.5 heads. Therefore, the number of heads must be a discrete variable.
8. Attributes Data vs. Variables Data
9. Variables Data
  • Variables data are measured and plotted on a continuous scale.
  • With variables data, an actual numeric estimate is derived for one or more characteristics of the population being sampled, such as:
    • Time
    • Temperature
    • Length
    • Weight
    • Height
    • Volume
    • Voltage
    • Horsepower
    • Torque
    • Speed
    • Cost
10. Variables Data - 2
  • In software, examples of variables data include:
    • Effort expended – number of hours, days, weeks, years, etc., that have been expended by a workforce member on an identified topic.
    • Years of experience – total number of years of experience per category.
    • Memory utilization – % of total memory available.
    • CPU utilization – % of CPU used at any given moment in time.
    • Cost of rework – dollars-and-cents calculation of the rework based on the effort put forth by anyone involved in finding and fixing reported problems.
11. “Counts” Could Be Treated as Variables Data
  • There are many situations where “counts” get used as measures of size:
    • Total number of requirements
    • Total lines of code
    • Total bubbles in a data-flow diagram
    • Customer sites
    • Change requests received
    • Total people assigned to a project
  • When we count these things, we are counting all the entities in a population, not just the occurrence of entities with specific attributes.
  • These should always be treated as variables data even though they are instances of discrete counts.
12. Attributes Data
  • When working with attributes data, the focus is on learning about one or more specific non-numerical characteristics of the population being sampled.
  • When attributes data are used for direct comparisons, they must be based on consistent “areas of opportunity” if the comparisons are to be meaningful:
    • If the number of defects that are likely to be observed depends on the size (lines of code) of a module or component, all sizes must be nearly equal.
    • If the probabilities associated with defect discovery depend on the time spent inspecting or testing, the elapsed time spent must be nearly equal.
13. Attributes Data - 2
  • In general, when the areas of opportunity for observing a specific event are not equal or nearly so, the chances of observing the event will differ across the observations.
  • We must then normalize (convert to rates) by dividing each count by its area of opportunity before valid comparisons can be made.
  • Conditions that make us willing to assume constant areas of opportunity are less common in software environments.
  • Normalization is almost always needed for software!
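The normalization step above can be sketched in a few lines of Python. The module names, defect counts, and sizes below are invented for illustration; the point is only that dividing each count by its area of opportunity (here, KLOC) changes which item looks worst:

```python
# Normalize raw defect counts to rates (defects per KLOC) so that
# modules with unequal areas of opportunity can be compared validly.
# Module names and counts are illustrative, not from the slides.
modules = [
    {"name": "parser", "defects": 14, "kloc": 3.5},
    {"name": "ui", "defects": 6, "kloc": 1.2},
    {"name": "db", "defects": 9, "kloc": 4.5},
]

for m in modules:
    m["rate"] = m["defects"] / m["kloc"]  # defects per KLOC

# Raw counts suggest "parser" is worst (14 defects), but the
# normalized rates show "ui" has the highest defect density.
worst = max(modules, key=lambda m: m["rate"])
```

Comparing the raw counts would have pointed at the largest module, not the densest one, which is exactly the trap the slide warns about.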
14. Attributes Data - 3
  • Example:
    • If defects are being counted and the size of an item inspected influences the number of defects found, some measure of item size will also be needed to convert defect counts to relative rates that can be compared in meaningful ways (e.g., defects per thousand lines of code).
    • If variations in the amount of time spent inspecting or testing can influence the number of defects found, these times should be clearly defined and measured as well.
15. Attributes Data - 4
  • One of the keys to making effective use of attributes data lies in preserving the ordering of each count in space and time.
  • Sequence information (the order in time or space in which the data are collected) is almost always needed to correctly interpret counts of attributes.
  • Make the counts specific – make sure there is an operational definition (a clear set of rules and procedures) for recognizing an attribute or entity, if what gets counted is to be what the user of the data expects it to be.
16. Attributes Data - 5
  • Attributes data are counted and plotted as discrete events:
    • Shipping errors
    • Percentage waste
    • Number of defects found
    • Number of defective items
    • Number of source statements of a given type
    • Number of lines of comments in a module of n lines
    • Number of people with certain skills on a project
    • Percentage of projects using formal inspections
    • Team size
    • Elapsed time between milestones
    • Staff hours logged per task
    • Backlog
    • Number of priority-one customer complaints
    • Percentage of non-conforming products in the output of an activity or a process
17. The Key to Classifying Data
  • The key to classifying data as attributes data or variables data depends not so much on whether the data are discrete or continuous, but on how they are collected and used.
  • The total number of defects found is often used as a measure of the amount of rework or retesting to be performed:
    • It is viewed as a measure of size and treated as variables data.
    • It is normally used as a count based on attributes.
  • The method of analysis you choose for any data will depend on:
    • The questions you are asking
    • The data distribution model you have in mind
    • The assumptions you are willing to make with respect to the nature of the data (Page 79)
18. Data Type Classifications
  • Discrete
  • Continuous
19. Distributional Models – Relationship to Chart Types
  • Each type of chart is related to a set of assumptions (a distributional model) that must hold for that type of chart to be valid.
  • There are six types of charts for attributes data:
    • np
    • p
    • c
    • u
    • XmR for counts
    • XmR for rates
20. Distributional Models – Relationship to Chart Types - 2
  • XmR charts have an advantage over np, p, c, and u charts in that they require fewer and less stringent assumptions:
    • They are easier to plot and use.
    • They have wide applicability.
    • They are recommended by many quality-control professionals.
  • When the assumptions of the distributional model are met, however, the more specialized np, p, c, and u charts can give better bounds for control limits and can offer advantages.
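As a sketch of why XmR charts are simple to use: the natural process limits come from just the mean of the individual values and the average moving range, using the standard SPC constants 2.66 (individuals limits) and 3.268 (upper range limit). The weekly counts below are hypothetical:

```python
def xmr_limits(values):
    """Compute natural process limits for an XmR chart.

    Individuals limits: mean ± 2.66 * average moving range.
    Upper range limit: 3.268 * average moving range.
    """
    n = len(values)
    mean = sum(values) / n
    # Moving ranges: absolute differences between successive values.
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "center": mean,
        "lcl": mean - 2.66 * mr_bar,
        "ucl": mean + 2.66 * mr_bar,
        "mr_ucl": 3.268 * mr_bar,
    }

# Illustrative data: weekly counts of change requests received.
counts = [12, 15, 11, 14, 18, 13, 16, 12]
limits = xmr_limits(counts)
```

A point outside `lcl`/`ucl`, or a moving range above `mr_ucl`, would then be treated as a signal rather than noise.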
21. Distributional Models – Relationship to Chart Types - 3
  • np chart – an np chart is used when the count data are binomially distributed and all samples have equal areas of opportunity.
    • These conditions occur in manufacturing settings – when there is 100% inspection of lots of size n (n is constant) and the number of defective units in each lot is recorded.
  • p chart – a p chart is used when the data are binomially distributed but the areas of opportunity vary from sample to sample.
    • A p chart could be appropriate if the lot size n were to change from lot to lot.
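A minimal sketch of the p chart computation described above, using the textbook binomial limits p̄ ± 3·sqrt(p̄(1 − p̄)/n) recomputed for each sample size, since the areas of opportunity vary. The lot sizes and defective counts are illustrative:

```python
import math

def p_chart_limits(defectives, sample_sizes):
    """Per-sample control limits for a p chart (binomial model).

    p_bar is the overall fraction defective; each sample gets its
    own limits p_bar ± 3 * sqrt(p_bar * (1 - p_bar) / n_i),
    clamped at zero since a proportion cannot be negative.
    """
    p_bar = sum(defectives) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma))
    return p_bar, limits

# Illustrative data: defective units found in lots of varying size.
p_bar, limits = p_chart_limits([4, 6, 5], [100, 150, 125])
```

With equal lot sizes the same formula collapses to one pair of limits, which is the np chart case (plotted as counts rather than proportions).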
22. Distributional Models – Relationship to Chart Types - 4
  • c chart – a c chart is used when the count data are samples from a Poisson distribution and the samples all have equal-sized areas of opportunity.
  • u chart – a u chart is used in place of a c chart when the count data are samples from a Poisson distribution and the areas of opportunity are not constant.
    • Defects per thousand lines of code is an example for software.
  • np, p, c, and u charts are the traditional control charts used with attributes data.
  • XmR chart – useful when little is known about the underlying distribution or when the justification for assuming a binomial or Poisson process is questionable.
    • Almost always a reasonable choice.
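A similar sketch for u chart limits under the Poisson model: ū ± 3·sqrt(ū/a), where each area of opportunity a might be KLOC inspected. The defect counts and areas below are illustrative:

```python
import math

def u_chart_limits(defects, areas):
    """Per-sample control limits for a u chart (Poisson model).

    u_bar is the overall rate (total defects / total area); each
    sample gets its own limits u_bar ± 3 * sqrt(u_bar / a_i),
    clamped at zero since a rate cannot be negative.
    """
    u_bar = sum(defects) / sum(areas)
    limits = []
    for a in areas:
        sigma = math.sqrt(u_bar / a)
        limits.append((max(0.0, u_bar - 3 * sigma), u_bar + 3 * sigma))
    return u_bar, limits

# Illustrative data: defects found per module, areas in KLOC inspected.
u_bar, limits = u_chart_limits([12, 8, 20], [2.0, 1.0, 4.0])
```

Smaller areas of opportunity get wider limits, which is why unequal areas cannot simply share one set of c chart limits.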
23. Distributional Models – Relationship to Chart Types - 5
  • More about u charts – u charts seem to have the greatest prospects for use in software settings.
    • u charts require normalization (conversion to rates) when the areas of opportunity are not constant.
    • A Poisson model might be appropriate when counting the number of defects in modules during inspection or testing.
    • Defects per thousand lines of source code is an example of attributes data that is a candidate for u charts.
    • Although u charts may be appropriate for studying software defect densities in an operational environment, we are not aware of any empirical studies that have generally validated the use of Poisson models for nonoperational environments such as inspections.
24. Distributional Models – Relationship to Chart Types - 6
  • Defects per module or defects per test are unlikely candidates for u charts, c charts, or any other charts, for that matter:
    • The ratios are not based on equal areas of opportunity – they can't be normalized.
    • There is no reason to expect them to be constant across all modules or tests when the process is in statistical control.
25. Distributional Models – Relationship to Chart Types - 7
  • If you are uncertain as to the model that applies, it can make sense to use more than one set of charts.
  • If you think you may have a Poisson situation but are not sure that all conditions for a Poisson process are present, then plotting both a u chart and the corresponding XmR charts should bracket the situation:
    • If both charts point to the same conclusions, you are unlikely to be led astray.
    • If the conclusions differ, then you should investigate your assumptions or the events.
26. Presenting Data
  • While it is simple and easy to compare one number with another, such comparisons are limited and weak:
    • Limited because of the small amount of data used.
    • Weak because both of the numbers are subject to variation.
  • This makes it difficult to determine how much of the difference between the values is due to variation in the numbers and how much is due to real changes in the process.
27. Presenting Data - 2
  • Graphs – two basic graphs are the most helpful in providing the context for interpreting the current value:
    • Time series graph (run chart)
      • Has months or years marked off on the horizontal axis and possible values marked off on the vertical axis.
      • As you move from left to right, there is a passage of time.
      • By visually comparing the current value with the plotted values for the preceding months, you can quickly see whether the current value is unusual or not.
    • Histogram (tally plot)
      • An accumulation of the different values as they occur, without trying to display the time-order sequence.
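The tally-plot idea can be sketched as a small text histogram: accumulate how often each value occurs, ignoring time order, and print one row of marks per bin. The staff-hour values are invented for illustration:

```python
from collections import Counter

def tally_plot(values, width=1):
    """Render a text tally plot (histogram) of observed values.

    Values are grouped into bins of the given width; time order
    is deliberately discarded, unlike in a run chart.
    """
    bins = Counter((v // width) * width for v in values)
    lines = []
    for b in sorted(bins):
        lines.append(f"{b:>4} | {'#' * bins[b]}")
    return "\n".join(lines)

# Illustrative data: product-service staff hours logged per day.
hours = [44, 46, 44, 48, 50, 44, 46, 52, 46, 44]
print(tally_plot(hours, width=2))
```

The run chart answers "is the current value unusual given what came before?"; the tally plot answers "what values does this process typically produce?" Both views of the same data are useful.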
28. Run Charts
  [Figure: run chart of the number of required changes to a module as the project approaches systems test, plotted across the phases Syntax Check, Desk Check, Code Review, Unit Test, Integration and Test, and Systems Test]
29. Histograms
  [Figure: histogram of Product – Service Staff Hours (horizontal axis, 32–56) against Number of Days (vertical axis, 0–20)]
30. Process Control Chart
  [Figure: annotated control chart (chart type and metric fields left blank), showing numerical data taken in time sequence with an Upper Control Limit (UCL), a Center Line (the mean of the data used to set up the chart), and a Lower Control Limit (LCL). Annotations: the upper and lower control limits represent the natural variation in the process; plotted points are either individual measurements or the means of small groups of measurements; a point above or below the control limits suggests that the measurement has a special, preventable or removable cause; the chart is analyzed using standard rules to define the control status of the process, and is used for continuous control of the process over time and prevention of causes.]
  Source: Adrian Burr & Mal Owen, Statistical Methods for Software Quality, 1996
31. Impacts of Poor Data Quality
  • Inability to conduct hypothesis testing and predictive modeling.
  • Inability to manage the quality and performance of software or application development.
  • Ineffective process change instead of process improvement.
  • Ineffective and inefficient testing, causing issues with time to market, field quality, and development costs.
  • Products that are costly to use within real-life usage profiles.
32. References
  • Brassard, Michael & Ritter, Diane. The Memory Jogger II: A Pocket Guide of Tools for Continuous Improvement & Effective Planning. Salem, New Hampshire: GOAL/QPC, 1994.
  • Florac, W.A. & Carleton, A.D. Measuring the Software Process. Addison-Wesley, 1999.
  • Six Sigma Academy. The Black Belt Memory Jogger: A Pocket Guide for Six Sigma Success. Salem, New Hampshire: GOAL/QPC, 2002.
  • Wheeler, Donald J. Understanding Variation: The Key to Managing Chaos. Knoxville, Tennessee: SPC Press, 2000.
