SE Lecture 12 (B.Tech)
Transcript

  • 1. Deliverables by Phase
  • 2. Deliverables by Phase
    Possible Deliverables by Phase
    – Concept: Concept Document, Statement of Work (SOW), Project Charter, RFP & Proposal
    – Requirements: Software Requirements Document (Software Requirements Specification), Work Breakdown Structure (WBS)
    – Analysis: Functional Specification (Top-Level Design Specification), Entity Relationship Diagram, Data Flow Diagram
    – Design: Detailed Design Specification, Object Diagrams, Detailed Data Model, Project Development Plan (Software Development Plan), Coding Standards, Baseline Project Plan
    – Coding and Debugging: Working Code, Quality Assurance Plan, Unit Tests, Configuration Management Plan, Risk Management Plan
    – Systems Testing: Acceptance Test Procedures, Tested Application, Integration Plan, Detailed SQA Test Plan, SQA Test Cases
    – Deployment & Maintenance: Maintenance Specification, Deployed Application, User Documentation, Training Plan
  • 3. Risk management
    • Risk management is concerned with identifying risks and drawing up plans to minimise their effect on a project.
    • A risk is a probability that some adverse circumstance will occur.
      – Project risks affect schedule or resources
      – Product risks affect the quality or performance of the software being developed
      – Business risks affect the organisation developing or procuring the software
  • 4. Software risks
    – Staff turnover (Project): Experienced staff will leave the project before it is finished.
    – Management change (Project): There will be a change of organisational management with different priorities.
    – Hardware unavailability (Project): Hardware which is essential for the project will not be delivered on schedule.
    – Requirements change (Project and product): There will be a larger number of changes to the requirements than anticipated.
    – Specification delays (Project and product): Specifications of essential interfaces are not available on schedule.
    – Size underestimate (Project and product): The size of the system has been underestimated.
    – CASE tool under-performance (Product): CASE tools which support the project do not perform as anticipated.
    – Technology change (Business): The underlying technology on which the system is built is superseded by new technology.
    – Product competition (Business): A competitive product is marketed before the system is completed.
  • 5. The Risk Management Process
    • Risk identification – Identify project, product and business risks
    • Risk analysis – Assess the likelihood and consequences of these risks
    • Risk planning – Draw up plans to avoid or minimise the effects of the risk
    • Risk monitoring – Monitor the risks throughout the project
  • 6. The risk management process
    [Process diagram] Risk identification → (list of potential risks) → Risk analysis → (prioritised risk list) → Risk planning → (risk avoidance and contingency plans) → Risk monitoring → (risk assessment)
  • 7. Risk identification
    • Technology risks
    • People risks
    • Organisational risks
    • Requirements risks
    • Estimation risks
  • 8. Risks and risk types
    – Technology: The database used in the system cannot process as many transactions per second as expected. Software components which should be reused contain defects which limit their functionality.
    – People: It is impossible to recruit staff with the skills required. Key staff are ill and unavailable at critical times. Required training for staff is not available.
    – Organisational: The organisation is restructured so that different management are responsible for the project. Organisational financial problems force reductions in the project budget.
    – Tools: The code generated by CASE tools is inefficient. CASE tools cannot be integrated.
    – Requirements: Changes to requirements which require major design rework are proposed. Customers fail to understand the impact of requirements changes.
    – Estimation: The time required to develop the software is underestimated. The rate of defect repair is underestimated. The size of the software is underestimated.
  • 9. Risk analysis
    • Assess the probability and seriousness of each risk
    • Probability may be – very low – low – moderate – high – very high
    • Risk effects might be – catastrophic – serious – tolerable – insignificant
  • 10. Risk analysis (risk, probability, effects)
    – Organisational financial problems force reductions in the project budget. (Low; Catastrophic)
    – It is impossible to recruit staff with the skills required for the project. (High; Catastrophic)
    – Key staff are ill at critical times in the project. (Moderate; Serious)
    – Software components which should be reused contain defects which limit their functionality. (Moderate; Serious)
    – Changes to requirements which require major design rework are proposed. (Moderate; Serious)
    – The organisation is restructured so that different management are responsible for the project. (High; Serious)
    – The database used in the system cannot process as many transactions per second as expected. (Moderate; Serious)
    – The time required to develop the software is underestimated. (High; Serious)
    – CASE tools cannot be integrated. (High; Tolerable)
    – Customers fail to understand the impact of requirements changes. (Moderate; Tolerable)
    – Required training for staff is not available. (Moderate; Tolerable)
    – The rate of defect repair is underestimated. (Moderate; Tolerable)
    – The size of the software is underestimated. (High; Tolerable)
    – The code generated by CASE tools is inefficient. (Moderate; Insignificant)
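    A minimal Python sketch (not from the slides) of a risk register ranked by the probability and effect categories used above; the numeric orderings and example entries are illustrative assumptions:

      # Rank risks by effect first, then probability, most severe first.
      PROBABILITY = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
      EFFECT = {"insignificant": 1, "tolerable": 2, "serious": 3, "catastrophic": 4}

      risks = [
          ("Organisational financial problems force budget reductions", "low", "catastrophic"),
          ("Cannot recruit staff with the required skills", "high", "catastrophic"),
          ("CASE tools cannot be integrated", "high", "tolerable"),
          ("Code generated by CASE tools is inefficient", "moderate", "insignificant"),
      ]

      ranked = sorted(risks, key=lambda r: (EFFECT[r[2]], PROBABILITY[r[1]]), reverse=True)

      for description, prob, effect in ranked:
          print(f"{effect:>13} / {prob:<9} {description}")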
  • 11. Risk planning
    • Consider each risk and develop a strategy to manage that risk
    • Avoidance strategies – the probability that the risk will arise is reduced
    • Minimisation strategies – the impact of the risk on the project or product will be reduced
    • Contingency plans – if the risk arises, contingency plans are plans to deal with that risk
  • 12. Risk management strategies
    – Organisational financial problems: Prepare a briefing document for senior management showing how the project is making a very important contribution to the goals of the business.
    – Recruitment problems: Alert the customer to potential difficulties and the possibility of delays; investigate buying-in components.
    – Staff illness: Reorganise the team so that there is more overlap of work and people therefore understand each other's jobs.
    – Defective components: Replace potentially defective components with bought-in components of known reliability.
    – Requirements changes: Derive traceability information to assess the impact of requirements changes; maximise information hiding in the design.
    – Organisational restructuring: Prepare a briefing document for senior management showing how the project is making a very important contribution to the goals of the business.
    – Database performance: Investigate the possibility of buying a higher-performance database.
    – Underestimated development time: Investigate buying-in components; investigate use of a program generator.
  • 13. Risk monitoring
    • Assess each identified risk regularly to decide whether it is becoming more or less probable
    • Also assess whether the effects of the risk have changed
    • Each key risk should be discussed at management progress meetings
  • 14. Risk factors (risk type and potential indicators)
    – Technology: Late delivery of hardware or support software; many reported technology problems
    – People: Poor staff morale; poor relationships amongst team members; job availability
    – Organisational: Organisational gossip; lack of action by senior management
    – Tools: Reluctance by team members to use tools; complaints about CASE tools; demands for higher-powered workstations
    – Requirements: Many requirements change requests; customer complaints
    – Estimation: Failure to meet the agreed schedule; failure to clear reported defects
  • 15. Software Measurement & Metrics
  • 16. Software Measurement Objectives
    – Assessing status
      • Projects
      • Products for a specific project or projects
      • Processes
      • Resources
    – Identifying trends
      • Need to be able to differentiate between a healthy project and one that's in trouble
    – Determining corrective action
      • Measurements should indicate the appropriate corrective action, if any is required.
  • 17. Software Measurement Objectives
    • Types of information required to understand, control, and improve projects:
      – Managers
        • What does the process cost?
        • How productive is the staff?
        • How good is the code?
        • Will the customer/user be satisfied?
        • How can we improve?
      – Engineers
        • Are the requirements testable?
        • Have all the faults been found?
        • Have the product or process goals been met?
        • What will happen in the future?
  • 18. The Scope of Software Metrics
    – Cost and effort estimation
    – Productivity measures and models
    – Data collection
    – Quality models and measures
    – Reliability models
    – Performance evaluation and models
    – Structural and complexity metrics
    – Capability-maturity assessment
    – Management by metrics
    – Evaluation of methods and tools
  • 19. The Scope of Software Metrics – some details
    – Possible productivity model [tree diagram]: Productivity is expressed in terms of Cost and Value. Cost breaks down into Personnel and Resources (time, money, hardware and software environment); Value breaks down into Quality (reliability, defects), Quantity (size, functionality) and Complexity (problem difficulty, constraints).
  • 20. The Scope of Software Metrics – some details
    – Software quality model [diagram]: Use (product operation, product revision) → Factors (usability, reliability, efficiency, reusability, maintainability, portability, testability) → Criteria (communicativeness, accuracy, consistency, device efficiency, accessibility, completeness, structuredness, conciseness, device independence, legibility, self-descriptiveness, traceability) → Metrics.
  • 21. Direct and Indirect Metrics
  • 22. Measurement Basics
    • Direct and Indirect Measurement
      – Direct measure – relates an attribute to a number or symbol without reference to any other object or attribute (e.g., height).
      – Indirect measure
        • Used when an attribute must be measured by combining several of its aspects (e.g., density)
        • Requires a model of how measures are related to each other
  • 23. Measurement Basics
    • Direct and Indirect Measures for Software – examples
      – Direct
        • Length of source code (lines of code)
        • Duration of testing process
        • Number of defects discovered during test
        • Time a developer spends on a project
      – Indirect
        • Programmer productivity (LOC / work-months of effort)
        • Module defect density (number of defects / module size)
        • Defect detection efficiency (# defects detected / total defects)
        • Requirements stability (initial # requirements / total # requirements)
        • Test effectiveness ratio (number of items covered / total number of items)
        • System spoilage (effort spent fixing faults / total project effort)
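    A minimal Python sketch showing how a few of the indirect measures above are derived from direct measurements; the field names and sample figures are illustrative assumptions, not project data:

      from dataclasses import dataclass

      @dataclass
      class ProjectMeasures:
          loc: int                    # direct: lines of code
          effort_work_months: float   # direct: total effort
          defects_found_in_test: int  # direct
          total_defects: int          # direct (found in test + found later)

          def productivity(self) -> float:
              """Indirect: LOC per work-month of effort."""
              return self.loc / self.effort_work_months

          def defect_density(self) -> float:
              """Indirect: defects per KLOC."""
              return self.total_defects / (self.loc / 1000)

          def defect_detection_efficiency(self) -> float:
              """Indirect: share of all defects that testing caught."""
              return self.defects_found_in_test / self.total_defects

      p = ProjectMeasures(loc=40_000, effort_work_months=25,
                          defects_found_in_test=180, total_defects=200)
      print(f"Productivity: {p.productivity():.0f} LOC/work-month")
      print(f"Defect density: {p.defect_density():.1f} defects/KLOC")
      print(f"Detection efficiency: {p.defect_detection_efficiency():.0%}")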
  • 24. Quality Models and Measures
  • 25. Software product quality metrics
    • The quality of a product: the “totality of characteristics that bear on its ability to satisfy stated or implied needs”
    • Metrics of the external quality attributes
      – Producer’s perspective: “conformance to requirements”
      – Customer’s perspective: “fitness for use” – the customer’s expectations
  • 26. Quality metrics
    • Two levels of software product quality metrics:
      – Intrinsic product quality
      – Customer-oriented metrics
  • 27. Intrinsic product quality metrics:
      – Reliability: number of hours the software can run before a failure
      – Defect density (rate): number of defects contained in the software, relative to its size
    Customer-oriented metrics:
      – Customer problems
      – Customer satisfaction
  • 28. Intrinsic product quality metrics: Reliability and Defect density
    • Correlated but different!
    • Both are predicted values.
    • Estimated using static and dynamic models
    • Defect: an anomaly in the product (“bug”)
    • Software failure: an execution whose effect does not conform to the software specification
  • 29. Reliability
    Reliability metrics:
      – MTBF (Mean Time Between Failures)
      – MTTF (Mean Time To Failure)
  • 30. MTBF (Mean Time Between Failures):
      – the expected time between two successive failures of a system, expressed in hours
      – a key reliability metric for systems that can be repaired or restored (repairable systems)
      – applicable when several system failures are expected
    For a hardware product, MTBF decreases with its age.
  • 31. MTTF (Mean Time To Failure):
      – the expected time to failure of a system, in reliability engineering
      – metric for non-repairable systems; non-repairable systems can fail only once (for example, a satellite is not repairable)
    Mean Time To Repair (MTTR): average time to restore a system after a failure
    When there are no delays in repair: MTBF = MTTF + MTTR
    Software products are repairable systems!
    Reliability models neglect the time needed to restore the system after a failure: with MTTR = 0, MTBF = MTTF
    Availability = MTTF / MTBF = MTTF / (MTTF + MTTR)
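    A minimal Python sketch of the MTTF/MTTR/MTBF and availability relations above; the uptime and repair durations (in hours) are invented for illustration:

      uptimes_h = [120.0, 95.0, 150.0, 110.0]   # time running before each failure
      repairs_h = [2.0, 1.5, 3.0, 2.5]          # time to restore after each failure

      mttf = sum(uptimes_h) / len(uptimes_h)    # mean time to failure
      mttr = sum(repairs_h) / len(repairs_h)    # mean time to repair
      mtbf = mttf + mttr                        # assumes no delay before repair starts
      availability = mttf / mtbf                # = MTTF / (MTTF + MTTR)

      print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")
      print(f"Availability = {availability:.3%}")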
  • 32. 3.1.2. Defect rate (density)
    • Number of defects per KLOC or per number of Function Points, in a given time unit
    • Example: “The latent defect rate for this product, during the next four years, is 2.0 defects per KLOC.”
    • Crude metric: a defect may involve one or more lines of code
    • Lines Of Code
      – Different counting tools exist
      – The defect rate metric has to be accompanied by the counting method used for LOC
      – Not recommended for comparing defect rates of two products written in different languages
  • 33. Reliability or Defect Rate?
    • Reliability: often used with safety-critical systems such as airline traffic control systems, avionics, weapons (usage profile and scenarios are better defined)
    • Defect density: used in many commercial systems (systems for commercial use)
      – there is no typical user profile
      – development organizations use defect rate for maintenance cost and resource estimates
      – MTTF is more difficult to implement and may not be representative of all customers
  • 34. Customer-Oriented Metrics: Customer Problems Metric
    • Customer problems when using the product: valid defects, usability problems, unclear documentation, user errors
    • Problems per user-month (PUM) metric: PUM = TNP / TNM
      – TNP: total number of problems reported by customers for a time period
      – TNM: total number of license-months of the software during the period = number of installed licenses of the software × number of months in the period
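    A minimal Python sketch of the PUM calculation above; the licence and problem counts are invented purely for illustration:

      def problems_per_user_month(total_problems: int, installed_licenses: int, months: int) -> float:
          """PUM = TNP / TNM, where TNM = installed licenses x months in the period."""
          license_months = installed_licenses * months
          return total_problems / license_months

      # e.g. 90 problems reported over a quarter by a base of 1,500 installed licences
      pum = problems_per_user_month(total_problems=90, installed_licenses=1500, months=3)
      print(f"PUM = {pum:.3f} problems per user-month")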
  • 35. 3.2.2. Customer satisfaction metrics
    • Often measured on a five-point scale:
      1. Very satisfied
      2. Satisfied
      3. Neutral
      4. Dissatisfied
      5. Very dissatisfied
    • IBM: CUPRIMDSO (capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall)
    • Hewlett-Packard: FURPS (functionality, usability, reliability, performance, and service)
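    A small Python sketch (assumed, not from the slides) summarising five-point responses as “percent satisfied” (very satisfied + satisfied); the response counts are made-up example data:

      responses = {
          "very satisfied": 120,
          "satisfied": 180,
          "neutral": 60,
          "dissatisfied": 30,
          "very dissatisfied": 10,
      }

      total = sum(responses.values())
      percent_satisfied = (responses["very satisfied"] + responses["satisfied"]) / total
      print(f"Percent satisfied: {percent_satisfied:.1%} of {total} respondents")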
  • 36. Ishikawa’s Seven Basic Tools for Quality Control
    • Checklist (or Check Sheet) – to facilitate gathering data and to arrange data so it can be easily used later
    • Pareto Diagram – a frequency chart of bars in descending order; the bars are usually associated with types of problems
    • Histogram – a graphic representation of frequency counts of a sample or a population
    • Scatter Diagram – portrays the relationship of two interval variables; can make outliers clear
    • Run Chart – tracks the performance of the parameter of interest over time; used for trend analysis
    • Control Chart – an advanced form of a run chart for situations in which the process capability can be defined
    • Cause-and-Effect Diagram (fishbone diagram) – shows the relationship between a characteristic and the factors that affect that characteristic
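    A minimal Python sketch of the control-chart idea above: a parameter tracked over time against a centre line and control limits (mean ± 3 standard deviations here); the weekly defect-rate figures are invented for illustration:

      import statistics

      defect_rates = [3.1, 2.8, 3.5, 2.9, 3.2, 5.6, 3.0, 2.7]  # e.g. defects/KLOC per week

      mean = statistics.mean(defect_rates)
      sigma = statistics.stdev(defect_rates)
      ucl = mean + 3 * sigma              # upper control limit
      lcl = max(mean - 3 * sigma, 0.0)    # lower control limit (not below zero)

      print(f"centre line = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
      for week, rate in enumerate(defect_rates, start=1):
          flag = "  <-- out of control" if rate > ucl or rate < lcl else ""
          print(f"week {week}: {rate:.1f}{flag}")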
  • 37. Ishikawa’s Seven Basic Tools for Quality Control [diagram]
  • 38. Checklists
    • Summarize the key points of the software development process
    • More effective than lengthy process documents
    • Help ensure that all tasks are complete and the important factors or quality characteristics of each task are covered
    • Examples of checklists:
      – Design review checklist
      – Code inspection checklist
      – Moderator (for review and inspection) checklist
      – Pre-code-integration checklist
      – Entrance and exit criteria for system tests
      – Product readiness checklist
  • 39. Pareto Diagram
    • Identifies the areas that cause most of the problems
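    A minimal Python sketch of a Pareto analysis: defect categories sorted by frequency with each category’s share and the cumulative share; the counts are invented for illustration:

      defects_by_category = {
          "interface": 95,
          "logic": 70,
          "data handling": 40,
          "documentation": 20,
          "standards": 10,
      }

      total = sum(defects_by_category.values())
      cumulative = 0
      for category, count in sorted(defects_by_category.items(), key=lambda kv: kv[1], reverse=True):
          cumulative += count
          print(f"{category:<14} {count:>4}  {count / total:6.1%}  cumulative {cumulative / total:6.1%}")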
  • 40. Pareto Analysis of Software Defects
  • 41. Pareto Diagram of Defects by Component Problem Index
  • 42. Histograms
  • 43. Profile of Customer Satisfaction with a Software Product
  • 44. Scatter Diagram of Program Complexity and Defect Level
  • 45. Correlation of Defect Rates of Reused Components Between Two Platforms
  • 46. Grouping of Reused Components Based on Defect Rate Relationship
  • 47. Pseudo-Control Chart of Test Defect Rate—First Iteration
  • 48. Pseudo-Control Chart of Test Defect Rate—Second Iteration
  • 49. Pseudo-Control Chart of Inspection Effectiveness
  • 50. Cause-and-Effect Diagram
  • 51. Cause-and-Effect Diagram of Design Inspection
  • 52. A Schematic Representation of a Relations Diagram