'Automated Reliability Testing via Hardware Interfaces' by Bryan Bakker

The case study described in this presentation took place at a medical equipment manufacturer. The product developed was a medical X-ray device used during surgical operations. The system generates X-rays (called exposure) and a detector creates images of the patient from the detected X-ray beams (called image acquisition). The image pipeline is real-time, delivering several images per second, so that the surgeon can see, for example, exactly where he is cutting.



The presentation describes the approach taken to develop an automated testing framework for executing reliability test cases and identifying reliability issues. To control the system under test, the existing hardware interfaces (the physical buttons of the various keyboards, handswitches and footswitches) were used to inject actions into the system (with the use of LabVIEW). This was done to minimize the so-called probe effect.
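
To make this concrete, here is a minimal Ruby sketch of what such a hardware-driven test iteration could look like. The class and method names (HardwareAbstractionLayer, press_power_button, press_footswitch) and the timing values are illustrative assumptions, not the project's actual API; in the real framework these calls would be routed through LabVIEW, which generates the electrical signals on the button and switch lines.

    # Hypothetical sketch: driving the system under test through its
    # physical controls, so the software stack is exercised exactly as
    # a user would exercise it (minimal probe effect).
    class HardwareAbstractionLayer
      # In the real setup these methods would forward to LabVIEW.
      def press_power_button
        puts "HW signal: power button pressed"
      end

      def press_footswitch(seconds)
        puts "HW signal: footswitch held for #{seconds}s"
      end
    end

    hal = HardwareAbstractionLayer.new

    # One reliability iteration: start the system and run an acquisition.
    # Verification is left to the log-file analysis described below.
    hal.press_power_button
    sleep 60                 # assumed worst-case boot time
    hal.press_footswitch(5)  # five seconds of image acquisition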



The expected results of the test cases were automatically retrieved from the log files generated by the system. This way the test framework could react to system failures immediately, without wasting valuable test time on scarce test systems. The log files were also used to extract information about the performed actions and failures in order to measure the MTBF (Mean Time Between Failures) of different critical system functions (such as start-up of the system and image acquisition). The Crow-AMSAA model for reliability measurements was chosen to report reliability metrics to the organization. A return-on-investment calculation was performed to get buy-in from senior management, who provided additional funding to develop the testing framework further and to apply the same ideas to different products and projects.
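
As an illustration of this log-driven verification, the following Ruby sketch counts successful and failed runs of each critical function and derives an MTBF figure from them. The one-event-per-line log format and the field names are assumptions made for the example; the presentation does not describe the real log layout.

    # Hypothetical sketch: deriving per-function failure counts and MTBF
    # from the system log. Assumed format: "<timestamp> <function> <result>".
    EVENT = /^(?<time>\d+)\s+(?<function>startup|acquisition)\s+(?<result>ok|failed)$/

    totals   = Hash.new(0)
    failures = Hash.new(0)

    File.foreach("system.log") do |line|
      next unless (m = EVENT.match(line))
      totals[m[:function]]   += 1
      failures[m[:function]] += 1 if m[:result] == "failed"
    end

    totals.each do |function, count|
      # Here "MTBF" is the mean number of operations between failures;
      # for acquisition, accumulated acquisition time would be used instead.
      if failures[function].zero?
        puts "#{function}: #{count} runs, no failures yet"
      else
        mtbf = count.to_f / failures[function]
        puts "#{function}: #{count} runs, #{failures[function]} failures, MTBF #{mtbf.round(1)}"
      end
    end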



The presentation explains the points that were crucial to the success of this approach to automated reliability testing, and briefly outlines future plans and extensions (e.g. operational profiles).

Speaker Notes

  • Founded in 1997. Software services, products (a remote management platform) and product development partner for many companies. Annual turnover: €15 million. 170+ (embedded) software specialists. Secondment and development centre (NL and RU). Located in Eindhoven NL (HQ), Utrecht NL, Herentals B, Moscow RU and Nederweert NL (NBG = electronics engineering). Active for international customers, multinationals as well as small companies. Sioux works for high-tech companies in Europe, among which ASML, Assembléon, Bosch, FEI Company, ICOS Vision Systems, Niko, NXP Semiconductors, Océ, Philips, Siemens, Sony, Thomson and TomTom. Focus on: Consumer Electronics and Telecom (CET), Medical and Homecare (MH), Professional Equipment Manufacturing (PEM), Automotive (A).
  • Existing system: maintenance and new features; quality and testing are embedded in the development and release process. Picture: mention mobility (wheels); the patient lies on a table between the X-ray source (bottom) and the detector (top); mention the different viewing cabinet (important in a later sheet), which is not in the picture.
  • Primary failures: system startup results in a non-working system (failed startup); running acquisitions are aborted. Secondary failures: viewing old/archived images was not defined as a primary function, but when viewing these images results in a non-working system (for example due to a software crash), this counts as a secondary failure. Tertiary failures: not all images are stored correctly on a USB medium; this is not a primary or secondary failure because USB storage media are not considered to be used for archiving (other, much more reliable independent systems exist for archiving medical images), so a failure in this storage function does not count as a reliability hit. Another tertiary failure: X-ray images cannot be printed due to an internal failure.
  • Explain differences: Ruby…. That’s it
  • Explain differences: information flows from the SUT via the HAL to the test case (in the test execution environment); a software interface is also used (e.g. to get state information); a repository stores test cases, libraries and results; a test scheduler was added. The logfile is still there but not in the picture.
  • Same data, but plotted as MTBF (Mean Time Between Failures) rather than cumulative failures. Why? More people tend to understand MTBF. Similar plots exist for acquisition.
  • Independent: a system engineer who is not part of the automation team (thus independent) but has knowledge of the test process and customer use.
  • X1 and X2 are average costs; of course some defects are cheaper and others are more expensive. X1 and X2 were provided by management. Supporting other products is quite easy thanks to the hardware interface. The savings are of course based on these assumptions!
  • Startup MTBF was 500 startups, now 3800 startups. Acquisition MTBF was 49 minutes, now 880 minutes. Note that this is not necessarily the MTBF in the field, since usage in the field is different; the MTBF measurements are based on the reliability tests. The trend of the MTBF can be used during the development project.
  • Only the last point is a technical point.

Presentation Transcript

  • Automated Reliability Testing via Hardware Interfaces. EuroSTAR 2011, Bryan Bakker, November 2011.
  • Contents: Sioux; Intro: the need for action; Increment 1: first success; Buy-in from management; Increment 2: a language for the testers; Increment 3: logfile interpretation; ROI and Crow-AMSAA; Results: key success factors.
  • About Bryan Bakker: test expert. Certifications: ISTQB, TMap, Prince2. Member of the ISTQB Expert Level on Test Automation. Accredited tutor of ISTQB Foundation. Domains: medical systems, professional security systems, semi-industry, electron microscopy. Specialties: test automation, integration testing, design for testability, reliability testing.
  • About Sioux: offices in Eindhoven, Utrecht, Nederweert, Herentals and Moscow (map slide).
  • Intro: the need for action. Medical surgery device: X-ray exposure and acquisition during surgery activities; real-time image chain; mobile device (frequently switched off and on); quality and testing considered important in the organization. Reliability was an issue: "frequent" startup failures, aborted acquisitions. Always safe... but not reliable!
  • The need for action. Reliability issues are nasty to analyse, solve and test; defects are fixed in the field (remember Boehm); impact on other projects (development and test resources); high service costs; troublesome system test (up to 15 cycles!). Chart: the cost of a defect fix rises from requirements through design, implementation and test to operation (Barry Boehm).
  • The need for action. Start with automated reliability tests (simple, short), quick and dirty at first; no software resources were available to help with the automation or to implement test interfaces. Goal: show quickly that reliability issues can be reproduced. Expectation: ... then attention and funding would increase.
  • Increment 1: first success. Hardware interfaces were used to invoke actions on the SUT: buttons on the different keyboards, handswitches, footswitches, different power-switches. LabVIEW generates the hardware signals and the test cases are defined in LabVIEW. Only logfiles are stored; no other verification is performed. No software changes are needed for this approach.
  • Increment 1: first success (picture slide).
  • Increment 1: first success. Simple, but quick first results; multiple reliability issues found; work to do for the developers. Diagram of the test framework, 1st increment: test cases (LabVIEW) control a hardware abstraction layer (LabVIEW), which provides hardware input to the system under test; the output is a log file.
  • Management buy-in. Several defects found were already known as customer issues, but were not reproducible and therefore had no solution. Now the developers could work on them, and the fixes could be tested as well. Several presentations were given explaining the approach. And... it became clear what we are looking for: primary functions should work reliably.
  • Definition of a reliability hit. PF = primary function. Primary function: e.g. startup, acquisition. Non-primary function: e.g. printing, post-viewing.
  • Increment 2: a language for the testers. LabVIEW is not that easy, so a general scripting language (Ruby) was provided, with Ruby interfacing to LabVIEW via an abstraction layer. Development of test libraries was started. Still only control, no verification: log file analysis after the test (tools were used).
  • Increment 2: a language for the testers. Diagram of the test framework, 2nd increment: test cases and libraries (Ruby) control the hardware abstraction layer (LabVIEW), which provides hardware input to the system under test; the output is a log file.
  • Increment 3: logfile interpretation. The logfile is scanned during test case execution to determine pass/fail criteria and to detect system states and act upon them: a hot generator means extensive acquisition is not possible, so other test cases (e.g. power-cycle) are executed until the generator has cooled down (see the scheduling sketch after the transcript). Log file analysis after the test was still performed. Still no software changes in the SUT, but existing interfaces were available now.
  • Increment 3: logfile interpretation. Diagram of the test framework, 3rd increment: a test scheduler (Ruby) and a test execution environment, including test cases and library (Ruby), control the hardware abstraction layer (LabVIEW); input to the system under test is now both hardware and software; results are stored in a repository (test cases + results).
  • Increment 3: logfile interpretation. Best practice: test actions via external interfaces; test verification via the log file and internal state information. System statistics extracted from the logfile: number of startups (succeeded and failed), number of acquisitions (succeeded and failed).
  • Statistics. Reliability hits could be identified from the logfile (semi-automatically): Pareto charts, performance measurements (timing info in the logfile), Crow-AMSAA.
  • Crow-AMSAA failure plot example: cumulative number of failed startups against cumulative number of startups for the individual test runs, on a log-log scale.
  • Crow-AMSAA failure plot example: the best fit through the data points; 100 extra startups correspond to 10 failures.
  • Crow-AMSAA MTBF plot example: Mean Time Between Failures, used to monitor the trend and make predictions (a numeric sketch of this model follows after the transcript).
  • ROI. More than 100 reliability hits were identified. Which ones would have slipped through the other tests? Which ones would the customer complain about? An "independent" analysis of the hits: 8 would have been found in system test, but not earlier; 7 would not have been found, but the customer would complain (and a fix would be necessary).
  • ROI: (8 x X1) + (7 x X2) - costs > 0. Costs (man-hours + material) = 200K Euro. X1, the cost of a defect found in system test: 10K Euro. X2, the cost of a field defect: 200K Euro. 80K + 1.4M - 200K: about 1.2M Euro saved. More money and time became available: implementing and executing more tests, more projects and products.
  • Results. Numerous reliability hits identified and solved; MTBF measured and predicted; startup MTBF increased by a factor of 7.6; acquisition MTBF increased by a factor of 18; more testing hours on the systems; customer satisfaction; more projects wanted this approach; only 5 system test cycles remaining (was 15).
  • Key success factors. Choose the right project at the right time; incremental development (early visible benefit); communication / ROI; clear and simple reporting (Crow-AMSAA); hardware interfaces: low probe effect (not a single false alarm), easily ported to different products.
  • Questions. This case study is described in detail in: Dorothy Graham & Mark Fewster, Experiences of Test Automation: Case Studies of Software Test Automation, ISBN 0321754069.
www.sioux.eu | bryan.bakker@sioux.eu | +31 (0)40 26 77 100
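
The state handling described in the Increment 3 slide (running power-cycle tests while the generator is hot) can be sketched in a few lines of Ruby. Everything here is an illustrative assumption: the LogMonitor class stands in for the component that scans the SUT logfile during execution, and the cooling behaviour is simulated.

    # Hypothetical sketch of log-driven scheduling: while the generator is
    # hot, run tests that need no X-ray exposure (e.g. power cycles); once
    # it has cooled down, resume acquisition tests.
    class LogMonitor
      def initialize
        @temperature = 80   # simulated generator temperature in degrees C
      end

      # Stand-in for parsing the generator state from the logfile;
      # each call simulates ten degrees of cooling.
      def generator_hot?
        (@temperature -= 10) > 40
      end
    end

    def next_test(log)
      log.generator_hot? ? :power_cycle_test : :acquisition_test
    end

    log = LogMonitor.new
    6.times { puts "scheduling #{next_test(log)}" }
    # Prints power_cycle_test a few times, then acquisition_test once the
    # simulated generator has cooled down.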
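
The Crow-AMSAA plots shown in the slides follow the model N(t) = lambda * t^beta, where N is the cumulative number of failures after a cumulative amount of usage t (startups, or minutes of acquisition). Taking logarithms gives log N = log lambda + beta * log t, a straight line on the log-log plot, which is why a best fit can be drawn through the test runs. The Ruby sketch below fits beta and lambda by least squares and derives the instantaneous MTBF as 1 / (lambda * beta * t^(beta - 1)); the data points are invented for illustration, not taken from the project.

    # Sketch of the Crow-AMSAA arithmetic: fit log N = log lambda + beta*log t,
    # then report the instantaneous MTBF used for trending and prediction.

    # (cumulative startups, cumulative failed startups) per test run; invented.
    data = [[100, 4], [300, 9], [700, 16], [1500, 27], [3000, 42]]

    xs = data.map { |t, _| Math.log(t) }
    ys = data.map { |_, n| Math.log(n) }

    # Ordinary least-squares fit on the log-log data.
    count  = xs.size.to_f
    mean_x = xs.sum / count
    mean_y = ys.sum / count
    beta   = xs.zip(ys).sum { |x, y| (x - mean_x) * (y - mean_y) } /
             xs.sum { |x| (x - mean_x)**2 }
    lambda_fit = Math.exp(mean_y - beta * mean_x)

    # The failure intensity is lambda*beta*t^(beta-1); its reciprocal is the
    # instantaneous MTBF. beta < 1 means reliability growth (rising MTBF).
    t    = 3000.0
    mtbf = 1.0 / (lambda_fit * beta * t**(beta - 1))
    puts format("beta = %.2f, MTBF at %d startups: %.0f startups per failure",
                beta, t, mtbf)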