The Journal of the Reliability Analysis Center, Second Quarter 2004

Not All Lessons Learned Systems Are Created Equal

By: Todd Post, Editor of ASK Magazine

Introduction

NASA's Academy of Program and Project Leadership (APPL) has attempted a unique way of capturing and disseminating lessons learned. ASK Magazine, an APPL product, collects lessons learned in the form of stories told by managers and other project practitioners. Published bimonthly, ASK disseminates the lessons inside and out of NASA via a print publication and a web site (Reference 1).

The APPL team members working on ASK capture the lessons by inviting project managers, mostly from NASA, but also from other government agencies, industry, and sometimes academia, to tell their stories about what happened on projects, and to reflect on what they have learned while telling the story. Stories usually focus on specific subject matter: how to use prototyping as a tool to communicate better with a customer, how to let go of a popular person on a project without impacting the performance of the teammates and jeopardizing the project, or how to tailor a review so that it can be a learning experience as much as a milestone. And, as these examples show, the stories can be as various as there are issues to wrestle with on projects.

Each bimonthly issue of ASK contains approximately ten stories, and there are eighteen issues to date (June '04). The print and online versions of the magazine are intended to complement one another. The print edition, an attractive 40-page volume, has a circulation of 6,000 readers, and brings fresh lessons to them every two months.

Inside this issue:
3  Product Assurance Capability (PAC) Quantified
7  The Reliability Implications of Emerging Human Interface Technologies
14 A Strategy for Simultaneous Evaluation of Multiple Objectives
19 SOLE 2004 - "Future Logistics: The Integrated Enterprise"
19 RAC Product News
21 Future Events
22 From the Editor
22 RMSQ Headlines
23 Upcoming November Training

Figure 1. ASK Magazine Lessons Learned Page <http://appl.nasa.gov/ask/archives/searchbylesson/index.html>

RAC is a DoD Information Analysis Center Sponsored by the Defense Technical Information Center
Often the issues are focused on singular themes, and recent ones have included prototyping, reviews, project handoffs, and software project management. The online edition archives all of the back issues, making ASK lessons available to anyone with an Internet connection.

In November of 2000, the author was hired as the editor of ASK. He was not surprised then to learn that ASK was the first storytelling publication attempting to capture lessons learned. Today, however, he is surprised that it remains the only one to his knowledge. While storytelling now has several enthusiasts in the knowledge management community, there are scarcely more than a few successful examples of how it has been integrated into an organization's culture. The author knows little about the dynamics of other organizations but has thought a lot about storytelling and lessons learned and why NASA, particularly APPL, appreciates the relationship between the two as it relates to project management.

It is APPL's view that a lessons-learned system that prescribes a solution to a project management problem is flawed from the start. There is always nuance. The ASK audience astutely recognizes that no single management problem is identical to others.

What Constitutes a Lesson?

We borrow much of our thinking about lessons learned from Donald Schon, whose books The Reflective Practitioner and Educating the Reflective Practitioner lay the groundwork for our work with storytelling (Reference 2). In much broader terms than we apply to project management, Schon argues, "Reflection-in-action…is central to the art through which practitioners cope with the troublesome divergent situations of practice" (Reference 3).

And this is what stories do. They provide a space for reflection. A project manager who is challenged to bring off a deliverable on time and on budget can listen to one of his peers tell a story about similar challenges. The project manager who reflects on the story he has just heard can compare this to his own experience.

It is worth noting that no place in Schon's work does he talk about storytelling as a means of capturing lessons learned. Nevertheless, we do not believe we are skewing his message to suit our paradigm. Our subjects, like Schon's, are practitioners, each of whom "has to see on his own behalf and in his own way the relations between means and methods employed and results achieved. Nobody else can see for him, and he can't see just by being 'told,' although the right kind of telling may guide his seeing and thus help him see what he needs to see" (Reference 4).

In this way, stories are another form of observation. Reading stories of how expert practitioners have solved problems requires even more from the observer. Stories demand we engage with the protagonists. By reflecting on the stories told by other project practitioners, you reframe your experience and think about it against the context of the story being told. That's what makes a story a more gratifying learning experience than many other lessons learned models—because it requires active participation.

Not Any Lesson Will Do

At NASA, lessons packaged as stories are now considered legitimate. This does not reflect, the author would argue, a predisposition towards lessons learned in general. Let us consider the evidence.

In January 2002, the GAO released a report, "NASA: Better Mechanisms Needed for Sharing Lessons Learned," which painted a discouraging picture of knowledge sharing within the agency. "Although NASA requires managers to regularly share important lessons learned from past projects through an agency-wide database," Government Executive reported shortly after the GAO report was released, "only 23 percent of managers surveyed had ever entered information into the system" (Reference 5). NASA managers explained that they neglected the database, known as the Lessons Learned Information System (LLIS), because it was difficult to navigate and it failed to provide them with useful lessons.

Improving the architecture of a database is simple enough. The real problem was it did not provide useful information. If the system provides value, it's likely to get used regardless of deficiencies in its navigation.

Unfortunately, one inference suggested by the GAO report was that NASA managers don't want to share knowledge. Frankly, the author finds that at odds with what he has seen since he started working with NASA project managers on ASK. Who doesn't recall during the Mars encounters the images inside the control room at the Jet Propulsion Laboratory of people congratulating one another, jumping up and down even, when the Spirit and Opportunity rovers delivered signals of their successful landings—or for that matter, the images of agency members consoling one another as they mourned the loss of the Space Shuttle Columbia crew? Clearly, this is not an organization devoid of camaraderie and shared mission.

The success of ASK—6,000 print issues published bimonthly, another 8,000 people receiving the electronic edition—should not surprise anyone who understands that a lessons learned system that is genuinely useful is a sure winner in any organization. In the case of NASA, they know a good thing when they see it.

The Challenge to Your Organization

There is one thing we should recognize about project work in most organizations: an overwhelming number of people who perform this work are practitioners. Across all levels of project work, people learn quicker, smarter, and far better by reflecting on their workplace experiences than by consulting theory. Either they can do this in the privacy of their own thoughts, or far more effectively, as this author would argue, by reflecting on their experiences with—and listening to the experiences of—their colleagues.

Again, stories stimulate reflection. "How might I address that issue?" asks the practitioner of him or herself upon hearing a colleague tell a story about a workplace challenge. That, in turn, leads to the obvious next step: "How have I handled similar challenges?"
This holds true not only at NASA. The author is confident that you have witnessed this, too, in your own organization among all levels of practitioners.

"Because professionalism is still mainly identified with technical expertise," writes Donald Schon in The Reflective Practitioner, "reflection-in-action is not generally accepted—even by those who do it—as a legitimate form of professional knowing" (Reference 6). We should appreciate Schon's insight here, because it may help to explain why it has been so difficult for other storytelling initiatives to get off the ground, even where there are enthusiastic proponents for storytelling within the organization.

There is much to be said for consistency. Our work on ASK Magazine did not come crashing out of the gate with broad acceptance throughout NASA. It was new, it was different, and there was already some cynicism, based on the ineffectiveness of the LLIS, about initiatives aimed at providing lessons to help practitioners to do their jobs better. That we arrive in NASA mailboxes every two months with stories by the "best of the best practitioners" has gone a long way towards winning over skeptics.

Like most fledgling initiatives, we began on a small scale, starting initially as a web-based publication and with a distribution list of a few hundred, mostly NASA project managers who were already happy customers of other APPL products. A print publication followed for marketing purposes, and to address the most common observation about ASK's early issues: "I wish there was a way I could read these stories while I was on the plane."

ASK was fortunate to have a sponsor in APPL Director Dr. Edward Hoffman, whose own work on storytelling with ASK Editor-in-Chief Dr. Alexander Laufer (including a book, Project Management Success Stories) (Reference 7), gave him faith and confidence that with time the magazine would find wide acceptance, and it has. Testimonials about the efficacy of ASK lessons run the gamut from cog engineers to center directors and associate administrators. The push is on throughout the Federal government and across industry to capture knowledge, and find mechanisms like ASK to get that knowledge to the people who need it. And so that's our story. Are you reflecting on yours?

References

1. ASK Magazine can be accessed at <http://appl.nasa.gov/ask>.
2. Schon, D.A., The Reflective Practitioner, Basic Books, New York, 1983; Schon, D.A., Educating the Reflective Practitioner, John Wiley & Sons, Inc., San Francisco, 1987.
3. D.A. Schon (1983), p. 62.
4. D.A. Schon (1987), p. 17.
5. <http://www.govexec.com/dailyfed/0202/021102m1.htm>.
6. D.A. Schon (1983), p. 69.
7. A. Laufer and E. Hoffman (2000), Project Management Success Stories: Lessons of Project Leaders, John Wiley & Sons, New York.

About the Author

Todd Post is the editor of ASK Magazine, and has published other articles about ASK in Knowledge Management (Dec. '01/Jan. '02), the Knowledge Management Review (March/April '02), and Program Manager (Jan./Feb. '03). He welcomes your comments on this article and about ASK Magazine at <tpost@edutechltd.com>.

Product Assurance Capability (PAC) Quantified

By: Ananda Perera, Honeywell Engines Systems & Services

Introduction

The Product Assurance (Reliability, Maintainability, and Quality Assurance (RM&QA)) programs are an integral part of the contractor (supplier) operations and, as such, are planned and developed in conjunction with other activities to attain the following goals:

a. Recognize RM&QA aspects of all programs and provide an organized approach to achieve them.
b. Ensure RM&QA requirements are implemented and completed throughout all program phases of design, development, processing, assembly, test and checkout, and use activities.
c. Provide for the detection, documentation, and analysis of actual and potential discrepancies, system(s) incompatibility, marginal reliability, maintainability and quality, and trends that may result in unsatisfactory conditions.

The RM&QA program provides for participation, by RM&QA personnel, in all phases of the design, development, and manufacturing process. This effort should include reviews and assessments of: human factors, design, hazard analyses, failure mode and effect analyses, test plans, and procedures. Quantifying a single QA metric is difficult; however, R&M together can be quantified, and it is called Product Assurance Capability (PAC). Product Assurance Capability is defined as the combined probability that an item will perform its required functions for the duration of a specified mission profile and that the repair action under given conditions of use is carried out within a stated time interval.

Many times, reliability is represented by MTBF and maintainability is represented by MTTR. These metrics are used to calculate Inherent Availability (Ai), which shows the capability of the end-unit for service. Inherent Availability is the probability that the system/equipment is operating satisfactorily at any point in time when used under stated conditions, where the time considered is operating time and active repair time:

Ai = MTBF / (MTBF + MTTR)
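As a quick numerical illustration, here is a minimal sketch of the inherent availability calculation, assuming Python; the function name is illustrative, and MTTR is converted from minutes to hours so that both terms share a unit, matching the conventions used in Table 1 later in this article.

```python
def inherent_availability(mtbf_hours: float, mttr_minutes: float) -> float:
    """Ai = MTBF / (MTBF + MTTR), with both terms expressed in hours."""
    mttr_hours = mttr_minutes / 60.0
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: MTBF = 30,000 hours, MTTR = 20 minutes
print(round(inherent_availability(30_000, 20), 6))  # 0.999989
```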
Ai becomes a useful term to describe combined reliability and maintainability characteristics. Since this definition of availability is easily measured, it is frequently used as a contract-specified requirement; however it is not a good Product Assurance Capability metric.

Reliability and Maintainability (R&M) Design Philosophy

Reliable equipment has a high probability of performing its required function without failure for a stated period of time when subjected to specified operational conditions of use and environment. The operational use and environment, therefore, need to be taken into account at the outset of the design process. The design should also be robust to expected variations in production processes and quality of materials and components.

The ease with which the equipment can be returned to usable condition after failure and the time needed for preventive maintenance are important design criteria. Those items which need to be removed, adjusted, or inspected most often, for whatever reason, should have the easiest accessibility, so maintainability design is significantly reliability-driven but not reliability-dependent.

R&M, then, are related activities that need to be fully integrated into all other project activities. Treating R&M subsequent to design can lead to a situation where the unreliability and inferior supportability are discovered at the end of development, with the consequent remedial action causing expense and delay.

Reliability and maintainability drive the logistics support aspects and hence have a significant effect on the life cycle cost of the equipment/system.

R&M General Considerations

R&M design philosophy should be applied at all stages of the project life cycle, from initial conceptual studies through to the In-Service phase. R&M directly affect both operational effectiveness and life cycle cost and merit equal consideration with other parameters such as performance, acquisition cost, and project time scale. It requires that the contractor should integrate R&M aspects into each stage of the design activity.

At the conceptual stage, the R&M requirements should be considered at the same time as the performance parameters. They should be justified in terms of operational need (e.g., probability of mission success, available maintenance manpower), so that they will receive due consideration in any subsequent trade-off. As the operational concept develops the R&M requirements should be reviewed.

The design procedure should:

a. Ensure that an analysis is conducted of the operating and environmental conditions, and also ensure that system and sub-system design specifications incorporate the results.
b. Embody R&M design criteria, and evolve a system that is no more complex than is adequate to satisfy its performance requirements.
c. Ensure that the mechanisms of failure and their effects are thoroughly analyzed and understood, that critical features are identified and that the design process aims to reduce the effects of failure modes where possible.
d. Utilize materials and components that are procured to approved quality standards and ensure that, in application, they are subject to stresses that are well within their strength/rating capabilities.
e. Take producibility into account, ensuring that as far as possible, the design is insensitive to the expected variability of the materials, components and production processes.
f. Generate a system that is easy to test, for which failures are accurately diagnosed and isolated, with a configuration that facilitates easy maintenance and repair under field conditions, including the appropriate level of integrated diagnostic capability (Built-in-Test (BIT)).

The Objective of Quality Assurance (QA)

The objective of Quality Assurance is to provide adequate confidence to the customer that the end product or service satisfies the requirements.

The Quality Assurance policy is to ensure, in conjunction with other integrated project and Product Assurance functions, that the required quality is specified, designed-in and will be incorporated, verified and maintained in the relevant hardware, software and associated documentation throughout all project phases, by applying a program where:

• Assurance is provided that all requirements are adequately specified.
• Design rules and methods are consistent with the project requirements.
• Each applicable requirement is verified through a verification program that includes one or more of the following methods: analysis, inspection, test, review of design, and audits.
• Design and performance requirements including the specified margin are demonstrated through a qualification process.
• Assurance is provided that the design is producible and repeatable, and that the specification of the resulting product can be verified and operated within the required operating limits.
• Adequate controls are established for the procurement of components, materials, software and hardware items, and services.
• Fabrication, integration, test and maintenance are conducted in a controlled manner such that the end item conforms to the applicable baseline.
• A nonconformance control system is established and maintained to track non conformances systematically and to prevent reoccurrence.
• Quality records are maintained and analyzed to report and detect trends in a timely manner to support preventive and corrective maintenance actions.
• Inspection, measuring and test equipment and tools are controlled to be accurate for their application.
• Procedures and instructions are established that provide for the identification, segregation, handling, packaging, preservation, storage and transportation of all items.
• Assurance that the operations including post-mission and disposal are carried out in a controlled way and in accordance with the relevant requirements.

R&M Engineering Functions and Tasks

An essential task for an R&M Engineer is estimating the precision of an estimate (say MTBF, MTTR). This is an important task leading to the use of Confidence Intervals.

When we use two-sided confidence bounds (or intervals), we are looking at a closed interval where a certain percentage of the population indicating a result is likely to lie. For example, when dealing with 90% two-sided confidence bounds of (X, Y), we are saying that 90% of the population lies between X and Y.

One-sided confidence bounds are essentially an open-ended version of two-sided bounds. A one-sided bound defines the point where a certain percentage of the population is either higher or lower than the defined point. Most of the time one-sided confidence bounds are used for MTBF and MTTR estimates. MTBF is calculated at the lower one-sided limit (Why? Usually the upper boundary is not known; if the true MTBF is greater than the "lower", the customer will be "happy"), and MTTR is calculated at the upper one-sided limit (Why? Usually the lower boundary is not known; if the true MTTR is less than the "upper", the customer will be "happy"). The Chi-Square (χ2) Distribution can be used to find the confidence intervals of the MTBF or MTTR. When there are no failures in a time period, the Chi-Square Distribution is used to find the MTBF at the lower bound.
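To make the one-sided bound concrete, here is a minimal sketch in Python using scipy. It assumes an exponential time-to-failure model and a time-truncated test; that framing, the function name, and the example numbers are assumptions of this illustration rather than material from the article.

```python
from scipy.stats import chi2

def mtbf_lower_bound(total_time: float, failures: int, confidence: float = 0.90) -> float:
    """One-sided lower confidence limit on MTBF for a time-truncated test:
    MTBF_lower = 2*T / chi-square quantile at `confidence` with 2r + 2 degrees of freedom."""
    dof = 2 * failures + 2
    return 2.0 * total_time / chi2.ppf(confidence, dof)

# 5,000 hours of accumulated test time with 2 failures: the point estimate is
# 2,500 hours, but the 90% lower bound is considerably smaller.
print(round(mtbf_lower_bound(5_000, 2), 1))  # about 939 hours
print(round(mtbf_lower_bound(5_000, 0), 1))  # zero-failure case, about 2171 hours
```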
1. Reliability prediction is a process of mathematically combining the parts and elements of a system to obtain a single numerical figure that represents the system's probability of success. In reliability prediction, we usually assume that all components are required for successful system operation, resulting in the use of a series reliability model for prediction of system reliability. Since we're using a series model, we can predict such parameters as MTBF (MTTF), but the model should not be used for operational reliability parameters such as MTBCF, unless the effect of redundant components is included in the calculation. The goal should be to try to predict system behavior at least to the extent necessary to identify possible risk areas or areas where the system reliability needs to improve to meet requirements.

One method of reliability prediction still popular in the defense contractor community (popular because a lot of contractors are experienced with using it, not necessarily because it's particularly good) is the Parts Count and Parts Stress method of MIL-HDBK-217. In this method, each generic type of component is assigned a basic failure rate that depends on component type and operational environment. The basic failure rate can be adjusted by multiplying it by π factors that account for presumed component quality, manufacturing learning curve, etc. RAC findings have shown that failures also stem from non-component causes, namely design deficiencies, manufacturing defects, poor system management techniques, etc. The RAC PRISM methodology determines an initial base failure rate based on PRISM component models. This failure rate is then modified with system level process assessment factors to give a truer failure rate prediction.

2. The mean time to repair (MTTR) is perhaps the most common and most useful measure of maintainability. It is often included in system or product specifications because it's easier to visualize an average than a probability distribution, and the mean is also easier to include in calculations than a distribution function would be. In general the MTTR of a system is an estimated average elapsed time required to perform corrective maintenance, which consists of fault isolation and correction. For analysis purposes, fault correction is divided into disassembly, interchange, re-assembly, alignment, and checkout tasks. MTTR is a useful parameter that should be used early in the planning and designing stages of a system. The parameter is used in assessing the accessibility and locations of system components, and it highlights those areas of a system that exhibit poor maintainability in order to justify improvement, modifications, or a change of design. The assessed or estimated MTTR (estimating methods are available in MIL-HDBK-472) helps in calculating the life cycle cost of a system, which includes the cost of the average time technicians spend on a repair task.

3. Testability is a measure of the ability to detect system faults and to isolate them at the lowest replaceable component(s). The speed with which faults are diagnosed can greatly influence downtime and maintenance costs. As technology advances continue to increase the capability and complexity of systems, use of automatic diagnostics as a means of Fault Detection Isolation and Recovery (FDIR) substantially reduces the need for highly trained maintenance personnel and can decrease maintenance costs by reducing the erroneous replacement of non-faulty equipment. FDIR systems include both internal diagnostic systems, referred to as built-in-test or built-in-test-equipment (BITE), and external diagnostic systems, referred to as automatic test equipment (ATE). BIT Effectiveness (BITEFF) is the probability of obtaining the correct operational status of the system using BIT. It is a function of: Total System Failure Rate (λ), Fault Detection Capability (FDC), False Alarm Probability (FAP), and the operating time (T) required to conduct BIT. BITEFF is expressed by the following mathematical function:

BITEFF = e^(-λ·T·(1 + FAP)) + [FDC / (1 + FAP)] × [1 - e^(-λ·T·(1 + FAP))]

Minimum (worst case, when T → ∞): BITEFF = FDC / (1 + FAP).
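A small Python sketch of the BITEFF relationship as written above; the function name and the example numbers are illustrative only and do not come from the article.

```python
import math

def bit_effectiveness(fdc: float, fap: float, failure_rate: float, t: float) -> float:
    """BITEFF = exp(-lambda*T*(1+FAP)) + FDC/(1+FAP) * (1 - exp(-lambda*T*(1+FAP))).
    As T grows large this tends to the worst-case value FDC / (1 + FAP)."""
    decay = math.exp(-failure_rate * t * (1.0 + fap))
    return decay + (fdc / (1.0 + fap)) * (1.0 - decay)

# Illustrative values: 95% fault detection coverage, 5% false-alarm probability,
# lambda = 1e-4 failures per hour, 10-hour BIT operating window.
print(round(bit_effectiveness(0.95, 0.05, 1e-4, 10.0), 6))
print(round(0.95 / (1 + 0.05), 6))  # worst-case limit, about 0.904762
```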
If BITEFF is high, Repair Times and MTTR will be reduced. Of potential concern is the fact that false alarms and removals create a lack of confidence in the BIT system to the point where maintenance or operations personnel may ignore fault detection indications.

Product Assurance Capability (PAC) Model Description

The PAC metric is a combination of Reliability and Maintainability Functions based on the Weibull Distribution. The R&M equations are shown in Figures 1 and 2. In these equations Γ(1/β + 1) is the Gamma Function evaluated at the value of (1/β + 1). The "Mathcad" software is used to calculate the R&M values and the PAC values shown in the figures and summarized in Table 1.

Figure 1. Calculation of Reliability Metrics. (Mathcad worksheet: T = 2, the specified mission time in hours; MTBF ranging from 10,000 to 30,000 hours; β = 1, the Weibull shape (exponential) parameter; reliability function R(MTBF) = exp{-[T·Γ(1/β + 1)/MTBF]^β}, plotted against MTBF, e.g., R = 0.99980 at MTBF = 10,000 hours and R = 0.99993 at MTBF = 30,000 hours.)

Figure 2. Calculation of Maintainability Metrics and Product Assurance Capability. (Mathcad worksheet: t = 40, the required restoration time in minutes; MTTR ranging from 14 to 32 minutes; β = 3.3, the Weibull shape (near-normal) parameter; maintainability function M(MTTR) = 1 - exp{-[t·Γ(1/β + 1)/MTTR]^β}, plotted against MTTR. Product Assurance Capability P = R(30000)·M(20) with MTBF = 30,000 hours and MTTR = 20 minutes is P = 0.99890608.)

The Weibull distribution has gained popularity as a time-to-failure distribution. The Weibull distribution is characterized by two parameters, a scale parameter, the characteristic life, η, and a shape parameter, β. The characteristic life, η, is the same as the mean time to failure when β = 1. Often η is replaced for computational convenience by its inverse, λ = 1/η, which can be defined as the failure rate. The two-parameter Weibull distribution is given by f(t) = (β/η)(t/η)^(β-1) exp[-(t/η)^β], t ≥ 0. The reliability function is R(t) = exp[-(t/η)^β].

One reason for the popularity of the Weibull distribution is that times to failure are better described by the Weibull distribution than the exponential. For physics of failure approaches to reliability, the Weibull distribution is preferred. An advantage of the Weibull distribution is that it represents a whole family of curves, which, depending on the choice of β, can represent many other distributions. For example, if β = 1, the Weibull distribution is exactly the one-parameter exponential distribution. A β of approximately 3.3 gives a curve that is very close to the normal distribution. The infant mortality and wear-out portions of the bathtub curve can often be represented by the proper Weibull distribution. In the three-parameter Weibull distribution, a location parameter, γ, is used to account for an initial failure-free operating period or prior use (e.g., burn-in).

In the R&M Functions given in Figures 1 and 2, β = 1 and β = 3.3 are selected, and can be assumed for new equipment. For equipment already in use, Weibull analysis of the failure and repair data needs to be performed to obtain true β values. The calculations of the reliability metrics and maintainability metrics are shown in Figures 1 and 2, respectively. Table 1 summarizes the Ai and PAC metrics.

Table 1. Comparison of Ai and PAC Metrics Using Different Combinations of MTBF, MTTR, t, and T

  MTBF (1)     MTTR (2)    Ai            PAC            t (2)     T (1)
  30,000       20          0.999989      0.998906 (3)   40        2
  10,000       20          0.999967      0.930108       30        1.5
  5,000 (4)    20 (4)      0.999933 (4)  0.502627 (4)   20 (4)    1 (4)
  30,000       15          0.999992      0.999933       40        2
  30,000       10          0.999994      0.999950       30        1.5
  30,000       5           0.999997      0.999966       20        1

Notes:
1. In hours.
2. In minutes.
3. This means that in the given operational environment, 9,989 out of 10,000 systems are available for service at any time in the useful life period.
4. For this combination of MTBF, MTTR, t, and T the difference between Inherent Availability and Product Assurance Capability is high.
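A minimal Python sketch of the R, M, and PAC calculations in Figures 1 and 2, using math.gamma in place of the Mathcad Γ function (the function names are illustrative); it reproduces the first row of Table 1.

```python
from math import exp, gamma

def weibull_reliability(mission_time: float, mtbf: float, beta: float = 1.0) -> float:
    """R = exp(-[T * Gamma(1/beta + 1) / MTBF]^beta), as in Figure 1."""
    return exp(-((mission_time * gamma(1.0 / beta + 1.0) / mtbf) ** beta))

def weibull_maintainability(restoration_time: float, mttr: float, beta: float = 3.3) -> float:
    """M = 1 - exp(-[t * Gamma(1/beta + 1) / MTTR]^beta), as in Figure 2."""
    return 1.0 - exp(-((restoration_time * gamma(1.0 / beta + 1.0) / mttr) ** beta))

# First row of Table 1: MTBF = 30,000 h, MTTR = 20 min, t = 40 min, T = 2 h
R = weibull_reliability(mission_time=2.0, mtbf=30_000.0, beta=1.0)        # ~0.999933
M = weibull_maintainability(restoration_time=40.0, mttr=20.0, beta=3.3)   # ~0.998973
pac = R * M                                                               # ~0.998906
ai = 30_000.0 / (30_000.0 + 20.0 / 60.0)                                  # ~0.999989
print(round(pac, 6), round(ai, 6))
```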
Summary

To achieve high operational effectiveness with low life cycle cost, the RM&QA of systems should be given full consideration at all stages of the procurement cycle. This process should begin at the concept stage of the project and be continued in a disciplined manner as an integral part of the design, development, production, and testing process and subsequently into service.
Operational (Mission & Restoration) Success R&M parameters relate to the probability of failures occurring during a Mission Time that would cause an interruption of that Mission and to the probability of correcting these failures during the required Restoration Time. The PAC Metric represents the overall Operational Success and can be calculated using predicted and/or estimated MTBF and MTTR data. If there is an Inherent Availability requirement, it is recommended that the PAC Metric be used for accuracy and good customer satisfaction.

Glossary of Terms

Reliability is the probability that an item can perform its function under stated conditions for a given amount of time without failure.

Maintainability is the probability that an item can be retained in, or restored to, a specified condition when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance and repair. The term is also used to denote the discipline of studying and improving the maintainability of products (e.g., by reducing the amount of time required to diagnose and repair failures).

MTTF stands for Mean Time To Failure and is represented by the mean life value for a failure distribution of non-repairable units.

MTBF stands for Mean Time Between Failure and is represented by the mean life value for a failure distribution of repairable units.

MTBCF stands for Mean Time Between Critical Failure, and is the average time between failures which causes a loss of a system function defined as "critical" by the user.

MTTR stands for Mean Time To Repair and is represented by the mean life value for a distribution of repair times.

Availability is a performance criterion for repairable systems that accounts for both the reliability and maintainability properties of a component or system. It is defined as the probability that a system is not failed or undergoing a repair action when it needs to be used.

Mission Time is the portion of the up time required to perform a specified mission profile.

Restoration Time is the time taken to restore the delivery of service, when the repair is carried out by an adequately skilled ... (Continued on page 23)

The Reliability Implications of Emerging Human Interface Technologies

By: Kenneth P. LaSala, Ph.D., KPL Systems

Introduction

This article discusses the reliability aspects of several emerging types of human-machine interfaces. These new interfaces are substantially different from the now common interfaces of keyboards, mice, touch pads, and touch screens and the less common voice-driven interfaces. Readers who desire to acquaint or re-acquaint themselves with the fundamentals of the current common types of interfaces are encouraged to consult the RAC guide entitled "A Practical Guide to Developing Reliable Human-Machine Systems and Processes" (Order No. RAC-HDBK-1190, HUMAN). Those who desire a more interactive discussion of the fundamentals and a more extensive discussion of the new technologies should consider the RAC Human Factors (reliability-oriented) short course. Information on the referenced guide and course can be found at the RAC web site at <http://rac.alionscience.com>.

EEG-Based Computer Control

One of the most exciting developments in human-machine interfaces is implementing the control of computers by human thought. Based on the fact that the brain prepares for moving a limb a full half-second before the limb actually is moved, computer scientists at the Fraunhofer Institute for Computer Architecture and Software Technology and the Benjamin Franklin University Clinic, both in Berlin, and the University of British Columbia (Reference 1), among others, have been investigating controlling computers by thought alone. The long-term objective of this research is to create a multi-position, brain-controlled switch that is activated by signals measured directly from an individual's brain. By fitting subjects with an electroencephalograph (EEG) and training the students for approximately 200 hours, the scientists have been able to get the students to move simple objects on a computer screen. The scientists recognize that the interface must be able to determine the intention of the human in a single reading of brain waves. This requires filtering out noise produced by both the brain and the EEG equipment. Two current disadvantages of the EEG approach are that the EEG equipment is still too expensive for commercial use and that a conductive gel is required to ensure a good electrical interface. Figure 1 illustrates the configuration for EEG-based control of computers.

Figure 1. EEG-Based Control of Computers

(Continued on page 10)
The Reliability Implications of Emerging Human Interface Technologies (Continued from page 7)

To understand the reliability implications of this new technology, one should examine the cognitive model of the human. The cognitive model is the most convenient model to use for evaluating the reliability of the human. Other forms, such as a servo-controller model, could be used, but they tend to measure generalized model parameters without providing the insight into the mental process that the cognitive model provides. Figure 2 illustrates the basic cognitive model plus some other influences.

The basic cognitive model consists of long- and short-term memory, a sensing function, an information-processing function, a decision function, and a response function. With the exception of memory, the skill level of the human affects all of the functions in the cognitive model. Time and environmental factors also affect the overall task performance response. Three aspects of the cognitive model are worth noting:

• Data can be obtained in some form for the sensory and motor elements of the cognitive model
• There may be information processing and decision models that can be applied
• The cognitive model "plugs" very nicely into reliability block diagrams and fault trees (as bottom events)

The sensing function can be modeled in two forms according to the sensory modes selected for the task. Note that the most commonly used modes are visual, auditory, and tactile. The first form is what a reliability engineer would consider a "parallel" mode. In this form, one or more of the sensory modes is used in an "or" manner. The other form is when all of the selected sensory modes are required for the task, a serial mode that represents an "and" condition. There are two mechanisms for sensing: similarity matching and frequency gambling. Similarity matching is sensing based on the similarity of the sensed subject to a previously sensed item; e.g., sensing a traffic "stop" sign at an intersection. Frequency gambling is a heightened awareness because a stimulus is likely to happen again; i.e., repeated occurrence of a stimulus. Figure 3 illustrates how the approach to sensory inputs can be modeled.

Information processing and decision-making are both based on rule-based action and comparison-based action. In rule-based action, the human has been given a set of processing or decision rules for application. In comparison-based action, information is processed and decisions are made based on comparisons with previous experience. In this approach, the reliability of the information processing and decision-making functions can be related to the skill level, education, training, and experience of human system components.

Figure 2. Traditional Cognitive Model Plus Other Influences (Adapted from Reliable Human-Machine Systems Developer Training Course, KPL Systems, 1997). The figure links memory to the sensing, information processing, decision, and response functions; skill affects every function except memory, and task duration and environmental factors (ambient noise, temperature, oxygen pressure, humidity, illumination, and vibration) influence overall performance.

Figure 3. Models of the Sensing Function (Adapted from Reliable Human-Machine Systems Developer Training Course, KPL Systems, 1997). Parallel form (detection by any sensory mode, shown for visual, auditory, and tactile modes): R = 1 - Π(i=1..3)(1 - Ri). Serial form (detection requires a combination of sensory modes, e.g., all modes): R = Π(i=1..3) Ri.
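A minimal sketch of the two sensing-function models in Figure 3, assuming Python; the mode reliabilities used in the example are illustrative, not values from the article.

```python
from math import prod

def sensing_reliability_parallel(mode_reliabilities):
    """Any one sensory mode suffices (the "or" form): R = 1 - prod(1 - Ri)."""
    return 1.0 - prod(1.0 - r for r in mode_reliabilities)

def sensing_reliability_series(mode_reliabilities):
    """All selected sensory modes are required (the "and" form): R = prod(Ri)."""
    return prod(mode_reliabilities)

# Illustrative reliabilities for visual, auditory, and tactile modes
modes = [0.99, 0.95, 0.90]
print(round(sensing_reliability_parallel(modes), 6))  # 0.99995
print(round(sensing_reliability_series(modes), 6))    # 0.84645
```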
Responses generally take one of three forms: a speech response, a motor response that involves some visual activity to locate a target and some motor action associated with the target, or a combined speech-motor response in which both motor action and verbal confirmation are required. The reliability of motor activities is driven by the complexity of the action and the skill of the performer. Figure 4 illustrates the most common options for responses.

Figure 4. Common Response Options (Adapted from Reliable Human-Machine Systems Developer Training Course, KPL Systems, 1997). The three options are a speech response, a motor-visual response, and a combined motor-visual-speech response.

This article addresses only the human-computer interface. For most common situations, this only involves the middle line of Figure 4 – a motor-visual response. Although the figure does not show it, there also is a feedback loop that returns back to the sensory (visual, in this case) input part of the cognitive model. For simplicity, this article will neglect the feedback loop. With a focus on the response portion of the cognitive model, the EEG-based computer control provides the upper response path shown in Figure 5. This path is derived from information in the referenced University of British Columbia site (Reference 1). The lower path in that figure shows the traditional response path.

There is an interesting contrast in the two paths in terms of their respective reliabilities. One can obtain an estimate of the reliability of the upper path by applying appropriate electronic hardware prediction methods such as PRISM, if the failure rate data for all of the components is accessible, and by estimating the reliability of the sophisticated software. The software estimation is not likely to be easy because the capability of the software is still evolving. It is not clear that the EEG path software has been subjected yet to the type of testing that would support the use of test-based software reliability prediction methods. If the EEG-based path is to reach commercialization, then certainly there must be a software reliability assessment effort. On the other hand, for the lower path, the reliabilities of the hardware and the software are well known. The intricate part of assessing the reliability of this path is estimating the reliabilities of the motor and visual elements. The reliabilities of these elements are driven by factors discussed in the above referenced RAC handbook and other human factors engineering resources.

However, with the exception of the REHMS-D software developed by the author, methods for estimating the reliabilities of these elements based on physical design factors are not readily available. More information about REHMS-D can be found in Section 5.2.8 of the referenced RAC handbook. Perhaps the simplest approach to a comparative reliability evaluation of the two paths would be an empirical one. The reader should note that the above discussion assumes that the human mental functions proceed correctly. Certainly, human misdirection or other forms of incorrect human mental function would affect either path adversely.

Figure 5. A Comparison of EEG-Based and Traditional Computer Control. The EEG-based interface path runs from the electronic cap through the amplifier, A:D and D:A conversion, and the software and electronics interface, with electronic hardware and interface & CPU reliability factors noted; the standard visual/motor interface path runs from motor action on an input device, with visual feedback, through the interface & CPU and software, with motor, visual, input device, interface & CPU, and software reliability factors noted. Both paths feed information processing and the decision that moves an object on the screen.
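To make the comparison concrete, here is a minimal sketch that treats each path in Figure 5 as a simple series reliability model, assuming Python; every element reliability below is an illustrative placeholder, not a value from the article or from PRISM.

```python
from math import prod

def series_reliability(element_reliabilities):
    """Series (reliability block diagram) model: the path works only if every element works."""
    return prod(element_reliabilities)

# Illustrative placeholders only.
# EEG path: cap, amplifier, A/D-D/A conversion, interface software.
# Traditional path: motor action, visual feedback, input device, interface/CPU, software.
eeg_path = [0.999, 0.9995, 0.9995, 0.98]
traditional_path = [0.995, 0.998, 0.9999, 0.9999, 0.999]
print(round(series_reliability(eeg_path), 4))          # ~0.9780
print(round(series_reliability(traditional_path), 4))  # ~0.9918
```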
Functional Magnetic Resonance Imaging

The concept of reading the mind for a variety of purposes is not particularly new, but current technology greatly enhances the ability of performance monitors and researchers to do so. In particular, Magnetic Resonance Imaging (MRI) of the brain is extensively recognized for its excellent spatial resolution that allows neurological anatomic structures to be viewed in sharp detail.
MRI is a technique for determining which parts of the brain are activated by different types of physical sensation or activity, such as sight, sound or the movement of a subject's fingers. MRI is driven by nuclear magnetic resonance.

The term "functional MRI," fMRI, usually refers to techniques that image a complete brain slice in 20 ms. This "brain mapping" is achieved by setting up an advanced MRI scanner in a special way so that the increased blood flow to the activated areas of the brain shows up on the fMRI scans (see Figure 6). There are several types of fMRI (Reference 2), of which the BOLD (Blood Oxygen Level Dependent) form is the form most commonly used. Researchers at the University of Pennsylvania (Reference 3) are using fMRI to scrutinize the brains of subjects during question-and-answer periods for purposes of lie-detection. These studies require subjects to lie very still in the scanner while performing cognitive tasks.

Figure 6. Example Functional Magnetic Resonance Imaging Scans (From <http://www.musc.edu/psychiatry/fnrd/primer_fmri.htm>)

For those interested in reliable human performance, it is a simple conceptual step to extend to the in-situ performance and condition monitoring of operators. Not only can performance be recorded and analyzed subsequently, but also operator fitness or condition can be monitored in a manner that allows a fatigued operator to be replaced before he or she makes incorrect decisions or takes incorrect actions. Section 6.5 of the above referenced guide provides additional information about how time-on-station and several biological rhythms can affect human performance reliability.

There are several major issues that must be accommodated if condition monitoring is to be considered:

• For each individual, an fMRI "normal" baseline must be established.
• Current equipment requirements preclude in situ monitoring of operators via fMRI.
• If automated human condition monitoring is desired, then very sophisticated diagnostic software must be developed.

The first issue is not as direct as it may look. First, one must develop fMRIs for the normally functioning brain under specified circumstances. One would expect to develop a range of baseline fMRI profiles, not just a single one, to account for the expected range of normal conditions and tasks. Furthermore, each human would require his or her own set of profiles. The second issue is a consequence of the current state of fMRI technology. One cannot just attach a head set to an in situ operator at the present time. However, one near-term approach could be to profile operators before they arrive on station and then to profile them again after a while, such as on a break, to determine their condition. Of course, one could develop suitable instrumentation for in-situ monitoring, but this is more of a very long-range target. The third issue requires the development of reliable diagnostic software that could read fMRI scans and determine whether or not abnormalities exist. This would be somewhat akin to the software that is emerging for the diagnosis of mammograms. Examining these issues shows that much work is required before fMRI could be used in a practical way for human condition monitoring.

Automatic Speech Recognition

Automatic speech recognition (ASR) is an evolving technology that is finding its way into applications such as:

• Dictation systems
• Voice-based communications such as telebanking, voice-mail, database-query systems, information retrieval systems
• System control - automotive, aircraft
• Information systems
• Security systems - speaker verification

Research in automatic speech recognition aims to develop methods and techniques that enable computer systems to accept speech input and to transcribe the recognized utterances into normal orthographic writing (Reference 4). Four basic approaches to attain this goal have been followed and tested over the years:

• Template-based approaches, where the incoming speech is compared with stored units in an effort to find the best match
• Knowledge-based approaches that attempt to emulate the human expert ability to recognize speech
• Stochastic approaches, which exploit the inherent statistical properties of the occurrence and co-occurrence of individual speech sounds
• Approaches which use networks of a large number of simple, interconnected nodes which are trained to recognize speech

All of these approaches require the following elements:

• Determining optimal speech parameters
• Various types of analytical models
• Learning and testing algorithms
• Hardware/software systems

There also are speech comprehension considerations such as speaker identification and speaker verification.

As ASR moves toward commercialization, the reliability of each of the approach elements and speech comprehension considerations becomes important. Incorrect interpretation in ASR in telebanking or system control can have extremely grave consequences. While estimating the reliability of the hardware elements of an ASR system can be accomplished by standard prediction techniques, estimating the reliability of the complex software and the embedded models and algorithms that the software represents is a very complex problem. Also, the potential uses of ASR suggest that there should be standards and specifications for ASR systems and that these documents include requisite levels of reliability and compliance verification requirements.

Other Human Interface Technology Developments

While brain-computer interaction research is opening a new dimension for human-computer interfaces, most interface research and development is focusing on advanced uses of the more commonly recognized visual, auditory, and tactile sensory modes. A convenient reference for some of the research in these other interface technologies is Proceedings of the IEEE, September 2003, Vol. 91, No. 9.

A popular area of research appears to be animated interfaces, in which an operator converses with lifelike computer characters that speak, emote, and gesture. An interesting concept in human-computer interfaces is the use of animated agents to communicate with humans (see Figure 7). These agents are life-like characters that speak, emote, and gesture. According to Ronald Cole of the University of Colorado (Reference 5) and his IEEE Proceedings co-authors, while technology supports the development of these agents, there is a lack of experimental evidence that animated agents improve human-computer interaction. Since face-to-face communication is the most effective, according to Cole, the objective is to create interfaces with animated agents that act and look like humans. Speakers' facial expressions and gestures enhance the spoken message because audio and visual information are presented in parallel. This is a multi-dimensional interface. Much of the work by Cole and others focuses on animated agents that can carry on tutorial, task-oriented dialogs.

Figure 7. University of Colorado Animated Interfaces (<http://mailweb.udlap.mx/~ingrid/caminoreal/Cole.ppt>)

Although most research on such dialogs has focused on verbal communication, nonverbal communication can play many important roles as well, as is suggested in the speech-gesture research mentioned above. The "flip-side" of this research area is having a computer "read" the voice, facial expressions, and gestures of the operator for both control and condition monitoring purposes. The referenced RAC handbook provides an introduction to the advantages and disadvantages of multi-dimensional interfaces.

Haptic (touch-based) feedback now is being explored especially in modern medicine, in which visual-haptic activities play a major role. Sensor-haptic interfaces are playing a significant role in teleoperation systems. Teleoperation systems are an important tool for performing tasks that require the sensor-motor coordination of an operator but where it is physically impossible for an operator to undertake such tasks in situ. The vast majority of these devices supply the operator with both visual and haptic sensory feedback in order that the operator can perform the task at hand as naturally and fluently as possible and as though physically present at the remote site. Closely related to haptic feedback is haptic holography (Reference 6), a combination of computational modeling and multi-modal spatial display. Haptic holography combines various holographic displays with a force feedback device to image freestanding material surfaces with programmatically prescribed behavior.

Conclusions

Interfaces that are based on reading the human brain have a long way to go before they are ready for commercialization. EEG-based computer control appears to be an approach that will mature sooner than other methods. The fMRI technology has potential condition monitoring applications, but it requires much work to bring it to the point of practical use. ASR is evolving rapidly but still requires significant work before it can realize its full potential. Animated interfaces can be constructed now with current computer graphics technologies, but it remains to be seen whether or not they offer a significant advantage over simpler interfaces.

The potential uses of all of the above described emerging human interface technologies demand very high levels of reliability. There are many opportunities – indeed requisite work – for the conduct of reliability analyses, reliability testing, and the writing of standards and specifications with reliability requirements.

References

1. <http://www.ece.ubc.ca/~garyb/BCI.htm>
2. <http://www.musc.edu/psychiatry/fnrd/primer_fmri.htm>
3. <http://amishi.com/lab/facilities/>
4. <http://www.hltcentral.org/page-827.0.shtml>
5. <http://www.is.cs.cmu.edu/SpeechSeminar/Slides/RonCole-September2003.abstract>
6. <http://web.media.mit.edu/~wjp/pubs/thesisAbstract.pdf>, W. Plesniak et al.

About the Author
Kenneth LaSala currently is the Director of KPL Systems, an engineering consulting firm that focuses on reliability, maintainability, systems engineering, human factors, information technology, and process improvement. Dr. LaSala has over 33 years of technical and management experience in engineering. He has managed engineering groups and served as a senior technical staff member in systems engineering, reliability and maintainability (R&M), and product assurance for the Air Force, the Navy, the Army, the Defense Mapping Agency, and NOAA.

Dr. LaSala was the President of the IEEE Reliability Society during 1999-2000 and is the chairman of the IEEE Reliability Society Human Interface Technology Committee. He also currently participates in the DoD Human Factors Engineering Technical Advisory Group and the DoD Advisory Group on Electron Devices. His publications include several papers on R&M, systems requirements analysis, and other engineering topics. He also is the author of a chapter on human-machine reliability in the McGraw-Hill Handbook of Reliability Engineering and Management, a co-author of the IEEE video tutorial on human reliability, and the author of a MIL-HDBK-338 section on the same topic. His research interests include techniques for designing human-machine systems and progressive system engineering approaches. He received a B.S. degree in Physics from Rensselaer Polytechnic Institute, an M.S. in Physics from Brown University, and a Ph.D. in Reliability Engineering from the University of Maryland.

A Strategy for Simultaneous Evaluation of Multiple Objectives
By: Ranjit K. Roy, Ph.D., P.E.

Introduction
Proper measurement and evaluation of performance is the key to comparing the performance of products and processes. When there is only one objective, a carefully defined quantitative evaluation most often serves the purpose. However, when the product or process under study is to satisfy multiple objectives, performances of the subject samples can be scientifically compared only when the individual criteria of evaluation are combined into a single number. This report describes a method in which multiple objectives are evaluated by combining them into an Overall Evaluation Criteria (OEC).

In engineering and scientific applications, measurements and evaluations of performance are everyday affairs. Although there are situations where measured performances are expressed in terms of attributes such as Good, Poor, Acceptable, Deficient, etc., most evaluations can be expressed in terms of numerical quantities (instead of Good and Bad, use 10 and 0). When these performance evaluations are expressed in numbers, they can be conveniently compared to select the preferred candidate. The task of selecting the best product, a better machine, a taller building, a champion athlete, etc. is much simpler when there is only one objective (performance) that is measured in terms of a single number. Consider a product such as a 9 Volt transistor battery whose functional life, expressed in hours, is the only characteristic of concern. Given two batteries, Brand A (20 hours) and Brand B (22.5 hours), it is easy to determine which one is preferable. Now suppose that you are concerned not only about the functional life but also about the unit costs, which are $1.25 for Brand A and $1.45 for Brand B. The decision about which brand of battery is better is no longer straightforward.

Multiple performance objectives (or goals) are quite frequent in the industrial arena. A rational means of combining various performances evaluated by different units of measurement is essential for comparing one product performance or process output with another. In experimental studies like the Design of Experiments (DOE) technique, performances of a set of planned experiments are compared to determine the influence of the factors and the combination of the factor levels that produce the most desirable performance. In this case the presence of multiple objectives poses a challenge for analysis of results. Inability to treat multiple criteria of evaluation (measures of multiple performance objectives) often renders some planned experiments ineffective.

Combining multiple criteria of evaluation into a single number is quite common practice in academic institutions and sporting events. Consider the method of expressing a Grade Point Average (GPA, a single number) as an indicator of a student's academic performance. The GPA is simply determined by averaging the grades of all courses (such as scores in Math, Physics, or Chemistry – individual criteria evaluations) which the student achieves. Another example is a sporting event like a figure skating competition, where all performers are rated on a scale of 0 to 6. The performer who receives 5.92 wins over another whose score is 5.89. How do the judges come up with these scores? People judging such events follow and evaluate each performer on an agreed upon list of items (criteria of evaluation) such as style, music, height of jump, stability of landing, etc. Perhaps each item is scored on a scale of 0 - 6, then the average scores of all judges are averaged to come up with the final scores.

If academic performances and athletic abilities can be evaluated by multiple criteria and expressed in terms of a single number, then why isn't it commonly done in engineering and science? There are no good reasons why it should not be. For a slight extra effort in data reduction, multiple criteria can be easily incorporated in most experimental data analysis schemes.
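To make the battery dilemma above concrete, here is a minimal sketch (not part of the original article) of the kind of combined score that the OEC method developed below produces. The 70/30 relative weights and the worst/best ranges assumed for life and cost are illustrative choices only.

batteries = {"Brand A": {"life_hr": 20.0, "cost_usd": 1.25},
             "Brand B": {"life_hr": 22.5, "cost_usd": 1.45}}

LIFE_WORST, LIFE_BEST = 0.0, 30.0     # assumed range, hours (bigger is better)
COST_WORST, COST_BEST = 2.00, 1.00    # assumed range, dollars (smaller is better)
W_LIFE, W_COST = 70.0, 30.0           # assumed relative weights, totaling 100

def combined_score(life_hr, cost_usd):
    """Normalize each criterion to a unitless fraction, align both so that
    bigger is better, and weight them so the result falls between 0 and 100."""
    life_frac = (life_hr - LIFE_WORST) / (LIFE_BEST - LIFE_WORST)
    cost_frac = 1.0 - (cost_usd - COST_BEST) / (COST_WORST - COST_BEST)
    return W_LIFE * life_frac + W_COST * cost_frac

for name, b in batteries.items():
    print(name, round(combined_score(b["life_hr"], b["cost_usd"]), 1))
# Brand A scores about 69.2 and Brand B about 69.0 under these assumed weights;
# a different weighting would change the ranking, which is exactly the point.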
To understand the extra work necessary, let us examine how scientific evaluation differs from that of student achievement or an athletic event. In academic as well as in athletic performances, all individual evaluations are compiled in the same way, say 0 - 4 (in the case of a student's grades, there are no units). They also carry the same Quality Characteristic (QC), or sense of desirability (the higher the score, the better), and the same Relative Weights (level of importance) for all. Individual evaluations (like the grades in individual courses) can be simply added as long as their (a) units of measurement, (b) sense of desirability, and (c) relative weight (importance) are the same for all courses (criteria). Unfortunately, in most engineering and scientific evaluations, the individual criteria are likely to have different units of measurement, Quality Characteristic, and relative weights. Therefore, methods specific to the application, and that overcome the difficulties posed by differences in the criteria of evaluation, must be devised.

Units of Measurement
Unlike GPA or figure skating, the criteria of evaluation in engineering and science generally have different units of measurement. For example, in an effort to select a better automobile, the selection criteria may consist of: fuel efficiency measured in Miles/Gallon, engine output measured in Horsepower, reliability measured as Defects/1000, etc. When the units of measurement for the criteria are different, they cannot be combined easily. To better understand these difficulties, consider a situation where we are to evaluate two industrial pumps of comparable performance (see Table 1). Based on 60% priority on higher discharge pressure and 40% on lower operating noise, which pump would we select?

Table 1. Performance of Two Brands of Industrial Pumps
Evaluation Criteria    Relative Weight    Pump A         Pump B
Discharge Pressure     60%                160 psi        140 psi
Operating Noise        40%                90 Decibels    85 Decibels
Totals:                                   250 (?)        225 (?)

Pump A delivers more pressure, but is noisier. Pump B has a little lower pressure, but is quieter. What can we do with the evaluation numbers? Could we add them? If we were to add them, what units would the resulting number have? Would the totals be of use? Is Pump A with a total of 250 better than Pump B?

Obviously, addition of numbers (evaluations) with different units of measurement is not permissible. If such numbers are added, the total serves no useful purpose, as we have no units to assign, nor do we know whether a bigger or smaller value is better. If the evaluations are to be added, they must first be made dimensionless (normalized). This can easily be done by dividing all evaluations (such as 160 psi, 140 psi) of a criterion by a fixed number (such as 200 psi), such that the resulting number is a unitless fraction.

Quality Characteristic (QC)
Just because two numbers have the same units, or no units, they may not necessarily be meaningfully added. Consider the following two players' scores (see Table 2) and attempt to determine which player is better.

Table 2. Golf and Basketball Scores of Two Players
Criteria         Relative Weight    Player 1    Player 2    QC
Golf (9 holes)   50%                42          52          Smaller is better
Basketball       50%                28          18          Bigger is better
Total Score                         70          70

Observe that the total of the scores for Player 1 (42 + 28) is 70 and for Player 2 (52 + 18) is also 70. Are these two players of equal caliber? Are the additions of the scores meaningful and logical? Unfortunately, the totals do not reflect the degree by which Player 1 is superior to Player 2 (a score of 42 is better than 52 in golf and a score of 28 is better than 18 in basketball). The total scores are meaningful only when the QC's of both criteria are made the same before they are added together.

One way to combine the two scores is to first change the QC of the golf score by subtracting it from a fixed number, say 100, and then add it to the basketball score. Using the values in Table 2, the new overall scores are:

Overall score for Player 1 = 28 x 0.50 + (100 - 42) x 0.50 = 43
Overall score for Player 2 = 18 x 0.50 + (100 - 52) x 0.50 = 33

The overall scores indicate the relative merit of the players. Player 1, having a score of 43, is a better sportsman compared to Player 2, who has a score of 33.

Relative Weight
In formulating the GPA, the grades of all courses for the student are weighted the same. This approach is generally not valid in scientific studies. For the two players in the earlier example, their skills in golf and basketball were weighted equally. Thus, the relative weight did not influence the judgment about their skills in the games. If the relative weights are not the same for all objectives, the contribution from the individual criteria of evaluation must be multiplied by the respective relative weights. For example, if golf had a relative weight of 40% and basketball had 60%, the computation of the overall scores must reflect the influence of the relative weights as follows:

Overall score for Player 1 = 28 x 0.60 + (100 - 42) x 0.40 = 40
Overall score for Player 2 = 18 x 0.60 + (100 - 52) x 0.40 = 30

The Relative Weight is a subjective number assigned to each individual criterion of evaluation. Generally it is determined by the team during the experiment planning session and is assigned such that the total of all weights is 100 (set arbitrarily).
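A minimal sketch (not part of the original article) of the QC alignment and relative weighting just described, using the Table 2 scores and the fixed reference value of 100 for the golf score:

players = {"Player 1": {"golf": 42, "basketball": 28},
           "Player 2": {"golf": 52, "basketball": 18}}

def overall_score(scores, w_golf=0.50, w_basketball=0.50):
    """Align golf (smaller is better) with basketball (bigger is better) by
    subtracting the golf score from 100, then apply the relative weights."""
    return scores["basketball"] * w_basketball + (100 - scores["golf"]) * w_golf

for name, s in players.items():
    # Equal weights first, then golf weighted 40% and basketball 60%
    print(name, overall_score(s), overall_score(s, w_golf=0.40, w_basketball=0.60))
# Player 1 scores 43.0 and 40.0; Player 2 scores 33.0 and 30.0.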
Thus, when the preceding general concerns are addressed, the criteria of evaluation for any product or process performance can be combined into a single number, as demonstrated in the following application example.

An Example Application
A group of process engineers and researchers, involved in manufacturing baked food products, planned an experiment to determine the "best" recipe for one of their current brands of cakes. Surveys showed that the "best" cake is judged on taste, moistness, and smoothness as rated by customers. The traditional approach has been to decide the recipe based on one criterion (say taste) at a time. Experience, however, has shown that when the recipe is optimized based on one criterion, subsequent analyses using other criteria do not necessarily produce the same recipe. When the ingredients differ, optimizing the final recipe becomes a difficult task. Arbitrary or subjectively optimized recipes have not brought the desired customer satisfaction. The group therefore decided to follow a path of consensus decision, and to carefully devise a scientific scheme to incorporate all criteria of evaluation simultaneously into the analysis process.

In the planning session convened for the Cake Baking Experiment, and from subsequent reviews of experimental data, the applicable Evaluation Criteria and their characteristics as shown in Table 3 were identified. Taste, being a subjective criterion, was to be evaluated using a number between 0 and 12, with 12 being assigned to the best tasting cake. Moistness was to be measured by weighing a standard size cake and noting its weight in grams. It was the consensus that a weight of about 40 grams represents the most desirable moistness, which indicates that its Quality Characteristic is "nominal." In this evaluation, results above and below the nominal are considered equally undesirable. Smoothness was measured by counting the number of voids in the cake, which made this a "smaller is better" (QC) evaluation. The relative weights were assigned such that the total was 100. The notations x1, x2, and x3 shown in Table 3 are used to represent the evaluations of any arbitrary sample cake.

Table 3. Evaluation Criteria for Cake Baking Experiments
Criteria Description    Evaluation: Worst    Evaluation: Best    Quality Characteristic (QC)    Relative Weighting
Taste (x1)              0                    12                  Bigger is better               55
Moistness (x2)          25                   40                  Nominal                        20
Smoothness (x3)         8                    2                   Smaller is better              25

Two sample cakes were baked following two separate recipes under study. The performance evaluations for the two samples are as shown in Table 4. Note that each sample is evaluated by all three criteria of evaluation (taste, moistness, and smoothness). The OEC for each sample is created by combining the individual evaluations into a single number (OEC = 66 for sample #1), which represents the performance of the sample cake and can be compared for relative merit. In this case, cake sample #1 with an OEC of 66 is slightly better than sample #2 with an OEC of 64.

Table 4. Trial #1 Evaluations
Criteria      Sample #1    Sample #2
Taste         9.00         8.00
Moistness     34.19        33.00
Smoothness    5.00         4.00
OEC           66.00        64.00

To examine how the OEC of the cake samples is formulated, note that the individual sample evaluations were combined by "appropriate normalization." The term normalization refers to the act of reducing the individual evaluations to dimensionless quantities, aligning their quality characteristics to conform to a common direction (commonly bigger), and allowing each criterion to contribute in proportion to its relative weight. The OEC equation appropriate for the cake baking project is:

OEC = [(x1 - 0)/(12 - 0)] x 55 + [1 - |x2 - 40|/(40 - 25)] x 20 + [1 - (x3 - 2)/(8 - 2)] x 25

The contribution of each criterion is first turned into a fraction (a dimensionless quantity) by dividing the evaluation by a fixed number, such as the difference between the best and the worst among all the respective sample evaluations (12 - 0 for Taste, see Table 3). The numerator represents the evaluation reduced by the smaller in magnitude of the Worst or Best evaluations in the case of bigger and smaller QC's, and by the Nominal value in the case of a Nominal QC. The contributions of the individual criteria are then multiplied by their respective Relative Weights (55, 20, etc.). The Relative Weights, which are used as fractions of 100, assure that the OEC values fall within 0 - 100.

Since Criterion 1 has the highest Relative Weight, all other criteria are aligned to have a Bigger QC. In the case of a Nominal QC, as is the case for Moistness (second term in the equation above), the evaluation is first reduced to its deviation from the nominal value (|x2 - nominal value|). The evaluation reduced to a deviation naturally turns into a Smaller QC. The contributions from Smoothness and Moistness, both of which now have a Smaller QC, are aligned with the Bigger QC by subtracting the normalized fraction from 1. An example calculation of the OEC using the evaluations of cake sample #1 (see Table 4) follows.

Sample calculations:

Trial 1, Sample 1 (x1 = 9, x2 = 34.19, x3 = 5)

OEC = (9/12) x 55 + [1 - (40 - 34.19)/15] x 20 + [1 - (5 - 2)/6] x 25
    = 41.25 + 12.25 + 12.5 = 66 (shown in Table 4)

Similarly, the OEC for the second sample is calculated to be 64. The OEC values are treated as the "Results" for the purposes of the analysis of the results of designed experiments.

The OEC concept was first published by the author in the reference text in 1989. Since then it has been successfully utilized in numerous industrial experiments, particularly those that followed the Taguchi Approach of experimental design. The OEC scheme has been found to work well for all kinds of experimental studies, regardless of whether they utilize designed experiments.
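The OEC computation above is easy to script. The following is a minimal sketch (not from the article) that reproduces the Table 4 values from the Table 3 ranges, QC types, and relative weights:

CRITERIA = {                      # Worst, Best, QC type, and relative weight from Table 3
    "taste":      {"worst": 0.0,  "best": 12.0, "qc": "bigger",  "weight": 55.0},
    "moistness":  {"worst": 25.0, "best": 40.0, "qc": "nominal", "weight": 20.0},
    "smoothness": {"worst": 8.0,  "best": 2.0,  "qc": "smaller", "weight": 25.0},
}

def oec(sample):
    """Combine one sample's evaluations into a single 0-100 OEC value."""
    total = 0.0
    for name, c in CRITERIA.items():
        x = sample[name]
        span = abs(c["best"] - c["worst"])      # normalizing range, e.g. 12 - 0 for taste
        if c["qc"] == "bigger":                 # already bigger-is-better
            frac = (x - c["worst"]) / span
        elif c["qc"] == "smaller":              # align with bigger-is-better
            frac = 1.0 - (x - c["best"]) / span
        else:                                   # nominal: deviation from target, then align
            frac = 1.0 - abs(x - c["best"]) / span
        total += frac * c["weight"]
    return total

print(round(oec({"taste": 9, "moistness": 34.19, "smoothness": 5}), 1))   # 66.0
print(round(oec({"taste": 8, "moistness": 33.00, "smoothness": 4}), 1))   # 64.0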
(Advertisement)

YOU ASKED, AND WE LISTENED
Putting the pieces of Reliability, Availability, Maintainability, Safety and Quality Assurance together.
New Fault Tree Analysis engine: Binary Decision Diagram (BDD) – available now.

ITEM TOOLKIT MODULES
■ MIL-217 Reliability Prediction
■ Bellcore/Telcordia Reliability Prediction
■ 299B Reliability Prediction
■ RDF Reliability Prediction
■ NSWC Mechanical Reliability Prediction
■ Maintainability Analysis
■ Failure Mode, Effects and Criticality Analysis
■ Reliability Block Diagram
■ Fault Tree Analysis
■ Markov Analysis
■ SpareCost Analysis

ITEM QA MODULES
■ Design FMEA
■ Process FMEA
■ Control Plan
■ Document Control and Audit (DCA)
■ Calibration Analysis
■ Concern and Corrective Action Management (CCAR)
■ Statistical Process Control (SPC)

Item Software (USA) Inc., 2190 Towne Centre Place, Suite 314, Anaheim, CA 92806. Tel: 714-935-2900, Fax: 714-935-2911. E-Mail: <itemusa@itemsoft.com>. URL: <www.itemsoft.com>
Item Software (UK) Limited, 1 Manor Court, Barnes Wallis Road, Fareham, Hampshire PO15 5TH, U.K. Tel: +44 (0) 1489 885085, Fax: +44 (0) 1489 885065. E-Mail: <sales@itemuk.com>

Visit our web site at <www.itemsoft.com>, or call us today for a free demo CD and product catalog.
A Strategy for Simultaneous Evaluation of Multiple Objectives (continued)

References
1. Roy, Ranjit K., A Primer on The Taguchi Method, Society of Manufacturing Engineers, P.O. Box 6028, Dearborn, Michigan, USA 48121, ISBN: 087263468X.
2. Roy, Ranjit, Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement, John Wiley & Sons (January 2001), ISBN: 0471361011.
3. Qualitek-4 Software for Automatic Design and Analysis of Taguchi Experiments, <http://rkroy.com/wp-q4w.html>.

About the Author
Ranjit K. Roy, Ph.D., P.E. (M.E.), Nutek, Inc., is a trainer and consultant specializing in the Taguchi approach to quality improvement. He is the author of Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement and A Primer on the Taguchi Method, and of the Qualitek-4 software for design and analysis of Taguchi experiments. He can be contacted by E-mail at <rkroycc@comcast.net>, and additional information is available at <www.rkroy.com/wp-rkr.html>.

SOLE 2004 - "Future Logistics: The Integrated Enterprise"
SOLE – The International Society of Logistics ('SOLE' or 'the Society') will hold its 39th Annual International Conference and Exhibition from 29 August through 2 September 2004 in Norfolk, Virginia. This year's conference theme is "Future Logistics: The Integrated Enterprise." BG Scott G. West, Quartermaster General of the United States Army and Commandant of the U.S. Army Quartermaster Center, will serve as both the Defense Chair and the conference host. Joining him as the Industry Chair is Clayton (Clay) M. Jones, Chairman, President and Chief Executive Officer of Rockwell Collins, selected in January 2004 by Forbes magazine as the "best managed aerospace and defense company in America." Senior leaders from the defense, industry, academic and business communities will participate throughout the conference, not just in sharing their vision and experiences but also in interactive dialogue and participation with the attendees.

Over the three days the symposium will explore the integration, expansion, and connection of the logistics enterprise, both intra-logistics and inter-functional; from tactical to strategic; and from present to future - all supported by the themes of logistics support requirements driving technology development (instead of logistics having to adjust to/for design efficiencies/shortcomings/inadequacies that impact support delivery) and best practices/process improvements that significantly reduce the logistics 'tail/footprint' (i.e., make an impact on the bottom line). Plenary sessions, panels, best practice paper presentations, and the development of white papers reflecting the positions of the defense, industry, academic, and commercial global attendees will address and integrate six major focus areas: Organization for Optimization; Defense/Industry/Commercial/Academic Alliances & Integration; Life Cycle Systems Design; Life Cycle Systems Support; Logistics Chain Management; and Logistics Enterprise Resource Optimization.

The week's offerings include a number of pre-conference workshops on Sunday and Monday, August 29th and 30th; the symposium's technical program on Tuesday through Thursday (August 31st through September 2nd); an exhibition of companies and government agencies, from the opening Exhibitor's Reception on Monday evening, August 30th, through the close of the Exhibit Hall on Wednesday, September 1st; and the Society's Annual Awards Program and Banquet on Thursday evening, September 2nd. In addition, the Tuesday evening reception will be held at Nauticus (The National Maritime Center), conveniently within walking distance of the Norfolk Marriott Waterside (the conference venue).

SOLE – The International Society of Logistics is a non-profit international professional society composed of individuals organized to enhance the art and science of logistics technology, education, and management. The Society is in no way sponsored by any group, company or other association. SOLE was founded in 1966 as the Society of Logistics Engineers "to engage in educational, scientific, and literary endeavors to advance the art of logistics technology and management." For more information, visit SOLE's web site at <www.sole.org/conference.asp>, or contact SOLE Headquarters at <solehq@erols.com> or (301) 459-8446.

RAC Product News
One of the means by which the RAC carries out its responsibility to disseminate information on reliability, maintainability, supportability, and quality (RMSQ) is by developing and selling a variety of products. One look at our catalog (go to <http://rac.alionscience.com/rac/jsp/webproducts/products.jsp> and click on "Download the entire catalog of RAC's Products and Services in PDF format") reveals over 60 products, most of them downloadable, for the RMSQ practitioner and manager. Our newest addition is Jackknife, a PDA reliability application for decision makers on the move. Jackknife contains 8 tools and 5 databases that support split-second, reliability-based decision making, whether in a design review or during concept development. By the time this issue of the Journal is published, our products will include two handbooks, one on systems engineering and another on integrated supply chain management. Also, sometime this summer, a new toolkit on supportability will be available.

We are always trying to ferret out the most pressing needs of the RMSQ community and to identify products to help meet those needs. Do you have ideas for a new RAC product? Let us know by taking the 30-second product survey. Simply go to <http://rac.alionscience.com/productsurvey> and complete the short on-line questionnaire. If you have any questions or comments that you would prefer to share person-to-person, please contact the RAC Product Development Manager, Ned H. Criscimagna, by E-mail at <ncriscimagna@alionscience.com>, or call him at (301) 918-1526.
Future Events in Reliability, Maintainability, Supportability & Quality

2004 International Military & Aerospace/Avionics COTS Conference
August 3-5, 2004, Seattle, WA
Contact: Edward B. Hakim, The C3I Inc., 2412 Emerson Avenue, Spring Lake, NJ 07762
Tel: (732) 449-4729, Fax: (775) 655-0897, E-mail: <e.hakim@att.net>
On the Web: <http://nppp.jpl.nasa.gov/docs/cots2004_cfp.pdf>

22nd International System Safety Conference
August 2-6, 2004, Providence, RI
Contact: Dave O'Keeffe, Raytheon, E-mail: <David_E_Okeeffe@raytheon.com>
On the Web: <http://www.system-safety.org/~22ndissc/>

SOLE 2004 39th Annual International Logistics Conference and Exhibition
August 29 - September 2, 2004, Norfolk, VA
Contact: Sarah R. James, Executive Director, SOLE - The International Society of Logistics, 8100 Professional Place, Suite 111, Hyattsville, MD 20785
Tel: (301) 459-8446, Fax: (301) 459-1522, E-mail: <solehq@erols.com>
On the Web: <http://www.sole.org/conference.asp>

ASTR 2003: Workshop on Accelerated Stress Testing & Reliability
October 1-3, 2004, Seattle, WA
Contact: Mark Gibbel, NASA/JPL, Tel: (818) 542-6979, E-mail: <gibbel@cox.net>
On the Web: <http://www.ewh.ieee.org/soc/cpmt/tc7/ast2003/>

Special Symposia on Contact Phenomena in MEMs
October 24-27, 2004, Long Beach, CA
Contact: Dr. Lior Kogut, University of California at Berkeley, Department of Mechanical Engineering, 5119 Etcheverry Hall, Berkeley, CA 94720-1740
Tel: (510) 642-3270, Fax: (510) 643-5599, E-mail: <kogut@newton.berkeley.edu>
On the Web: <http://www.asmeconferences.org/IJTC04/> (click on Special Symposia on Contact Mechanics)

7th Annual Systems Engineering Conference
October 25-28, 2004, Dallas, TX
Contact: Dania Khan, NDIA, 2111 Wilson Blvd., Suite 400, Arlington, VA 22201
Tel: (703) 247-2587, Fax: (703) 522-1885, E-mail: <dkhan@ndia.org>
On the Web: <http://register.ndia.org/interview/register.ndia?PID=Brochure&SID=_1430NWW>

DoD Maintenance Symposium & Exhibition
October 25-28, 2004, Houston, TX
Contact: Customer Service, SAE World Headquarters, 400 Commonwealth Drive, Warrendale, PA 15096-0001
Tel: (877) 606-7323, Fax: (724) 776-0790, E-mail: <CustomerService@sae.org>
On the Web: <http://www.sae.org/calendar/dod/>

World Aviation Congress
November 2-4, 2004, Reno, NV
Contact: Chris Durante, SAE World Headquarters, 400 Commonwealth Drive, Warrendale, PA 15096-0001
Tel: (520) 621-6120, Fax: (520) 621-8191, E-mail: <cdurante@sae.org>
On the Web: <http://www.sae.org/events/wac/>

30th International Symposium for Testing & Failure Analysis
November 14-18, 2004, Worchester, MA
Contact: Matthew Thayer, Advanced Micro Devices, Austin, TX
Tel: (512) 602-5603, E-mail: <matthew.thayer@amd.com>
On the Web: <http://www.edfas.org/istfa>

CMMI Technology Conference
November 15-18, 2004, Denver, CO
Contact: Dania Khan, NDIA, 2111 Wilson Blvd., Suite 400, Arlington, VA 22201-3061
Tel: (703) 247-2587, Fax: (703) 522-1885, E-mail: <dkhan@ndia.org>
On the Web: <http://www.sei.cmu.edu/cmmi/events/commi-techconf.html>

19th International Maintenance Conference
December 5-8, 2004, Naples Coast, FL
Contact: Reliabilityweb.com, P.O. Box 07070, Fort Myers, FL 33919
Tel: (239) 985-0317, Fax: (309) 423-7234, E-mail: <info@reliabilityweb.com>
On the Web: <http://www.maintenanceconference.com/>

Aging Aircraft 2005
January 31 - February 3, 2005, Palm Springs, CA
Contact: Ric Loeslein, NAVAIR, Aging Aircraft Program, Patuxent River, MD 20670-1161
Tel: (301) 342-2179, Fax: (301) 342-2248, E-mail: <george.loeslein@navy.mil>
On the Web: <http://www.agingaircraft.utcdayton.com/>

Also visit our Calendar web page at <http://rac.alionscience.com/calendar>

The appearance of advertising in this publication does not constitute endorsement by the Department of Defense or RAC of the products or services advertised.
From the Editor

Predictions Remain a Controversial Issue
Given recent E-mails and discussions in our reliability courses, the subject of reliability prediction clearly remains a controversial, and, all-too-often, emotional subject. Although "a rose by any other name …," I often wonder if the controversy would be as intense if we used "assessment" rather than "prediction." Alas, I doubt that we can ever escape our past, and prediction is likely to remain in the reliability vocabulary.

Even if we were to use another term, however, the controversy might remain unless and until some fundamentals become, well, fundamental! What are these fundamentals?

1. Reliability prediction (RP) is any method used to assess the level of reliability that is potentially achievable, or being achieved, at a point in time.
2. RP is a process, not a one-time activity. It begins in early development and continues throughout the life of the system, with different methods used at varying times.
3. No one method of RP is right for every item at all times. The "right" method is the one for which the requisite data are available, and that is appropriate for the intended use of the RP (e.g., comparison, sparing, contractual verification, part characterization, system field performance, etc.)
4. An RP stated at a confidence level is more meaningful than a point estimate.
5. The results of any method used to make an RP must be tempered by an understanding of the method itself, the maturity of the design, and the fidelity of the data.

These fundamentals are not new; to old-time reliability engineers, they are common-sense and familiar to the point of being second-nature. I am unsure if that same level of understanding is enjoyed by others in the acquisition and logistics communities. The arguments I hear against one method or another, and the search for that one magic method, lead me to surmise that this understanding of prediction and its uses remains elusive.

I do not claim to have a monopoly on understanding predictions. I certainly do not have all, or even most, of the answers to making good predictions. As an engineer, however, I do know that we must have some way to quantify reliability as we move through the acquisition process. Furthermore, as a one-time aircraft maintenance officer and acquisition logistics officer, I know that we must have estimates of the number of failures expected for the equipment in our systems to determine the sparing and other logistics resources needed to support our systems. In that light, I think our time is better spent on using the "right" prediction method correctly (see item 3) rather than on arguing that one method is "better" than another.

Ned H. Criscimagna
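Item 4 in the list above can be made concrete with a standard textbook calculation. The following minimal sketch (not part of the column) uses the common chi-squared lower confidence bound on MTBF for a time-truncated test with exponentially distributed times to failure; the test time and failure count are assumed values chosen only to show the idea.

from scipy.stats import chi2

def mtbf_lower_bound(total_test_hours, failures, confidence=0.90):
    """One-sided lower confidence bound on MTBF for a time-truncated test,
    assuming exponentially distributed times to failure:
    theta_L = 2T / (chi-squared quantile at `confidence` with 2r + 2 degrees of freedom)."""
    return 2.0 * total_test_hours / chi2.ppf(confidence, 2 * failures + 2)

T, r = 1000.0, 2                       # assumed: 1,000 test hours, 2 failures observed
print(T / r)                           # point estimate of MTBF: 500 hours
print(mtbf_lower_bound(T, r, 0.90))    # 90% lower bound: about 188 hours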
RMSQ Headlines

Learning from Columbia, QUALITY PROGRESS, published by the American Society for Quality, March 2004, page 38. A year after the space shuttle Columbia disintegrated on reentry into the earth's atmosphere, killing all seven astronauts aboard, lessons learned are emerging from the investigation into the disaster. Rather than attempting to cast blame, the investigation is seeking to improve the processes used in the shuttle program to prepare for and carry out shuttle missions.

Virtual Maintenance, NATIONAL DEFENSE, published by the National Defense Industries Association, March 2004, page 21. As part of its continuing effort to reduce cost of ownership and increase availability, the US Navy is looking to "net-centric" maintenance, also called distance support. By automating maintenance and repair tasks, the Navy hopes to reduce Operating & Support costs by 60 percent.

Changes on the Way for Army Logistics Ops, NATIONAL DEFENSE, published by the National Defense Industries Association, April 2004, page 24. To realize its goal of becoming more of an "expeditionary" force, the US Army is considering making sweeping changes in logistics and support operations. According to Lt. General Claude V. Christianson, Army Deputy Chief of Staff for logistics, expeditionary means being able to "open up the theater and set up a sustainment base" quickly. He says that the inability to do that is a fundamental shortfall the Army must solve.

Case Study: Designing for Quality, QUALITY DIGEST, published by QCI International, April 2004, page 29. The article presents a case study of how one company reduced cycle time and improved first-time yield by implementing standardized product qualification processes in collaboration with suppliers.

Commanders Ponder How Best to Mend Battlefield Logistics, NATIONAL DEFENSE, published by the National Defense Industries Association, May 2004, page 12. Various reports and studies have documented shortcomings in the logistics systems used to support military operations in Iraq. DoD organizations are working to address the problems.

Weapon System Evaluators Must Change, or Risk Irrelevance, Warns Christie, NATIONAL DEFENSE, published by the National Defense Industries Association, May 2004, page 22. Operational testing is facing challenges in several areas. The Director of OT&E discusses the need for change.
Product Assurance ... (Continued from page 7)

repairman who has the necessary tools, equipment, spare parts, etc. Restoration Time is denoted as the active repair time.

FDC is the ratio of the "BIT Detectable System Failure Rate" to the "Total System Failure Rate".

FAP is the ratio of the "BIT False Alarm Rate" to the "Total System Failure Rate" excluding the "Failure Rate of BIT Circuitry".
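A minimal sketch (not from the article) of the two ratios just defined, reading the FAP denominator as the total system failure rate with the BIT circuitry failure rate removed; the failure-rate values are assumptions used only to show the arithmetic, in failures per million hours.

def fdc(bit_detectable_failure_rate, total_system_failure_rate):
    """FDC: BIT-detectable system failure rate divided by total system failure rate."""
    return bit_detectable_failure_rate / total_system_failure_rate

def fap(bit_false_alarm_rate, total_system_failure_rate, bit_circuitry_failure_rate):
    """FAP: BIT false alarm rate divided by the total system failure rate
    excluding the failure rate of the BIT circuitry."""
    return bit_false_alarm_rate / (total_system_failure_rate - bit_circuitry_failure_rate)

print(fdc(450.0, 500.0))          # 0.9
print(fap(25.0, 500.0, 20.0))     # about 0.052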
WinSMITH™ and VisualSMITH™ Weibull software and will get experience using the software on case study problems fromReliability Engineering Statistics industry. Computers are provided for the class. Related tech-The Reliability Statistics Training Course is a three-day, applica- niques Duane/AMSAA Reliability Growth, Log-Normal,tions-oriented course on statistical methods. Designed for the Kaplan-Meier and others will be covered. This class will preparepractitioner, this course covers the main statistical methods used the novice or update the veteran analyst to perform the latestin reliability and life data analysis. The course starts with an probability plotting methods such as warranty data analysis. Itoverview of the main results of probability and reliability theory. is produced and presented by the world-recognized leaders inThen, the main discrete and continuous distributions used in reli- Weibull research.ability data analysis are overviewed. This review of reliabilityprinciples prepares the participants to address the main problems For more information <http://rac.alionscience.com/training>.of estimating, testing and modeling system reliability data. Date: November 2-4, 2004Course materials include the course manual and RAC’s publica- Location: Orlando, FLtion “Practical Statistical Tools for the Reliability Engineer.” Second Quarter - 2004 23
The Journal of the Reliability Analysis Center

Reliability Analysis Center
201 Mill Street
Rome, NY 13440-6916

PRSRT STD
US Postage Paid
Utica, NY
Permit #566

(315) 337-0900    General Information
(888) RAC-USER    General Information
(315) 337-9932    Facsimile
(315) 337-9933    Technical Inquiries
rac@alionscience.com    via E-mail
http://rac.alionscience.com    Visit RAC on the Web