This document summarizes a presentation given by Fred Schenkelberg at the Applied Reliability Symposium in San Diego, California, in 2007. The presentation argues that MTBF (Mean Time Between Failures) is a poor reliability metric and promotes alternatives such as MTTF (Mean Time To Failure). It covers how MTBF is calculated, the problems with relying on the mean time alone, and why failure distributions, failure models, and other factors such as cost should be considered for a better understanding of reliability.
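The core point, that a single mean hides the shape of the failure distribution, can be shown numerically. The sketch below uses made-up numbers (not from the presentation): two hypothetical designs share a 10,000-hour MTBF, but one follows an exponential model and the other a wear-out Weibull, and their early-failure fractions differ by two orders of magnitude.

```python
import math

def weibull_cdf(t, beta, eta):
    """Fraction of units failed by time t for a Weibull(beta, eta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

MTBF = 10_000.0  # hours; both hypothetical designs share this mean

# Design A: exponential (constant hazard), so the scale equals the mean.
eta_a = MTBF
# Design B: wear-out Weibull (beta = 3); mean = eta * Gamma(1 + 1/beta).
beta_b = 3.0
eta_b = MTBF / math.gamma(1.0 + 1.0 / beta_b)

t = 1_000.0  # the first 10% of the mean life
frac_a = weibull_cdf(t, 1.0, eta_a)     # roughly 9.5% failed
frac_b = weibull_cdf(t, beta_b, eta_b)  # well under 0.1% failed
print(f"failed by {t:.0f} h: A = {frac_a:.2%}, B = {frac_b:.4%}")
```

Same MTBF, very different customer experience in the first 1,000 hours, which is exactly why the mean alone misleads.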
Scientific and Grid Workflow Management (SGS09), by Cesare Pautasso
This document provides an introduction to scientific and grid workflows. It discusses how workflow management systems coordinate multiple distributed computational jobs on grid resources. These systems feature visual programming environments that allow scientists to model workflows as networks of analytical steps involving tasks like database access, data analysis, and computationally intensive jobs submitted to clusters or grids. The document then surveys selected workflow management tools and outlines current research trends in scientific and grid workflows.
MTBF is a common metric among practitioners and users of reliability prediction, safety assurance, and maintenance planning. However, there are a number of significant flaws and limitations with this approach. This presentation goes through those limitations and uses that information to suggest alternatives that may provide much greater insight into product performance.
Today, video data projectors assist users in a variety of settings; by raising the productivity of training sessions, meetings, and seminars, they play a significant role in improving the quality of such gatherings. Some common uses of data projectors are listed below:
1. Classrooms, from primary school to university (a picture is always more expressive and effective than thousands of words; clearly, image-based teaching can be effective even at the lower levels of the education system)
2. Private and semi-private training institutes
3. Executive meeting and conference rooms (where demos and presentations of all kinds take place)
4. Exhibitions and showrooms of private and industrial companies (for displaying promotional material at large scale)
5. Use of data projectors in conference halls and amphitheaters
6. Cinemas
7. Managers and experts at engineering consulting firms
8. Use of video data projectors in home theater
The geographic spread of countries on one hand, and the shortage of specialists in various fields together with rising labor costs on the other, have left organizations and companies without access to all the resources they need.
Videoconferencing is a unique technology that makes live audio and video communication possible between people in different locations, regardless of distance.
The staggering cost of transporting professors, specialists, and managers to attend various meetings (a visible cost), along with the loss of a considerable share of these people's time, energy, and productive output (a hidden cost), has created a strong need for modern communication technologies, videoconferencing in particular.
Beyond its communication features, videoconferencing lets you be present in several places at the same time, something only this technology makes possible.
The document contains KPI data for various sites including E-MTBF (Electrical Mean Time Between Failure), MTBBF (Mean Time Between Board Failure), and uptime for Fusion and Synchro equipment. Charts are presented showing the data for each metric across sites. The PSK site combines data for Synchro AC and CPG equipment. Overall equipment reliability as measured by MTBF, MTBBF, and uptime is reported.
The document discusses design for reliability (DFR) topics including the need for DFR, the DFR process, terminology, Weibull plotting, system reliability, DFR testing, and accelerated testing. It provides details on the DFR process, common reliability terminology such as reliability, failure rate, mean time to failure, and the bathtub curve. It also explains the exponential distribution and Weibull plotting, which are important reliability analysis tools.
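As a concrete illustration of the exponential distribution that the document covers, here is a minimal sketch (the failure rate is an assumed value, not from the source). A point worth noting: under the exponential model only about 36.8% of units survive to the MTTF.

```python
import math

lam = 1e-4        # assumed constant failure rate, failures per hour
mttf = 1.0 / lam  # for the exponential model, MTTF = 1/lambda

def reliability(t, lam):
    """Exponential reliability: probability of surviving to time t."""
    return math.exp(-lam * t)

# R(MTTF) = exp(-1), about 0.368, regardless of lambda.
print(f"MTTF = {mttf:.0f} h, R(MTTF) = {reliability(mttf, lam):.3f}")
```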
DISCUS DFM focuses on characteristic management at an earlier stage in the product lifecycle when a manufacturing engineer is analyzing the detailed design of the part. In fact, by helping to define the applicable specs and annotations to include on the design, DISCUS DFM can actually assist with the definition of the Technical Data Package (TDP).
DISCUS DFM picks up where today’s leading CAD tools leave off, empowering the product team to weigh the key considerations for manufacturing the part. An overview of the flow:
You start DISCUS by opening the native 3D CAD model in the model/drawing panel.
DISCUS will automatically review the model and its associated PMI and add the balloons to the model and the rows in the Bill of Characteristics.
You select the appropriate part family and likely list of manufacturing processes to consider for fabricating the part.
At this point, DISCUS DFM enables you to evaluate the part DFM by applying rules associated with the part’s features and characteristics versus the likely manufacturing processes.
The evaluation of the part against the integrated manufacturing knowledgebase results in a list of pertinent DFM constraints, recommended annotations/PMI for the part, and more.
When you've completed the analysis of the model, you can export the DFM data for review with the DFM engineer or the entire Integrated Product Team.
With DISCUS DFM, you consistently and correctly add the vital details to the design, giving you the ability to manufacture the new part right the first time. DISCUS DFM is the tool to improve the quality and productivity of your engineers.
Design for Reliability (DFR) is an industry-wide practice and a philosophy of considering reliability early in product design and development, to achieve a highly reliable product at a sustainable cost. Physics of Failure (PoF) is recognized as a key approach to implementing DFR in a product design and development process. The author presents a case study illustrating how product failures can be predicted and identified early in the design phase with the help of a quantitative PoF model-based analysis tool.
This PPT is a preview of my recent DFM handbook, “Taoist Directions for Design & Development”, targeted at design engineering professionals, industries, and institutions. I am offering free online consultancy on my ‘Tao of DFM’. For online consultancy as well as detailed implementations, please email erramalingam.ks@gmail.com
Please visit www.dfmablog.com and www.dfmhandbook.com
Er. Ramalingam, DFM & Innovation Consultant
Chennai-90, INDIA
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 introduces the definition of reliability and other reliability characteristics and measurements. Part 2 follows with reliability calculations, estimation of failure rates, and the implications of failure rates for system maintenance and replacement. Part 3 then covers the most important and practical failure-time distributions, how to obtain their parameters, and how to interpret those parameters. Hands-on computation of failure rates and estimation of failure-time distribution parameters will be conducted using standard Microsoft Excel.
Part 2. Reliability Calculations
1. Use of failure data
2. Density functions
3. Reliability function
4. Hazard and failure rates
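The quantities in the Part 2 outline can all be estimated directly from grouped failure data. A minimal sketch, using hypothetical numbers: 100 units on test, with failures counted in consecutive 100-hour intervals.

```python
# Hypothetical grouped failure data: 100 units on test, failures
# counted in consecutive 100-hour intervals.
failures = [20, 15, 12, 10, 8]
n0 = 100
dt = 100.0  # interval width in hours

densities, reliabilities, hazards = [], [], []
survivors = n0
for f in failures:
    densities.append(f / (n0 * dt))        # empirical density f(t)
    reliabilities.append(survivors / n0)   # reliability R(t) at interval start
    hazards.append(f / (survivors * dt))   # hazard rate: failures per surviving unit-hour
    survivors -= f

for i, (d, r, h) in enumerate(zip(densities, reliabilities, hazards)):
    print(f"[{i*dt:5.0f}, {(i+1)*dt:5.0f}) h: f={d:.5f}  R={r:.2f}  h={h:.5f}")
```

Note that the hazard divides by the units still at risk, while the density divides by the original population; the distinction is the heart of this part of the series.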
FRACAS: a method of analyzing the failure codes assigned to individual work orders and identifying common themes and trends. The root causes of the high-impact items are determined, and corrective actions are identified and executed to prevent recurrence of the issue.
This document discusses maintenance and methods for tracking losses and overcoming losses through proper maintenance activities. It defines maintenance as actions to retain or restore equipment to its maximum useful life. The three main types of maintenance are preventive, breakdown, and corrective. Preventive maintenance includes periodic and predictive maintenance. Periodic maintenance involves spare part replacement on a predefined schedule, while predictive maintenance uses equipment like bearing meters to determine condition-based maintenance. Metrics like mean time between failure (MTBF) and mean time to repair (MTTR) are discussed to measure equipment reliability and maintainability. Uptime is also defined as a percentage measure of up-time without downtime.
The document discusses MTBF (mean time between failures), including how to calculate, predict, and test it. It addresses common misconceptions about MTBF and describes a two-day training plan that covers the basics of MTBF as well as how to analyze MTBF reports and predictions. The training provides answers to questions and considers reliability modeling techniques to estimate component and system-level MTBF.
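The basic field calculation such training typically starts from is a point estimate: total operating hours accumulated across the fleet divided by the number of observed failures. A sketch with made-up numbers:

```python
# MTBF point estimate from field data (hypothetical numbers):
# fleet operating hours divided by observed failures.
unit_hours = [2_000.0, 1_500.0, 1_800.0, 2_200.0]  # per-unit operating hours
n_failures = 3

mtbf = sum(unit_hours) / n_failures
print(f"observed MTBF = {mtbf:.0f} h")
```

This estimate says nothing about how failures are distributed in time, which is one of the misconceptions the training addresses.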
MTBF / MTTR, Energized Work TekTalk, Mar 2012, by Energized Work
The document discusses metrics for measuring system availability and reliability such as MTBF, MTTR, and various "nines" of availability. It notes that while increasing availability is important, reducing recovery time after failures through measures like redundant systems, data replication, and disaster recovery plans is also crucial. The key metrics are recovery time objective (RTO) and recovery point objective (RPO) which specify how long a system can be down and how much data can be lost respectively. The document concludes that the right approach depends on each system's specific requirements and that failures will inevitably occur, so the focus should be on rapid recovery.
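The relationship between the metrics in that summary can be sketched in a few lines (the MTBF and MTTR figures below are assumed for illustration). It also demonstrates the document's point that cutting recovery time is as effective as preventing failures: halving MTTR yields exactly the same steady-state availability as doubling MTBF.

```python
mtbf = 500.0  # assumed hours between failures
mttr = 0.5    # assumed hours to recover

availability = mtbf / (mtbf + mttr)
downtime_per_year = (1.0 - availability) * 365 * 24  # hours

print(f"availability = {availability:.5f} ({availability:.3%})")
print(f"expected downtime = {downtime_per_year:.1f} h/year")

# Halving MTTR buys the same availability as doubling MTBF:
assert mtbf / (mtbf + mttr / 2) == (2 * mtbf) / (2 * mtbf + mttr)
```

With these numbers the system sits at roughly "three nines", about 8.8 hours of expected downtime per year.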
The document discusses key principles of design for manufacturing (DFM) including minimizing part count, using standard components and materials, designing for tolerances, collaborating with manufacturing, and understanding production processes and costs. It emphasizes reducing costs at each stage of production from components to assembly to overhead. Designs should be optimized through an iterative process of cost analysis and redesign while considering production volumes and other factors.
Part 3. Failure Time Distributions
1. Constant failure rate distributions
2. Increasing failure rate distributions
3. Decreasing failure rate distributions
4. Weibull analysis – why use Weibull?
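One answer to "why use Weibull?" is that a single family covers all three hazard behaviors in the outline above (shape < 1 decreasing, = 1 constant, > 1 increasing), and its parameters can be fit with nothing more than a regression. A sketch of median-rank regression on hypothetical failure times (Bernard's approximation for the ranks):

```python
import math

# Hypothetical complete failure sample, hours, sorted ascending.
times = sorted([320.0, 710.0, 1150.0, 1680.0, 2400.0, 3900.0])
n = len(times)

# Bernard's approximation for median ranks: F_i ~ (i - 0.3) / (n + 0.4).
# Weibull CDF linearizes as ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta).
xs, ys = [], []
for i, t in enumerate(times, start=1):
    F = (i - 0.3) / (n + 0.4)
    xs.append(math.log(t))
    ys.append(math.log(-math.log(1.0 - F)))

# Least-squares slope is the shape parameter beta.
mx = sum(xs) / n
my = sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
       sum((x - mx) ** 2 for x in xs)
eta = math.exp(mx - my / beta)  # scale, from the intercept -beta*ln(eta)

print(f"shape beta ~ {beta:.2f}, scale eta ~ {eta:.0f} h")
```

The same fit is what Weibull probability paper (or an Excel trendline, as used in the lectures) does graphically.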
Part 1. Reliability Definitions
1. Reliability: a time-dependent characteristic
2. Failure rate
3. Mean time to failure
4. Availability
5. Mean residual life
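Two of the Part 1 definitions, mean time to failure and mean residual life, can be estimated directly from a complete failure-time sample. A sketch with hypothetical data:

```python
# Hypothetical complete failure-time sample, in hours.
times = [150.0, 420.0, 610.0, 900.0, 1300.0, 2100.0]

mttf = sum(times) / len(times)  # sample mean time to failure

def mean_residual_life(times, t):
    """Average remaining life of the units that survived past time t."""
    survivors = [x - t for x in times if x > t]
    return sum(survivors) / len(survivors) if survivors else 0.0

print(f"MTTF = {mttf:.0f} h")
print(f"MRL at t=500 h = {mean_residual_life(times, 500.0):.0f} h")
```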
This seminar session provides an overview of the major aspects of reliability engineering: a general introduction (the definition of reliability, the function of reliability engineering, a brief history of the field, etc.); reliability basics (reliability metrics, commonly used probability distributions, the bathtub curve, reliability demonstration test planning, confidence intervals, Bayesian statistics in reliability, stress-strength interference theory, etc.); accelerated life testing (ALT) (types of ALT, the Arrhenius, inverse power law, Eyring, and temperature-humidity models, etc.); reliability growth (reliability-based and MTBF-based growth models, etc.); systems reliability and availability (reliability block diagrams, non-repairable and repairable systems, reliability modeling of series, parallel, standby, and complex systems, load-sharing reliability, reliability allocation, system availability, Monte Carlo simulation, etc.); and degradation-based reliability (an introduction and how it differs from traditional reliability, etc.).
The document discusses potential issues with using MTBF/MTTF as the primary reliability metric for the defense and aerospace industries. It argues that MTBF/MTTF provides an incomplete view of reliability across the entire product lifecycle and can result in overly optimistic assessments. The document proposes using an alternative metric called Bx/Lx, which specifies the life point where no more than a certain percentage (like 10%) of failures have occurred. This provides a more comprehensive view of reliability focused on early failures. Overall, the document advocates updating reliability metrics and practices to better reflect physical failure mechanisms.
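The Bx/Lx metric the document proposes is easy to compute once a failure distribution is assumed. A sketch under an assumed Weibull wear-out population (parameters made up for illustration), showing how far the B10 life falls below the MTTF:

```python
import math

def b_life(x_percent, beta, eta):
    """Time by which x% of a Weibull(beta, eta) population has failed.
    Inverts F(t) = 1 - exp(-(t/eta)^beta) = x/100."""
    p = x_percent / 100.0
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# Assumed wear-out population: beta = 2, eta = 10,000 h.
beta, eta = 2.0, 10_000.0
b10 = b_life(10, beta, eta)
mttf = eta * math.gamma(1.0 + 1.0 / beta)

print(f"B10 life = {b10:.0f} h vs MTTF = {mttf:.0f} h")
```

Here 10% of units have already failed by roughly a third of the mean life, which is the early-failure visibility the document argues MTBF/MTTF lacks.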
Fault tolerance refers to a system's ability to continue operating correctly even if some components fail. There are three categories of faults: transient, intermittent, and permanent. Fault tolerance is achieved through redundancy, including information, time, and physical redundancy. Reliability is the probability a system will function as intended for a given time. It depends on design, components, and environment. Reliability increases through quality control and redundancy. Maintainability is the probability a failed system can be repaired within a time limit. Availability is the probability a system will be operational when needed. Series systems fail if any component fails, while parallel systems fail only if all components fail.
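The series/parallel rule in the last sentence translates directly into code. A minimal sketch with assumed component reliabilities, showing how redundancy turns three mediocre components into a highly reliable system:

```python
from functools import reduce

def series(rels):
    """A series system works only if every component works."""
    return reduce(lambda a, b: a * b, rels, 1.0)

def parallel(rels):
    """A parallel system fails only if every component fails."""
    return 1.0 - reduce(lambda a, b: a * b, [1.0 - r for r in rels], 1.0)

rels = [0.95, 0.95, 0.95]  # assumed component reliabilities
print(f"series:   {series(rels):.4f}")
print(f"parallel: {parallel(rels):.6f}")
```

In series the reliabilities multiply (about 0.857 here); in parallel the unreliabilities multiply, leaving only a 0.05³ chance of total failure.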
Design for Assembly (DFA) is a vital component of concurrent engineering – the multidisciplinary approach to product development. You might think it strange to begin by thinking about the assembly before you have designed all the components, but you can often eliminate many parts at the conceptual stage, and save yourself a lot of trouble.
This slideshow provides an introduction to the rules that are used in industry to produce affordable, reliable products. It includes the in-depth analysis of two real-world products subjected to a "product autopsy", detailed in photographs, plus tutor notes and recommendations for additional activities including an assembly game.
+++
Thanks for all the interest shown in this presentation... visit Capacify and leave me a message if you have any questions or comments. Also let me know if you'd like to have me as a guest speaker: the in-class 'ease of assembly game' is always fun.
Purpose Statement:
To provide an overview of Design for Manufacturing and Assembly (DFMA) techniques, which are used to minimize product cost through design and process improvements.
1. DFM decisions affect all aspects of the design from conception through production.
2. It is important to consider costs, production methods, time, material availability, and how design impacts performance.
3. Ways to reduce costs include using cheaper materials, less material, standardizing components, and designing for economies of scale in production.
This document provides an overview of a Design for Reliability (DFR) seminar presented by Mike Silverman of Ops A La Carte LLC. The seminar covers DFR concepts and tools over two days, with sessions on topics like planning for reliability, failure mode analysis, accelerated testing techniques, and root cause analysis. The document includes biographical information about Mike Silverman, the seminar schedule and objectives, an overview of the consulting company Ops A La Carte, and a high-level discussion distinguishing DFR from a "toolbox" approach and outlining the key activities in a structured DFR process.
Authors: (i) Prashanth Lakshmi Narasimhan,
(ii) Mukesh Ravichandran
Industry: Automobile, auto-ancillary equipment (turbocharger)
This was presented after the completion of our two-month internship at Turbo Energy Limited during our third-year summer holidays (2013).
The document outlines topics covered in "The Impala Cookbook" published by Cloudera. It discusses physical and schema design best practices for Impala, including recommendations for data types, partition design, file formats, and block size. It also covers estimating and managing Impala's memory usage, and how to identify the cause when queries exceed memory limits.
Nancy Reagan was a First Lady who served from 1981 to 1989, during her husband Ronald Reagan's presidency. She advocated for raising awareness of drug and alcohol abuse and launched the "Just Say No" anti-drug campaign. The document is a nine-page biography of Nancy Reagan's life and accomplishments as First Lady.
This document discusses the importance of equipment experts in applying Reliability Centered Maintenance (RCM) when historical failure data is lacking. It states that while historical data is useful for identifying problem areas, it does not provide all necessary failure information. Equipment experts possess valuable knowledge about potential failures, failures prevented by current maintenance, and how failures impact equipment. An RCM analysis should utilize a facilitated working group approach to capture the expertise and experience of equipment experts to develop an effective maintenance plan.
This PPT is a preview on my recent DFM Handbook-“ Taoist Directions for Design & Development “- targeted to Design Engineering Professionals ,Industries and Institutions .I am offering FREE on-line Consultancy on my ‘Tao of DFM’ .For on-line consultancy as well as detailed implementations please email to erramalingam.ks@gmail.com
Please visit www.dfmablog.com and www.dfmhandbook.com
Er Ramalingam DFM & Innovation Consultant
Chennai -90 INDIA
This is a three parts lecture series. The parts will cover the basics and fundamentals of reliability engineering. Part 1 begins with introduction of reliability definition and other reliability characteristics and measurements. It will be followed by reliability calculation, estimation of failure rates and understanding of the implications of failure rates on system maintenance and replacements in Part 2. Then Part 3 will cover the most important and practical failure time distributions and how to obtain the parameters of the distributions and interpretations of these parameters. Hands-on computations of the failure rates and the estimation of the failure time distribution parameters will be conducted using standard Microsoft Excel.
Part 2. Reliability Calculations
1.Use of failure data
2.Density functions
3.Reliability function
4.Hazard and failure rates
FRACAS: A method of analyzing the failure codes assigned to the individual work orders and identifying common themes and trends. The root cause of the high impact items are determined, with a corrective action identified and executed to prevent reoccurrence of the issue.
This document discusses maintenance and methods for tracking losses and overcoming losses through proper maintenance activities. It defines maintenance as actions to retain or restore equipment to its maximum useful life. The three main types of maintenance are preventive, breakdown, and corrective. Preventive maintenance includes periodic and predictive maintenance. Periodic maintenance involves spare part replacement on a predefined schedule, while predictive maintenance uses equipment like bearing meters to determine condition-based maintenance. Metrics like mean time between failure (MTBF) and mean time to repair (MTTR) are discussed to measure equipment reliability and maintainability. Uptime is also defined as a percentage measure of up-time without downtime.
The document discusses MTBF (mean time between failures), including how to calculate, predict, and test it. It addresses common misconceptions about MTBF and describes a two-day training plan that covers the basics of MTBF as well as how to analyze MTBF reports and predictions. The training provides answers to questions and considers reliability modeling techniques to estimate component and system-level MTBF.
MTBF / MTTR - Energized Work TekTalk, Mar 2012Energized Work
The document discusses metrics for measuring system availability and reliability such as MTBF, MTTR, and various "nines" of availability. It notes that while increasing availability is important, reducing recovery time after failures through measures like redundant systems, data replication, and disaster recovery plans is also crucial. The key metrics are recovery time objective (RTO) and recovery point objective (RPO) which specify how long a system can be down and how much data can be lost respectively. The document concludes that the right approach depends on each system's specific requirements and that failures will inevitably occur, so the focus should be on rapid recovery.
The document discusses key principles of design for manufacturing (DFM) including minimizing part count, using standard components and materials, designing for tolerances, collaborating with manufacturing, and understanding production processes and costs. It emphasizes reducing costs at each stage of production from components to assembly to overhead. Designs should be optimized through an iterative process of cost analysis and redesign while considering production volumes and other factors.
This is a three parts lecture series. The parts will cover the basics and fundamentals of reliability engineering. Part 1 begins with introduction of reliability definition and other reliability characteristics and measurements. It will be followed by reliability calculation, estimation of failure rates and understanding of the implications of failure rates on system maintenance and replacements in Part 2. Then Part 3 will cover the most important and practical failure time distributions and how to obtain the parameters of the distributions and interpretations of these parameters. Hands-on computations of the failure rates and the estimation of the failure time distribution parameters will be conducted using standard Microsoft Excel.
Part 3. Failure Time Distributions
1.Constant failure rate distributions
2.Increasing failure rate distributions
3.Decreasing failure rate distributions
4.Weibull Analysis – Why use Weibull?
This is a three parts lecture series. The parts will cover the basics and fundamentals of reliability engineering. Part 1 begins with introduction of reliability definition and other reliability characteristics and measurements. It will be followed by reliability calculation, estimation of failure rates and understanding of the implications of failure rates on system maintenance and replacements in Part 2. Then Part 3 will cover the most important and practical failure time distributions and how to obtain the parameters of the distributions and interpretations of these parameters. Hands-on computations of the failure rates and the estimation of the failure time distribution parameters will be conducted using standard Microsoft Excel.
Part 1. Reliability Definitions
1.Reliability---Time dependent characteristic
2.Failure rate
3.Mean Time to Failure
4.Availability
5.Mean residual life
This seminar session provides an overview of major aspects of reliability engineering, including general introduction of reliability engineering (definition of reliability, function of reliability engineering, a brief history of reliability, etc.), reliability basics (metrics used in reliability, commonly-used probability distributions in reliability, bathtub curve, reliability demonstration test planning, confidence intervals, Bayesian statistics application in reliability, strength-stress interference theory, etc.), accelerated life testing (ALT) (types of ALT, Arrhenius model, inverse power law model, Eyring model, temperature-humidity model, etc.), reliability growth (reliability-based growth models, MTBF-based growth model, etc.), systems reliability & availability (reliability block diagram, non-repairable or repairable systems, reliability modeling of series systems, parallel systems, standby systems, and complex systems, load sharing reliability, reliability allocation, system availability, Monte Carlo simulation, etc.), and degradation-based reliability (introduction of degradation-based reliability, difference between traditional reliability and degradation-based reliability, etc.).
The document discusses potential issues with using MTBF/MTTF as the primary reliability metric for the defense and aerospace industries. It argues that MTBF/MTTF provides an incomplete view of reliability across the entire product lifecycle and can result in overly optimistic assessments. The document proposes using an alternative metric called Bx/Lx, which specifies the life point where no more than a certain percentage (like 10%) of failures have occurred. This provides a more comprehensive view of reliability focused on early failures. Overall, the document advocates updating reliability metrics and practices to better reflect physical failure mechanisms.
Fault tolerance refers to a system's ability to continue operating correctly even if some components fail. There are three categories of faults: transient, intermittent, and permanent. Fault tolerance is achieved through redundancy, including information, time, and physical redundancy. Reliability is the probability a system will function as intended for a given time. It depends on design, components, and environment. Reliability increases through quality control and redundancy. Maintainability is the probability a failed system can be repaired within a time limit. Availability is the probability a system will be operational when needed. Series systems fail if any component fails, while parallel systems fail only if all components fail.
Design for Assembly (DFA) is a vital component of concurrent engineering – the multidisciplinary approach to product development. You might think it strange to begin by thinking about the assembly before you have designed all the components, but you can often eliminate many parts at the conceptual stage, and save yourself a lot of trouble.
This slideshow provides an introduction to the rules that are used in industry to produce affordable, reliable products. It includes the in-depth analysis of two real-world products subjected to a "product autopsy", detailed in photographs, plus tutor notes and recommendations for additional activities including an assembly game.
Thanks for all the interest shown in this presentation... visit Capacify and leave me a message if you have any questions or comments. Also let me know if you'd like to have me as a guest speaker: the in-class 'ease of assembly game' is always fun.
Purpose Statement:
To provide an overview of Design for Manufacturing and Assembly (DFMA) techniques, which are used to minimize product cost through design and process improvements.
1. DFM decisions affect all aspects of the design from conception through production.
2. It is important to consider costs, production methods, time, material availability, and how design impacts performance.
3. Ways to reduce costs include using cheaper materials, less material, standardizing components, and designing for economies of scale in production.
This document provides an overview of a Design for Reliability (DFR) seminar presented by Mike Silverman of Ops A La Carte LLC. The seminar covers DFR concepts and tools over two days, with sessions on topics like planning for reliability, failure mode analysis, accelerated testing techniques, and root cause analysis. The document includes biographical information about Mike Silverman, the seminar schedule and objectives, an overview of the consulting company Ops A La Carte, and a high-level discussion distinguishing DFR from a "toolbox" approach and outlining the key activities in a structured DFR process.
Authors: (i) Prashanth Lakshmi Narasimhan,
(ii) Mukesh Ravichandran
Industry: Automobile – Auto Ancillary Equipment (Turbocharger)
This was presented after completing our two-month internship at Turbo Energy Limited during our third-year summer holidays (2013).
The document outlines topics covered in "The Impala Cookbook" published by Cloudera. It discusses physical and schema design best practices for Impala, including recommendations for data types, partition design, file formats, and block size. It also covers estimating and managing Impala's memory usage, and how to identify the cause when queries exceed memory limits.
Nancy Reagan was a First Lady who served from 1981 to 1989 during her husband Ronald Reagan's presidency. She advocated for raising awareness of drug and alcohol abuse and launched the "Just Say No" anti-drug campaign. The document is a biography of Nancy Reagan's life and accomplishments as First Lady over the course of 9 pages.
This document discusses the importance of equipment experts in applying Reliability Centered Maintenance (RCM) when historical failure data is lacking. It states that while historical data is useful for identifying problem areas, it does not provide all necessary failure information. Equipment experts possess valuable knowledge about potential failures, failures prevented by current maintenance, and how failures impact equipment. An RCM analysis should utilize a facilitated working group approach to capture the expertise and experience of equipment experts to develop an effective maintenance plan.
While templates can potentially speed up RCM analysis, they require great care and caution if used. Templates often lack necessary context about the operating environment and failure modes may need to be rewritten. It may actually take more time to review and rewrite templated data than starting from scratch. Therefore, templates should only be used if led by an experienced RCM specialist and should not replace engaging directly with equipment experts, who provide invaluable insight into vulnerabilities and solutions.
RCM is a process used to identify what Preventive Maintenance or Condition Based Maintenance you need to implement so you get the Reliability you need from your equipment.
Doing Reliability Centered Maintenance (RCM) helps us take care of our equipment. And, taking care of our equipment is very much like taking care of ourselves.
The document discusses how Reliability Centered Maintenance (RCM) is more than just formulating a proactive maintenance plan. RCM considers all elements that can affect a system's reliability, including operating procedures, equipment design, and plausible failure modes. Identifying these broader failure causes allows solutions that go beyond typical maintenance to improve overall equipment reliability.
RCM principles were originally developed for the airline industry but were intended to be applied more broadly to equipment in any industry. While RCM has sometimes been incorrectly thought to only apply to aircraft, the authors of the first RCM book stated that RCM techniques can be learned and applied to complex equipment in other industries as well. Since the 1970s, RCM has been successfully used across many different industries worldwide to optimize equipment maintenance.
Reliability Centered Maintenance (RCM) focuses organizations on clearly identifying the essential functions of machines to avoid chronic downtime. The first step of RCM, writing functional requirements, ensures machines are only asked to perform tasks they are capable of. Mastering the basics of maintenance and reliability through RCM is increasingly important as technology advances and can help organizations avoid issues that stem from machines not being able to perform required functions from the outset.
The document cautions against blindly following manufacturer recommended maintenance schedules, as the manufacturer does not consider important factors like the operating environment and frequency of equipment use. It recommends sanity-checking maintenance schedules against your specific operating environment and equipment usage, as these can significantly impact maintenance needs compared to manufacturer defaults. Proactive maintenance should be tailored and not treated as a one-size-fits-all approach.
Nancy Regan advises that a criticality analysis is not required to identify RCM candidates when first starting out with RCM. The best assets to analyze initially are those already known to be causing issues like chronic downtime, high costs, or maintenance needs in order to quickly prove the value of RCM. Showing stakeholders meaningful results from addressing high problem equipment will gain support to analyze additional systems through repeated RCM cycles.
There are three ways to conduct Reliability Centered Maintenance (RCM) analysis: outsourcing, single analyst method, and facilitated working group approach. Outsourcing and the single analyst method only satisfy one of the two main ingredients for a successful analysis, which are first-hand knowledge of the asset and understanding of the RCM process. The facilitated working group approach brings together equipment experts led by an RCM facilitator to complete the analysis, and is therefore best as it satisfies both main ingredients.
The maintainer in an RCM analysis meeting had a great idea for a new tool that would make a maintenance task safer and easier. However, when asked why he hadn't suggested it before, he responded that management never listened to his other ideas and didn't care what he thought, so he stopped trying. The document argues that equipment experts are an untapped resource who know the vulnerabilities and solutions, and that organizations should ask for their expertise through activities like RCM analysis to transform themselves.
This document discusses 5 myths about Reliability Centered Maintenance (RCM). It clarifies that RCM is a process to identify failure management strategies, not just a maintenance program. It also notes that RCM is a 7-step process that includes performing a Failure Modes and Effects Analysis in steps 1-4, and considering Condition Based Maintenance in step 6. Additionally, the document states that RCM does not need to be applied to all assets, and can be tailored based on reliability goals. It concludes by asserting that RCM has been successfully applied across many industries worldwide.
This document discusses condition-based maintenance (CBM). CBM allows potential failure conditions to be detected early enough to safely manage the consequences. It does not prevent failure, but provides time to address issues like landing an aircraft or replacing a part before further damage occurs. The key factor for determining CBM task intervals is the potential failure-to-failure (P-F) interval, which is the time from when failure is first detectable to when it actually occurs. CBM is powerful because it allows impending failures to be identified before they happen, providing time for proactive actions to control the consequences of failure.
This document provides an overview of Lean Manufacturing and how it can help businesses. It discusses three common problems in business - wasted effort and resources, using wrong business processes, and wide process variation. Lean Manufacturing tools can address these problems by eliminating waste, standardizing processes, and reducing variation. The document then explains several Lean concepts and tools, such as value stream mapping, just-in-time production, standard work, and visual management systems. The overall goal of Lean is to optimize efficiency and effectiveness in business operations.
An overview of the basic process to create an ALT using one of 6 different approaches. Slides used for presentation to the ASQ Silicon Valley evening meeting on Nov 15th 2017.
We work on projects to improve reliability, but the field data may not be immediately available. Let's explore what you can do to improve the overall program while delivering on your project. Specifically, how do cost and procurement fit in?
Detailed Information: As reliability professionals we often work with a team focused on improving the reliability of a single product or system, drawing on the resources and capabilities of the organization. For me, a reliability project is one product or line, while a program spans the entire organization and lifecycle. We bring specific tools and knowledge, yet rely on the organization's overall reliability culture to be successful.
The overall reliability program may or may not have the field data, root cause analysis, and other elements of information that allow us to effectively solve problems for a specific project. In some cases we have to work to improve the overall program while striving to create a reliable product. Let's explore what you should do when you are building a reliability model for a new project and would like to use previous reliability history.
If the data is not available, what do you do? What are your options? Let's discuss what happens when the procurement team consistently selects the least expensive and least reliable components. You can and should change the way entire departments do business, for the good of the project and the organization. Let's discuss the scope of your role as a reliability engineer.
This Accendo Reliability webinar originally broadcast on 19 May 2015.
1. ARS, North America 2007
San Diego, California USA
Track 2, Session 9
Trapped by MTBF
Fred Schenkelberg
Ops A La Carte, LLC
2. Introduction
How does your organization talk about Reliability?
How do your customers talk about Reliability?
Applied Reliability Symposium, North America 2007
Fred Schenkelberg, Ops A La Carte, LLC Track 2 Session 9 Slide Number: 2
3. Outline
MTBF – calculation
MTBF – a very poor four letter acronym
History of Use
It’s Misleading
A Better Metric
Actually, we’ve been talking about MTTF
4. MTBF Calculation
MTBF = (# hours) / (# failures)
MTBF = 1 / λ
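The arithmetic on this slide can be sketched in a few lines of Python (the unit hours and failure counts below are hypothetical, chosen so the MTBF comes out to the 100 hours used in the next slides):

```python
# MTBF as total operating hours divided by number of failures,
# which is also the reciprocal of the (assumed constant) failure rate.
total_hours = 50_000   # hypothetical: accumulated operating hours across all units
failures = 500         # hypothetical: failures observed in that time

mtbf = total_hours / failures   # 100.0 hours
failure_rate = 1 / mtbf         # lambda = 0.01 failures per hour

print(mtbf, failure_rate)       # 100.0 0.01
```

Note the buried assumption: dividing hours by failures only equals 1/λ when the failure rate is constant, which is exactly the assumption the rest of the presentation questions.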
5. Mean (M)
The mean in MTBF
What does it mean to you? (no pun intended!)
Average?
6. Start 1000 units, MTBF = 100 hrs
[Chart: surviving units vs. hours, 0 to 100; 368 still alive at 101 hours]
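The 368-of-1000 figure on this slide follows directly from the exponential model: R(MTBF) = e^(−1) ≈ 0.368, so only about 37% of units survive to the "mean" life. A quick check, assuming exponential lifetimes and MTBF = 100 hours:

```python
import math

def survivors(start, t, mtbf):
    """Expected units still alive at time t under an exponential life model."""
    return start * math.exp(-t / mtbf)

alive_at_mtbf = survivors(1000, 100, 100.0)
print(round(alive_at_mtbf))  # 368 -- only ~37% of units reach the MTBF
```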
7. Note the exponential decay
[Chart: the same survival curve plotted out to roughly 300 hours, showing exponential decay]
8. Other Issues
Time – just because it is hours…
Between – note the duration of the failure-free period!
Failure – use the customer definition
9. History of Use
Remember Slide Rule and Mechanical Adding Machines
Victor Adding Machine
10. History of Use
Early Parts Count based on adding failure rates of components (60’s and early 70’s)
R(t) = e^(−λ1·t) · e^(−λ2·t) ··· e^(−λn·t)
R(t) = e^(−(λ1 + λ2 + ··· + λn)·t)
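A sketch of the parts-count logic: multiplying the exponential reliabilities of series components is the same as a single exponential with the summed failure rate. The component rates below are invented for illustration:

```python
import math

lambdas = [1e-5, 2e-5, 5e-6]   # hypothetical component failure rates, per hour
t = 1000.0                     # mission time, hours

# Product of individual exponential reliabilities...
r_product = math.prod(math.exp(-lam * t) for lam in lambdas)

# ...equals one exponential with the summed rate.
r_summed = math.exp(-sum(lambdas) * t)

print(r_product, r_summed)
```

This identity is why the 1960s parts-count method simply added component failure rates, and it only holds when every component is exponential.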
11. Beta = 0.63
[Weibull probability plot of the Depth Cut Response data: Weibull Distribution ML Fit, Exponential Distribution ML Fit, and 95% Pointwise Confidence Intervals; Fraction Failing vs. DEPTH.CUT]
12. Beta = 1.97
[Weibull probability plot of the test7.df data: Weibull Distribution ML Fit, Exponential Distribution ML Fit, and 95% Pointwise Confidence Intervals; Fraction Failing vs. Depth In]
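The two plots contrast a decreasing failure rate (beta = 0.63) with an increasing one (beta = 1.97); the exponential (beta = 1) assumed by MTBF fits neither. A hedged sketch of why that matters early in life, with a characteristic life eta chosen arbitrarily at 100 hours:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta) for a Weibull life distribution."""
    return math.exp(-((t / eta) ** beta))

eta = 100.0  # hypothetical characteristic life, hours
for beta in (0.63, 1.0, 1.97):
    # Survival at 10 hours: beta < 1 front-loads failures, beta > 1 delays them.
    print(f"beta={beta}: R(10h)={weibull_reliability(10.0, beta, eta):.3f}")
```

Both populations can have the same mean life yet wildly different fractions failing in the first 10 hours, which is precisely the information a single MTBF number hides.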
13. Use Reliability
R(t) is the probability that a random unit drawn from the population will still be operating by t hours
R(t) is the fraction of all units in the population that will survive by t hours
Applied Reliability, 2nd Ed., pg 29
14. The four (five) elements
Function
Duration
Probability
Environment
They all change over time
15. Use better models/distributions
Weibull: R(t) = e^(−(t/η)^β)
Type I Gumbel: R(t) = e^(−e^t)
Exponential: R(t) = e^(−λt)
Lognormal: R(t) = 1 − Φ( ln(t/T50) / σ )
Etc.
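The reliability functions listed above can be evaluated side by side. A minimal sketch, using the error function for the standard normal CDF in the lognormal case; the parameter values are invented for illustration:

```python
import math

def r_exponential(t, lam):
    return math.exp(-lam * t)

def r_weibull(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

def r_lognormal(t, t50, sigma):
    # R(t) = 1 - Phi(ln(t/T50)/sigma), with Phi via the error function
    z = math.log(t / t50) / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - phi

t = 100.0
print(r_exponential(t, 0.01))          # exponential with lambda = 0.01/hr
print(r_weibull(t, 1.0, 100.0))        # Weibull with beta = 1 reduces to the exponential
print(r_lognormal(t, 100.0, 1.0))      # lognormal with median life T50 = 100 hr
```

At t = T50 the lognormal gives R = 0.5 by definition of the median, while the exponential evaluated at its mean gives e^(−1) ≈ 0.368: two models with "100-hour" parameters, two different answers.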
16. Other metric
What is the cost of a field failure?
Warranty $ per unit shipped
Returns/field failure $ per unit shipped
What else could you use?
17. Actually…
MTBF is or should be used for repairable systems
MTTF is what I’ve been talking about
MTTF is calculated the same way as MTBF when we assume:
Negligible repair time
Interarrival times as from an independent sample of non-repairable parts
Exponential distribution for lifetime of parts
See Chap 10, Applied Reliability for more info
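For a repairable system, MTBF is computed from the times between successive failures. A small sketch with invented failure timestamps, under the slide's assumption of negligible repair time:

```python
# Failure timestamps (hours) for one repairable system -- hypothetical data.
failure_times = [120.0, 310.0, 380.0, 700.0]

# Interarrival times: gaps between successive failures (first gap from t=0).
interarrivals = [b - a for a, b in zip([0.0] + failure_times, failure_times)]

mtbf = sum(interarrivals) / len(interarrivals)
print(interarrivals, mtbf)  # [120.0, 190.0, 70.0, 320.0] 175.0
```

Only when these interarrival times behave like an independent exponential sample does this average coincide with the MTTF of a non-repairable part, which is the equivalence the slide is pointing at.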
18. Summary
MTBF
19. Where to Get More Information
Tobias, Paul A. and Trindade, David C., Applied Reliability, 2nd Ed., Chapman & Hall, New York, 1995.
“The Limitations of Using the MTTF as a Reliability Specification,” Reliability Edge, Qtr 2, 2000, Vol 1, Issue 1.
Ops A La Carte, LLC provides a full range of reliability engineering services including assessments, FMEA facilitation, HALT and ALT testing, data analysis, and customized training.
20. Presenter’s Biographical Sketch
Fred Schenkelberg, Consultant
Independent Reliability Engineering and Management Consultant for past 3 years.
Previously at HP Corporate Reliability Engineering Program for 5 years.
MS Statistics Stanford, BS Physics USMA
fms@opsalacarte.com
(408) 710-8248
www.opsalacarte.com