This document provides an overview of reliability functions for life testing in Minitab. It discusses selecting a probability distribution, testing units to failure with right or arbitrary censoring, accelerated testing with single or multiple factors, and identifying the best-fitting distribution. The agenda includes introductions to reliability, probability distribution functions, parameters, censoring, distribution identification, and a case study of wire scrape testing to identify the best-fitting distribution for cable life data.
Objectives
To understand the Weibull distribution
To be able to use a Weibull plot for failure time analysis and diagnosis
To be able to use software for data analysis
Organization
Distribution model
Parameter estimation
Regression analysis
The document discusses using Weibull probability plots to analyze light bulb lifespan data. Engineers tested bulbs by stressing them beyond normal conditions to simulate long-term use and recorded failure times. A Weibull plot of the failure percentage against time shows the characteristic life (63.2% failure point) and shape factor. Conclusions note that to guarantee bulbs for 10 years, the characteristic life must be much longer than 10 years to keep failure rates acceptably low.
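The 63.2 % definition of characteristic life follows directly from the Weibull CDF; a minimal sketch (the η and β values below are illustrative):

```python
import math

def weibull_cdf(t, eta, beta):
    """Fraction of units failed by time t under a Weibull(eta, beta) model."""
    return 1.0 - math.exp(-((t / eta) ** beta))

# At t = eta the failed fraction is 1 - 1/e ~ 63.2 %, regardless of beta.
for beta in (0.5, 1.0, 3.0):
    print(round(weibull_cdf(1000.0, eta=1000.0, beta=beta), 3))
```

Whatever the shape factor, the plot crosses 63.2 % at the characteristic life, which is why that percentile is read off the Weibull plot.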
This document discusses Weibull analysis, which is commonly used in reliability engineering. The Weibull distribution can take on many shapes depending on the value of the β parameter. Weibull analysis is useful for mechanical reliability due to its versatility. The document defines the Weibull probability density function and describes how it is used to derive reliability metrics like failure rate and mean time to failure. Examples are provided to demonstrate how Weibull analysis can be used to determine failure percentages and mean time to failure for products.
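The metrics mentioned above can be sketched from the two-parameter Weibull density; the parameter values used here are placeholders:

```python
import math

def weibull_pdf(t, eta, beta):
    """Weibull probability density function."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def reliability(t, eta, beta):
    """R(t): probability of surviving past time t."""
    return math.exp(-((t / eta) ** beta))

def hazard(t, eta, beta):
    """Failure (hazard) rate: f(t) / R(t)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def mttf(eta, beta):
    """Mean time to failure: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

print(round(mttf(1000.0, 2.0), 1))  # eta * Gamma(1.5) ~ 886.2
```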
Design for reliability (DFR) is an industry-wide practice and a philosophy of considering reliability early in product design and development, to achieve a highly reliable product at a sustainable cost. Physics of Failure (PoF) is recognized as a key approach to implementing DFR in the product design and development process. The author presents a case study illustrating how product failures can be predicted and identified early in the design phase with the help of a quantitative PoF model-based analysis tool.
Accelerated life testing plans are designed under multiple-objective considerations, with the resulting Pareto-optimal solutions classified and reduced using neural networks and data envelopment analysis, respectively.
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 begins with an introduction to the definition of reliability and other reliability characteristics and measurements. Part 2 follows with reliability calculations, estimation of failure rates, and the implications of failure rates for system maintenance and replacement. Part 3 covers the most important and practical failure time distributions, how to obtain their parameters, and how to interpret those parameters. Hands-on computation of failure rates and estimation of failure time distribution parameters is carried out using standard Microsoft Excel.
Part 3. Failure Time Distributions
1. Constant failure rate distributions
2. Increasing failure rate distributions
3. Decreasing failure rate distributions
4. Weibull Analysis – Why use Weibull?
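One reason Weibull is so widely used: all three failure-rate behaviours in the list above fall out of the Weibull hazard function depending on β. A small sketch, with an assumed η = 100 and illustrative β values:

```python
def weibull_hazard(t, eta, beta):
    """Instantaneous failure rate for a Weibull(eta, beta) model."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# beta < 1: decreasing hazard (infant mortality)
# beta = 1: constant hazard (the exponential special case)
# beta > 1: increasing hazard (wear-out)
for beta, label in ((0.5, "decreasing"), (1.0, "constant"), (3.0, "increasing")):
    h_early = weibull_hazard(10.0, 100.0, beta)
    h_late = weibull_hazard(90.0, 100.0, beta)
    print(f"beta={beta}: h(10)={h_early:.4f}, h(90)={h_late:.4f} ({label})")
```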
Accelerated Life Testing (ALT) is a lifetime prediction methodology that industry has used widely over the past decades. This method, however, is reaching its limitations as products in emerging technologies require long-term reliability. At TNO we work on technology development with long expected lifetimes, e.g. solar cells and LED lighting.
New methodologies are required to predict long-term reliability for these types of products. Methods to predict long-term reliability by extending ALT, such as HALT (Highly Accelerated Life Testing) and MEOST (Multiple Environment Over-Stress Testing), will be discussed in the presentation.
A problem in applying these methods is the definition of adequate stress profiles. In our experience, gaining benefit from accelerated testing requires insight into the Physics of Failure of a product.
Introduction to the guide of uncertainty in measurement - Maurice Maeck
This document provides an introduction to measurement uncertainty and the Guide to the Expression of Uncertainty in Measurement (GUM). It discusses key concepts such as measurement uncertainty, types of uncertainty evaluation, and the stages of the uncertainty evaluation process. The formulation stage involves expressing the measurement mathematically in terms of input quantities and their estimates and uncertainties. The calculation stage determines the measurement result, its combined uncertainty, and expanded uncertainty. Examples are provided on probability distribution functions, the central limit theorem, and the t-distribution.
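The calculation stage described above can be sketched under the usual GUM assumptions (uncorrelated inputs, coverage factor k = 2 for roughly 95 % coverage on a normal distribution); the component values here are hypothetical:

```python
import math

# Hypothetical input components: (sensitivity coefficient c_i, standard uncertainty u_i)
components = [(1.0, 0.12), (0.5, 0.30), (2.0, 0.05)]

# Combined standard uncertainty: root sum of squares of the c_i * u_i contributions
u_c = math.sqrt(sum((c * u) ** 2 for c, u in components))

# Expanded uncertainty with coverage factor k = 2
U = 2.0 * u_c
print(f"u_c = {u_c:.4f}, U = {U:.4f}")
```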
Weibull analysis is an important tool for reliability engineering. It can be used for verifying design life at the component level, comparing two designs, and performing warranty analysis.
This document provides an introduction to uncertainty measurements. It explains that there is always uncertainty involved when taking measurements as values can vary depending on factors like how, when and where something is measured. It describes the two main sources of uncertainty as random errors which are unpredictable, and systematic errors which are constant. The document then outlines the process for calculating uncertainty which involves taking multiple readings to determine standard deviation and uncertainty (Type A), and combining various uncertainty components from calibration certificates and manufacturers alongside sensitivity coefficients to determine combined and expanded uncertainty using a normal distribution with 95% confidence (Type B). It emphasizes that the process is the same for measurements in decibels or real numbers but the calculations are different.
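The Type A step described above (multiple readings, standard deviation, uncertainty of the mean) can be sketched as follows; the readings are hypothetical:

```python
import math
from statistics import mean, stdev

# Hypothetical repeated readings of the same quantity
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]

s = stdev(readings)                 # sample standard deviation
u_a = s / math.sqrt(len(readings))  # Type A standard uncertainty of the mean
print(f"mean = {mean(readings):.3f}, u_A = {u_a:.4f}")
```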
The document summarizes Rahul Singh's seminar presentation on reliability. It defines reliability as the ability of a product to perform as expected over time, with a probability between 0 and 1 under specified conditions. There are two types of failures: functional and reliability. Reliability is measured through failure rate and other metrics. Products go through debugging, chance failure, and wear-out phases as shown in the bathtub curve. Exponential and Weibull distributions model failure rates. System reliability depends on components arranged in series, parallel or both. Life testing plans include failure-terminated, time-terminated and sequential tests.
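The series/parallel relationships mentioned above can be sketched directly (the component reliabilities are illustrative):

```python
def series_reliability(rs):
    """Series system: all components must survive, so multiply reliabilities."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel_reliability(rs):
    """Parallel system: it fails only if every component fails."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

print(series_reliability([0.9, 0.95]))    # lower than the weakest component
print(parallel_reliability([0.9, 0.95]))  # higher than the best component
```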
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 begins with an introduction to the definition of reliability and other reliability characteristics and measurements. Part 2 follows with reliability calculations, estimation of failure rates, and the implications of failure rates for system maintenance and replacement. Part 3 covers the most important and practical failure time distributions, how to obtain their parameters, and how to interpret those parameters. Hands-on computation of failure rates and estimation of failure time distribution parameters is carried out using standard Microsoft Excel.
Part 1. Reliability Definitions
1. Reliability - a time-dependent characteristic
2. Failure rate
3. Mean time to failure
4. Availability
5. Mean residual life
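A sketch of how these definitions relate under a constant-failure-rate assumption; the failure counts and repair time below are made up for illustration:

```python
# Assumed test data: 4 failures observed over 20,000 unit-hours of operation
failures = 4
operating_hours = 20_000.0

lam = failures / operating_hours     # failure rate (failures per hour)
mttf = 1.0 / lam                     # mean time to failure (constant-rate model)
mttr = 50.0                          # mean time to repair (assumed)
availability = mttf / (mttf + mttr)  # steady-state availability
print(lam, mttf, round(availability, 4))
```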
This document discusses practical approaches for estimating measurement uncertainty in environmental laboratories. It describes two main approaches provided by the NORDTEST handbook: 1) combining reproducibility within the laboratory and estimates of method and laboratory bias, and 2) using reproducibility between laboratories. It then provides detailed guidance on estimating various components of uncertainty, including reproducibility within the laboratory using control samples, accounting for different matrices and concentration levels, and estimating bias using reference materials and proficiency testing. Flowcharts and examples are provided to illustrate the processes.
You’ve heard about Weibull Analysis, and want to know what it can be used for, OR you’ve used Weibull Analysis in the past, but have forgotten some of the background and uses….
This webinar looks at giving you the background of Weibull Analysis, and its use in analyzing failure modes. Starting from basics and giving examples of its uses in answering the questions:
• How many do I test, for how long?
• Is our design system wrong?
• How many more failures will I have in the next month, year, 5 years?
Sit in and listen and ask your questions … not detailed “How to” but “When & Why to”!
The document discusses various techniques for designing products for reliability, including derating components, accelerated life testing, and reliability estimation methods. It describes how reliability modeling should guide the design process from the beginning to design out potential failure mechanisms. The goal is to develop longer-lived products through an iterative approach of testing, analyzing failures, and redesigning to improve reliability. Key aspects of a reliability-focused design process include understanding failure mechanisms, developing reliability databases, and using super-accelerated life testing techniques.
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 begins with an introduction to the definition of reliability and other reliability characteristics and measurements. Part 2 follows with reliability calculations, estimation of failure rates, and the implications of failure rates for system maintenance and replacement. Part 3 covers the most important and practical failure time distributions, how to obtain their parameters, and how to interpret those parameters. Hands-on computation of failure rates and estimation of failure time distribution parameters is carried out using standard Microsoft Excel.
Part 2. Reliability Calculations
1. Use of failure data
2. Density functions
3. Reliability function
4. Hazard and failure rates
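The use of failure data to estimate the reliability and hazard functions can be sketched with hypothetical interval counts:

```python
# Hypothetical failure counts per time interval for 100 units on test
interval_failures = [12, 8, 5, 4, 3]
n_start = 100

survivors = n_start
for i, f in enumerate(interval_failures):
    reliability = (survivors - f) / n_start  # R at the end of the interval
    hazard = f / survivors                   # conditional failure rate in the interval
    survivors -= f
    print(f"interval {i}: R = {reliability:.2f}, h = {hazard:.3f}")
```

The reliability estimate is cumulative (survivors over the starting population), while the hazard is conditional on having survived to the start of each interval, which is the distinction item 4 above turns on.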
This document discusses measurement uncertainty. It defines measurement uncertainty as a parameter included with any measurement result that accounts for possible errors. It describes sources of uncertainty like sampling, storage conditions, and personal effects. The document outlines methods of calculating uncertainty using the standard deviation, and explains why assessing uncertainty is important for interpreting results and ensuring measurement quality. Measurement uncertainty is a key component of any measurement result.
Objectives
To provide an introduction to the statistical analysis of failure time data
To discuss the impact of data censoring on data analysis
To demonstrate software tools for reliability data analysis
Organization
Reliability definition
Characteristics of reliability data
Statistical analysis of censored reliability data
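One standard way to handle right-censored failure times is the Kaplan-Meier estimator; a minimal sketch with made-up (time, event) pairs, where event = 1 marks a failure and 0 a censored unit:

```python
# Hypothetical data: (time on test, event flag)
data = [(50, 1), (80, 0), (120, 1), (160, 1), (200, 0)]

at_risk = len(data)
surv = 1.0
for time, event in sorted(data):
    if event:
        surv *= (at_risk - 1) / at_risk  # step down only at observed failures
    at_risk -= 1                         # censored units also leave the risk set
    print(f"t = {time}: S(t) = {surv:.3f}")
```

Censored units never drive the survival curve down, but they shrink the risk set, which is exactly how censoring changes the analysis relative to complete data.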
E384.23604 Microindentation Hardness of Materials.pdf
This document describes Standard Test Method E384 for determining microindentation hardness of materials. It defines the scope as determining hardness using Knoop or Vickers indenters under forces from 9.8×10⁻³ to 9.8 N. The test method includes analysis of potential sources of error and requirements for machine verification. Hardness is calculated by dividing the applied force by the projected or surface area of the resulting indentation, as measured microscopically. Factors affecting the precision of results are discussed.
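The force-over-area calculation reduces to the familiar hardness-number formulas when the standard indenter geometries are assumed; the test values below are hypothetical:

```python
def vickers_hv(force_kgf, diag_mm):
    """HV = 1.8544 * F / d^2, with F in kgf and mean diagonal d in mm."""
    return 1.8544 * force_kgf / diag_mm ** 2

def knoop_hk(force_kgf, long_diag_mm):
    """HK = 14.229 * F / d^2, with F in kgf and long diagonal d in mm."""
    return 14.229 * force_kgf / long_diag_mm ** 2

# e.g. a 500 gf (0.5 kgf) Vickers indent with a 60 micrometre mean diagonal
print(round(vickers_hv(0.5, 0.060), 1))
```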
This seminar session provides an overview of the major aspects of reliability engineering. It covers a general introduction to reliability engineering (the definition of reliability, the function of reliability engineering, a brief history of reliability, etc.); reliability basics (metrics used in reliability, commonly used probability distributions, the bathtub curve, reliability demonstration test planning, confidence intervals, Bayesian statistics in reliability, stress-strength interference theory, etc.); accelerated life testing (ALT) (types of ALT, the Arrhenius model, the inverse power law model, the Eyring model, the temperature-humidity model, etc.); reliability growth (reliability-based growth models, the MTBF-based growth model, etc.); systems reliability and availability (reliability block diagrams, non-repairable and repairable systems, reliability modeling of series, parallel, standby, and complex systems, load-sharing reliability, reliability allocation, system availability, Monte Carlo simulation, etc.); and degradation-based reliability (an introduction, and how it differs from traditional reliability).
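The Arrhenius model named above gives a temperature acceleration factor between use and stress conditions; a sketch with an assumed activation energy:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor AF = exp((Ea/k) * (1/T_use - 1/T_stress)), temps in Celsius."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# e.g. an assumed Ea = 0.7 eV, use at 55 C, stress test at 125 C
print(round(arrhenius_af(0.7, 55.0, 125.0), 1))
```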
The document discusses design for reliability (DFR) topics including the need for DFR, the DFR process, terminology, Weibull plotting, system reliability, DFR testing, and accelerated testing. It provides details on the DFR process, common reliability terminology such as reliability, failure rate, mean time to failure, and the bathtub curve. It also explains the exponential distribution and Weibull plotting, which are important reliability analysis tools.
This document discusses measurement system analysis (MSA) and gauge repeatability and reproducibility (R&R) studies. MSA is used to evaluate different aspects of a measurement system like bias, linearity, stability, repeatability and reproducibility. R&R studies focus specifically on repeatability and reproducibility. Key terms are defined, including bias, repeatability, reproducibility, stability, linearity, attribute R&R parameters like effectiveness, misses, false alarms, and bias, and how to analyze variable measurement data using analysis of variance. Guidelines for acceptable levels of R&R parameters are also provided.
Estimation of Measurement Uncertainty in Labs: a requirement for ISO 17025 Ac... - PECB
Knowledge of the uncertainty of measurement of testing and calibration results is fundamentally important for laboratories, their clients and all institutions using these results for comparative purposes. Uncertainty of measurement is a very important metric of the quality of a result or a testing method.
Main points covered:
• To introduce the basic concepts related to measurement results and measurement uncertainty
• Explain the relevance of these concepts to chemical analysis data
• Introduce mathematical concepts, uncertainty sources and important approaches for estimation of measurement uncertainty
Presenter:
This webinar was presented by Dotun Bolade, who is an Analytical Chemist/Environmental Scientist by training and practice with years of experience in laboratory instrumentation and automation. For him, ISO management systems have become second nature having worked in environments where ISO 9001, 14001, 18001 and 17025 have been fully implemented. He is a Certified PECB ISO/IEC 17025 Lead Assessor.
Link of the recorded session published on YouTube: https://youtu.be/AOpFou7_FVI
The document discusses Failure Mode and Effects Analysis (FMEA) and how to conduct a Process FMEA, including defining the scope, identifying potential failures and their causes and effects, and establishing current process controls. It provides examples and templates to help participants understand how to properly perform a Process FMEA. The goal is to enable participants to effectively use FMEA to achieve robust capable designs and processes.
This document provides standards for reference radiographs used to evaluate steel castings up to 2 inches thick. It includes:
1. An overview of the scope and purpose of the reference radiographs, which illustrate various types and severity levels of discontinuities commonly found in steel castings.
2. Descriptions of the categories and types of discontinuities represented, including gas porosity, inclusions, shrinkage, cracks, tears, and mottling. The discontinuities are graded on a scale of 1 to 5 based on their quantity, size, and distribution.
3. Procedures for how to use the reference radiographs to evaluate production radiographs and determine whether castings meet specified radiographic requirements.
This document is a reference manual for measurement systems analysis (MSA) published by the Automotive Industry Action Group (AIAG). It is copyrighted and was licensed to Magna International. The manual provides guidelines for evaluating the capability and accuracy of measurement systems used in automotive manufacturing. It describes methods for determining the stability, bias, linearity, repeatability, and reproducibility of variable measurement systems as well as methods for attribute measurement systems. The guidelines are intended to help users understand the factors that influence measurement systems and determine if their level of variation is acceptable for use.
This presentation will cover the basics of, and differences between, self-contained and transformer-rated (instrument-rated) meter sites. Also discussed are transformer-rated meter forms, test switches and CTs, Blondel's theorem and why it matters to metering, meter accuracy testing in the field, checking the health of your CTs and PTs, and site verification (not just meter testing).
The document provides guidance on taking current attenuation measurements to locate defects on pipelines:
- Measurements should be taken at regular intervals, such as every 50 feet, and both peak and null readings should be recorded to identify any anomalies greater than a 5% change.
- An A-frame accessory is used with the locator in ACVG mode to pinpoint defects by tracking the direction of the voltage gradient around coating defects.
- Readings need to be normalized based on the pipeline current level by using a provided formula to account for the effect of current level on the voltage reading.
- The document introduces the new PCMx system from Radiodetection for pipeline surveying, which features a lighter
3. Procedures for how to use the reference radiographs to evaluate production radiographs and determine whether castings meet specified radi
This document is a reference manual for measurement systems analysis (MSA) published by the Automotive Industry Action Group (AIAG). It is copyrighted and was licensed to Magna International. The manual provides guidelines for evaluating the capability and accuracy of measurement systems used in automotive manufacturing. It describes methods for determining the stability, bias, linearity, repeatability, and reproducibility of variable measurement systems as well as methods for attribute measurement systems. The guidelines are intended to help users understand the factors that influence measurement systems and determine if their level of variation is acceptable for use.
This presentation will cover the basics and differences between self-contained and transformer or instrument rated meter sites. Also discussed are transformer rated meter forms, test switches and CT's, Blondel's Theorem and why this matters to metering, meter accuracy testing in the field, checking the health of your CT's and PT's, and Site Verification (and not just meter testing).
The document provides guidance on taking current attenuation measurements to locate defects on pipelines:
- Measurements should be taken at regular intervals, such as every 50 feet, and both peak and null readings should be recorded to identify any anomalies greater than a 5% change.
- An A-frame accessory is used with the locator in ACVG mode to pinpoint defects by tracking the direction of the voltage gradient around coating defects.
- Readings need to be normalized based on the pipeline current level by using a provided formula to account for the effect of current level on the voltage reading.
- The document introduces the new PCMx system from Radiodetection for pipeline surveying, which features a lighter
2 Parameter vs. 3 Parameter Weibull with a Cable Flex TestRob Schubert
This presentation compares 2-parameter and 3-parameter Weibull analysis on cable flex test data. Using both real cable test data and generated Weibull data, the presenters found that the 3-parameter model generally provided a better fit. With larger sample sizes, the confidence intervals on the threshold parameter decreased. While 2-parameter analysis often resulted in a steeper estimated slope, 3-parameter analysis was found to better characterize the cable data, especially with smaller sample sizes. The presenters concluded that 3-parameter Weibull analysis is preferable for analyzing cable flex test results.
CS Analyst™ allows you to rapidly compute voltage and current induced by energy coupled to power and signal wiring by low frequency electromagnetic fields and injected interference.
The document provides an overview of partial discharge (PD) measurement procedures using an online wave tracking system (OWTS). It discusses several key steps:
1. Ensuring a good measurement connection is critical to avoid noise and detect PDs. Proper earthing and shielding of cables is important.
2. Calibration determines the cable's propagation velocity for PD localization and can identify joints. It is important for accurate results.
3. Measurement involves systematically increasing voltage from 0kV to 1.3U0 while recording PD inception (PDIV) and extinction (PDEV) voltages. Multiple readings are averaged.
4. The quick overview automatically maps PDs but manual evaluation is still needed to analyze concentrations and
In this presentation, the topics covered include: differences between self contained and transformer or instrument rated meter sites; transformer rated meter forms; test switches and CT's; Blondel's Theorem and why this matters to us in metering; meter accuracy testing in the field; checking the health of your CT's and PT's; and site verification.
This document provides information on Fluke clamp meters for various applications. It describes the job functions and applications for different types of electricians and maintenance professionals. For each job function, it recommends certain Fluke clamp meter models based on their key features. It provides specifications for different clamp meter models to help users select the right one for their needs. In the last section, it compares the features of the clamp meter models in a table for easy reference.
This presentation covers the differences between self contained and transformer or instrument rated meter sites, transformer rated meter forms, test switches and CT's, Blondel's Theorem and why it matters to metering, meter accuracy testing in the field, checking the health of your CT's and PT's, and site verification (not just meter testing).
ECC EST-300 Series Next-Generation Hipot Tester - High Output Rating As Compa...Aimil Ltd
The EST-300 next-generation Hipot tester combines industry-leading compact size with Unbeatable performance. This new series economically integrates AC, DC withstand, and
Insulation resistance into a single solution. The Hipot tester is highly portable and fits readily in any spatially limited testing environment. The EST-300 series provides a robust set of features. That includes high output rating, ARC detection, ramp-high, charge-low, and fast discharge to enhance overall testing efficiency and safety by preventing the common errors and detecting any dielectric breakdowns during tests on the DUT. This Product Originates from Extech Electronics Co., Taiwan and Aimil Ltd. is their major distributors and suppliers in India.
A presenation on a technique that was developed to correlate Adjecent channnel rejection of a VHF reiciever to a new method that involves a much simpler technique that can be deployed on production ATE testers
Pemesanan produk, hubungi PT Siwali Swantika melalui WhatsApp, Jakarta : 0811-1519-949 (chat only) | Surabaya : 0811-1519-948 (chat only). Kunjungi website kami di www.siwali.com, untuk detail informasi spesifikasi dan model alat.
This presentation discusses the differences between self-contained and transformer or instrument rated meter sites; transformer rated meter forms; test switches and CTs; meter accuracy testing in the field; checking the health of your CTs and PTs; and Site Verification. This presentation was given at the MEUA Meter School. 03/03/20
This document provides an overview of extending the reach of 100 Gb/s multimode parallel optic links. It begins with introductions to fiber optics and standards. Test results are presented for transmitter and receiver modules showing performance over 300m of fiber, exceeding the 100m standard. Specifically, the transmitter results show minimal degradation in spectral width, mask margin, and jitter over fiber. Receiver results demonstrate meeting the mask, jitter and sensitivity specifications over 300m with worst-case transmitters. The conclusions state that 100GbE VCSEL modules can successfully operate over longer distances without significant performance impacts.
This document discusses best practices for meter and instrument transformer testing in an Advanced Metering Infrastructure (AMI) system. It addresses the need to test meters and transformers for accuracy upon installation, return to service, and periodically while in service. Site verification testing is also recommended to check for wiring errors and ensure meters and transformers are properly sized. The document emphasizes that transformer-rated services, which represent a small portion of customers but a large portion of revenue, should be a priority for meter testing resources given their financial impact. AMI data can help identify transformer-rated services for further evaluation and testing.
This document provides steps for properly preparing coaxial cable and terminating connectors for use in CCTV systems. It discusses that transmission media such as cable, connectors and installation methods account for over 65% of failures in CCTV systems. The document then provides 10 detailed steps for stripping cable, attaching connectors, testing connections, and labeling cables to help ensure optimal video quality and troubleshooting. Key steps include properly stripping cable, flaring the braid, crimping the connector, checking resistance values during testing, and labeling both ends of cables.
This presentation will cover the basics and differences between self-contained and transformer or instrument rated meter sites. Also discussed are transformer rated meter forms, test switches and CT's, Blondel's Theorem and why this matters to metering, meter accuracy testing in the field, checking the health of your CT's and PT's, and Site Verification (and not just meter testing).
Data Teknis Gossen Metrawatt Ground Tester : GEOHM PRO & GEOHM XTRAPT. Siwali Swantika
Pemesanan produk, hubungi PT Siwali Swantika melalui WhatsApp, Jakarta : 0811-1519-949 (chat only) | Surabaya : 0811-1519-948 (chat only). Kunjungi website kami di www.siwali.com, untuk detail informasi spesifikasi dan model alat.
Similar to Overview of life testing in Minitab (20)
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
2. Introduction
Shure Inc. “The Most Trusted Audio Brand Worldwide”
Industry: Consumer and professional audio electronics
Founded: 1925
Products: Microphones, wireless microphone systems, headphones and earphones,
mixers, conferencing systems
Rob Schubert - Corporate Quality/Reliability Engineer at Shure Inc.
3. Agenda
• Intro to Reliability
• Selecting a distribution
• Testing to failure – Right censoring
• Testing to failure – Arbitrary censoring
• Accelerated testing – Single Factor
• Accelerated testing – Multiple factors
• Summary
• Questions
4. Quick Intro to Reliability
• Reliability = quality over time
• Field failures
• Failure modes
• Testing to replicate failure modes
• Measuring time to failure
• Attempt to predict failure rate
5. Probability Distribution Function (PDF)
• A function that describes the relative likelihood for
this random variable to take on a given value
• Integral over a range = probability of the variable
falling within that range
Quick Intro to
Reliability
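The "integral over a range = probability" point can be checked numerically. This is a minimal sketch in Python/SciPy (not from the slides), using a hypothetical Weibull life distribution:

```python
# Sketch (not from the slides): the integral of a PDF over a range
# equals the probability of the random variable falling in that range.
from scipy import stats
from scipy.integrate import quad

# Hypothetical Weibull life distribution: shape = 1.5, scale = 1000 cycles
dist = stats.weibull_min(c=1.5, scale=1000)

# Integrate the PDF from 500 to 1500 cycles...
area, _ = quad(dist.pdf, 500, 1500)

# ...and compare with the CDF difference over the same range.
prob = dist.cdf(1500) - dist.cdf(500)
print(round(area, 6), round(prob, 6))  # the two values agree
```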
7. Different ways to
visualize the data
Instantaneous probability (PDF)
Cumulative probability (CDF)
Straight-line cumulative probability: transformed so that the fitted distribution line forms a straight line
Quick Intro to
Reliability
8. Parameters
Shape = Slope: slope of the line in a cumulative straight
line probability plot
Quick Intro to
Reliability
10. Parameters
Threshold / Location: the 3rd parameter – shifts the zero point
Forced to zero in a:
• 2-Parameter Lognormal – normally called just “Lognormal”
• 1-Parameter Exponential – normally called just “Exponential”
• 2-Parameter Weibull – normally called just “Weibull”
• 2-Parameter Gamma – normally called just “Gamma”
• 2-Parameter Loglogistic – normally called just “Loglogistic”
(not represented on the straight-line cumulative probability plot)
Quick Intro to
Reliability
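The effect of freeing or fixing the threshold can be sketched in Python/SciPy (not Minitab; the data here are hypothetical, generated with a built-in 200-cycle threshold). Fixing `loc=0` gives the usual 2-parameter Weibull; leaving it free estimates the threshold:

```python
# Sketch: 2-parameter vs 3-parameter Weibull fit on hypothetical data.
import numpy as np
from scipy import stats

# Hypothetical failure times with a 200-cycle threshold before any failures
data = 200 + stats.weibull_min.rvs(2.0, scale=800, size=50,
                                   random_state=np.random.default_rng(1))

# 2-parameter fit: threshold (loc) forced to zero
c2, loc2, scale2 = stats.weibull_min.fit(data, floc=0)

# 3-parameter fit: threshold (loc) estimated from the data
c3, loc3, scale3 = stats.weibull_min.fit(data)

print(f"2-param: shape={c2:.2f}, threshold={loc2}, scale={scale2:.0f}")
print(f"3-param: shape={c3:.2f}, threshold={loc3:.0f}, scale={scale3:.0f}")
```

With the threshold forced to zero, the fit has to absorb the 200-cycle shift into the shape and scale; the free-threshold fit can place the zero point near the true offset.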
11. Testing to Failure:
What is censoring?
• Some tests cannot run to the end due to time constraints
= Right-censored data, or suspended data
• Some tests cannot be continuously monitored, so a failure is only known to have occurred between inspections
= Arbitrary (interval) censoring
Quick Intro to
Reliability
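To see how right censoring enters a fit, here is a hand-rolled maximum-likelihood sketch in Python/SciPy (Minitab does this internally; the data below are hypothetical, assuming a Weibull model). Failed units contribute the log-PDF at their failure time; units still running when the test stops contribute the log of the survival function:

```python
# Sketch: Weibull MLE with right-censored (suspended) observations.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hypothetical data: times in cycles; event=1 is a failure, event=0 means
# the unit was still alive when the test was stopped (right censored).
times = np.array([410.0, 540.0, 660.0, 720.0, 900.0, 1000.0, 1000.0, 1000.0])
event = np.array([1, 1, 1, 1, 1, 0, 0, 0])

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    # Failures: density at the observed time
    ll = stats.weibull_min.logpdf(times[event == 1], shape, scale=scale).sum()
    # Suspensions: probability of surviving past the censoring time
    ll += stats.weibull_min.logsf(times[event == 0], shape, scale=scale).sum()
    return -ll

res = minimize(neg_log_lik, x0=[1.0, 800.0], method="Nelder-Mead")
shape_hat, scale_hat = res.x
print(f"shape={shape_hat:.2f}, scale={scale_hat:.0f}")
```

Dropping the `logsf` term (i.e. ignoring the suspensions) would bias the scale estimate low, which is why censored points must stay in the likelihood.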
14. • What do you know about your data?
Reliability data starts at time=0
• Is it bound?
Bound by zero (or something greater)
• Any historical references?
Weibull, Lognormal, Gumbel (extreme value)
• Finally: does it look like it fits?
Distribution
ID
15. Statistics to look at:
• Anderson-Darling statistic: how well your data fit the PDF
• Smaller = better fit
• p-value of the goodness-of-fit test: does your data fit the PDF?
• H0 = the data fit the distribution
• “If p is low, the null must go!” i.e. if p < 0.05, the data do not fit the distribution
• Likelihood ratio test (p-value) – does the 3-parameter model fit better?
• H0 = the 2-parameter model fits better
• i.e. if p < 0.05, the 3-parameter model fits better
Distribution
ID
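Outside Minitab, the same idea can be sketched with `scipy.stats.anderson` (a hypothetical dataset; note SciPy reports critical values rather than a p-value, but the decision rule is the same — a smaller statistic means a better fit):

```python
# Sketch: Anderson-Darling goodness-of-fit test via SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(loc=1000, scale=120, size=40)  # hypothetical cycle counts

result = stats.anderson(data, dist="norm")
print("AD statistic:", round(result.statistic, 3))
# Critical values at the 15%, 10%, 5%, 2.5%, and 1% significance levels
print(dict(zip(result.significance_level, result.critical_values)))

# If the statistic exceeds the 5% critical value, reject H0
# (H0: the data follow the fitted normal distribution).
reject_at_5pct = bool(result.statistic > result.critical_values[2])
print("Reject normality at 5%:", reject_at_5pct)
```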
16. Example:
Wire Scrape Test
• Cable gets frayed in the field
• Need to replicate
• Scrape at several points
• Measure cycles until breakthrough
Distribution
ID
17. 3 steps to identify the distribution
1. Look at graphs
2. Look at warnings
3. Look at Anderson-Darling numbers
Distribution
ID
20. [Minitab Distribution ID probability plots for “Cable scrape 1.0Kg” (95% CI), one panel per candidate distribution; panels judged poor fits are marked with an x.]
Goodness of Fit Test
Normal: AD = 0.212, P-Value = 0.833
Box-Cox Transformation (lambda = 1): AD = 0.212, P-Value = 0.833
Lognormal: AD = 0.489, P-Value = 0.198
3-Parameter Lognormal: AD = 0.224, P-Value = *
Exponential: AD = 5.590, P-Value < 0.003
2-Parameter Exponential: AD = 3.064, P-Value < 0.010
Weibull: AD = 0.205, P-Value > 0.250
3-Parameter Weibull: AD = 0.202, P-Value > 0.500
Smallest Extreme Value: AD = 0.270, P-Value > 0.250
Largest Extreme Value: AD = 0.556, P-Value = 0.152
Gamma: AD = 0.369, P-Value > 0.250
3-Parameter Gamma: AD = 0.271, P-Value = *
Logistic: AD = 0.213, P-Value > 0.250
Loglogistic: AD = 0.355, P-Value > 0.250
3-Parameter Loglogistic: AD = 0.213, P-Value = *
Distribution
ID
21. Note the Warnings!
These are probably poor choices
3-Parameter Lognormal
* WARNING * Newton-Raphson algorithm has not converged after 50 iterations.
* WARNING * Convergence has not been reached for the parameter estimates
criterion.
2-Parameter Exponential
* WARNING * Variance/Covariance matrix of estimated parameters does not exist.
The threshold parameter is assumed fixed when calculating
confidence intervals.
3-Parameter Gamma
* WARNING * Newton-Raphson algorithm has not converged after 50 iterations.
* WARNING * Convergence has not been reached for the parameter estimates
criterion.
3-Parameter Loglogistic
* WARNING * Newton-Raphson algorithm has not converged after 50 iterations.
* WARNING * Convergence has not been reached for the parameter estimates
criterion.
Distribution
ID
22. [The same probability plots as slide 20, annotated by visual inspection: poorly fitting panels marked with an x.]
Last visual inspection kept:
Normal
3-param. lognormal
2/3-param. Weibull
Smallest Extreme Value
3-param. gamma
Logistic
Distribution
ID
23. Goodness of Fit Test

Distribution              AD      P        LRT P
Normal                    0.212   0.833
Box-Cox Transformation    0.212   0.833
Lognormal                 0.489   0.198
3-Parameter Lognormal     0.224   *        0.061
Exponential               5.590   <0.003
2-Parameter Exponential   3.064   <0.010   0.000
Weibull                   0.205   >0.250
3-Parameter Weibull       0.202   >0.500   0.808
Smallest Extreme Value    0.270   >0.250
Largest Extreme Value     0.556   0.152
Gamma                     0.369   >0.250
3-Parameter Gamma         0.271   *        0.229
Logistic                  0.213   >0.250
Loglogistic               0.355   >0.250
3-Parameter Loglogistic   0.213   *        0.147

Likelihood ratio test: if the p-value is small (<0.05), the 3-parameter model fits better
Distribution
ID
26. Least Squares vs Maximum Likelihood
Least squares (LSXY)
• Better graphical display
• Better for small samples without censoring
Maximum likelihood (MLE)
• More precise than least squares (XY)
• Better for heavy censoring
• MLE allows you to perform an analysis when there
are no failures
Distribution ID via
Reliability/Survival
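The contrast between the two estimation methods can be sketched in Python/SciPy on a small hypothetical dataset, assuming a Weibull model: LSXY regresses the linearized median ranks on the probability plot, while MLE maximizes the likelihood directly (this is a sketch of the general techniques, not Minitab's exact implementation):

```python
# Sketch: least squares (LSXY) vs maximum likelihood (MLE) Weibull estimates.
import numpy as np
from scipy import stats

# Hypothetical complete (uncensored) failure data, in cycles
data = np.sort(np.array([320.0, 450.0, 510.0, 640.0, 700.0, 820.0, 950.0, 1100.0]))
n = len(data)

# --- Least squares (LSXY): fit a line on the Weibull probability plot ---
ranks = np.arange(1, n + 1)
median_ranks = (ranks - 0.3) / (n + 0.4)         # Benard's approximation
x = np.log(data)
y = np.log(-np.log(1.0 - median_ranks))           # Weibull linearization
slope, intercept, *_ = stats.linregress(x, y)
shape_ls = slope                                  # slope of the line = shape
scale_ls = np.exp(-intercept / slope)

# --- Maximum likelihood (MLE), threshold fixed at zero ---
shape_ml, _, scale_ml = stats.weibull_min.fit(data, floc=0)

print(f"LSXY: shape={shape_ls:.2f}, scale={scale_ls:.0f}")
print(f"MLE:  shape={shape_ml:.2f}, scale={scale_ml:.0f}")
```

With a small, complete sample like this the two methods land close together; they diverge more under heavy censoring, where MLE is preferred.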
27. [Minitab probability plots for “Cable scrape 1kg” (ML Estimates – Complete Data), one panel per candidate distribution; poorly fitting panels marked with an x.]
Anderson-Darling (adj):
Smallest Extreme Value: 0.874
Normal: 0.771
Logistic: 0.738
3-Parameter Weibull: 0.771
3-Parameter Lognormal: 0.771
2-Parameter Exponential: 3.745
3-Parameter Loglogistic: 0.738
Weibull: 0.771
Lognormal: 0.932
Exponential: 6.208
Loglogistic: 0.750
Distribution ID via
Reliability/Survival
28. Review the Warnings Again
3-Parameter Lognormal
* WARNING * Newton-Raphson algorithm has not converged after 50 iterations.
* WARNING * Convergence has not been reached for the parameter estimates
criterion.
2-Parameter Exponential
* WARNING * Variance/Covariance matrix of estimated parameters does not exist.
The threshold parameter is assumed fixed when calculating
confidence intervals.
3-Parameter Loglogistic
* WARNING * Newton-Raphson algorithm has not converged after 50 iterations.
* WARNING * Convergence has not been reached for the parameter estimates
criterion.
Still poor choices
29. Probability Plot for Cable scrape 1kg (review)
ML Estimates - Complete Data
[Figure: the same probability plots as slide 27, with the poor fits crossed out]
Last visual inspection kept:
Weibull
3-Parameter Weibull
Smallest Extreme Value
Logistic
Normal
30. Anderson-Darling (adj) Statistics
Distribution                 AD (adj)
Weibull 0.771
Lognormal 0.932
Exponential 6.208
Loglogistic 0.750
3-Parameter Weibull 0.771
3-Parameter Lognormal 0.771
2-Parameter Exponential 3.745
3-Parameter Loglogistic 0.738
Smallest Extreme Value 0.874
Normal 0.771
Logistic 0.738
Note:
• Uses the adjusted Anderson-Darling statistic*
• No P-value provided
• No likelihood ratio test
• BUT it can use censored data
*Even when the data are uncensored, the adjusted Anderson-Darling statistic will not
necessarily yield the same result as the non-adjusted Anderson-Darling statistic for small
samples. However, for large sample sizes, the disparity between the two approaches vanishes.
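For uncensored data, the plain (unadjusted) Anderson-Darling statistic can be computed outside Minitab as well. A minimal sketch assuming scipy is available, run on synthetic data (not the cable-scrape sample); note that `scipy.stats.anderson` is not Minitab's adjusted AD and does not handle censoring:

```python
# Sketch: rank a few candidate distributions by the plain (unadjusted,
# uncensored) Anderson-Darling statistic as scipy computes it.
# Synthetic data -- not the cable-scrape measurements from this deck.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = stats.norm.rvs(loc=900, scale=150, size=30, random_state=rng)

results = {}
for dist in ("norm", "logistic", "gumbel_l"):   # gumbel_l = smallest extreme value
    results[dist] = stats.anderson(data, dist=dist).statistic

best = min(results, key=results.get)            # smaller AD = better fit
```

As on the slide, the statistic is compared across candidates rather than tested against a P-value.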
31. Table of Percentiles
Standard 95% Normal CI
Distribution Percent Percentile Error Lower Upper
Weibull 1 376.505 68.5763 263.471 538.032
Lognormal 1 464.956 50.8249 375.289 576.047
Exponential 1 8.97312 1.91308 5.90836 13.6276
Loglogistic 1 445.882 60.9477 341.090 582.869
3-Parameter Weibull 1 373.325* 155.365 68.8161 677.835
3-Parameter Lognormal 1 393.301* 88.0037 220.817 565.785
2-Parameter Exponential 1 396.241 1.07483 394.140 398.353
3-Parameter Loglogistic 1 320.697 112.297 100.599 540.795
Smallest Extreme Value 1 123.762 162.753 -195.228 442.752
Normal 1 392.831 88.2110 219.940 565.721
Logistic 1 320.018 112.575 99.3745 540.661
Table of MTTF
Standard 95% Normal CI
Distribution Mean Error Lower Upper
Weibull 894.250 45.098 810.087 987.16
Lognormal 895.154 51.723 799.309 1002.49
Exponential 892.818 190.349 587.877 1355.94
Loglogistic 917.307 51.467 821.783 1023.94
3-Parameter Weibull 894.247* 45.156 805.742 982.75
3-Parameter Lognormal 892.885* 45.828 803.065 982.71
2-Parameter Exponential 892.816 106.945 705.995 1129.07
3-Parameter Loglogistic 898.124 47.193 805.628 990.62
Smallest Extreme Value 887.984 51.629 786.794 989.18
Normal 892.818 45.822 803.009 982.63
Logistic 898.053 47.191 805.560 990.55
Other Output:
*note: the 3rd parameter (threshold) is often inaccurate for sample
sizes below 10; a sample of 20 or more is recommended
33. Which distribution ID to use?
• Recommend Stat > Quality Tools > Individual
Distribution Identification, since it includes P values
and likelihood ratio tests
• Unless you have censored data
35. Testing to Failure -
Remember Censoring?
• Some tests cannot run to completion due to
time constraints
= right-censored (suspended) data
• Some tests cannot be continuously monitored
= arbitrary censoring
36. Testing to Failure
• Based on MIL-DTL-915G
• Checking for continuity on 10-20
stations
• Failure mode – open on any
conductor
• Each unit is independently measured
every millisecond and cycles are
recorded
Cable Flex - Right Censored
37. cycles susp qty
218457 s 3
78014 n 1
124859 n 1
36657 n 1
109032 n 1
58111 n 1
169316 n 1
204050 n 1
Testing to Failure
Cable Flex - Right Censored
38. Testing to Failure
Cable Flex - Right Censored
(same data table as slide 37)
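Minitab's ML estimates for these right-censored data (shape 1.62135, scale 191314, shown on slide 42) can be reproduced by maximizing the censored Weibull log-likelihood directly: failures contribute log f(t), suspensions contribute log S(t). A minimal sketch assuming numpy and scipy are available; the function name is illustrative, not Minitab's:

```python
# Sketch: maximum-likelihood Weibull fit for the right-censored cable-flex data.
# Failures contribute log f(t); suspensions contribute log S(t) = -(t/scale)^shape.
import numpy as np
from scipy.optimize import minimize

failures = np.array([78014, 124859, 36657, 109032, 58111, 169316, 204050], float)
suspended = np.array([218457, 218457, 218457], float)   # 3 units still running

def neg_log_lik(params):
    log_k, log_lam = params                      # optimize in log space for stability
    k, lam = np.exp(log_k), np.exp(log_lam)
    log_f = (np.log(k / lam) + (k - 1) * np.log(failures / lam)
             - (failures / lam) ** k)            # Weibull log-density at failures
    log_s = -(suspended / lam) ** k              # log survival at suspensions
    return -(log_f.sum() + log_s.sum())

res = minimize(neg_log_lik, x0=[np.log(1.5), np.log(150000.0)], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
```

The optimum should land close to the shape and scale Minitab reports for this data set.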
40. Least Squares vs Maximum Likelihood
Least squares (LSXY)
• Better graphical display
• Better for small samples without censoring
Maximum likelihood (MLE)
• More precise than least squares (XY)
• Better for heavy censoring
• MLE allows you to perform an analysis when there
are no failures
Testing to Failure
41. Cumulative Failure Plot for cycles
Censoring Column in susp - ML Estimates
Weibull - 95% CI
[Figure: Percent (0-100) vs cycles (0-500000)]
Table of Statistics:
Shape 1.62135   Scale 191314
Mean 171324     StDev 108311
Median 152607   IQR 145293
Failure 7       Censor 3
AD* 22.416
Testing to Failure
Cable Flex - Right Censored
42. Parameter Estimates
Standard 95.0% Normal CI
Parameter Estimate Error Lower Upper
Shape 1.62135 0.533550 0.850680 3.09021
Scale 191314 44798.7 120900 302738
Log-Likelihood = -91.724
Goodness-of-Fit
Anderson-Darling (adjusted) = 22.416
Characteristics of Distribution
Standard 95.0% Normal CI
Estimate Error Lower Upper
Mean(MTTF) 171324 40868.3 107342 273444
Standard Deviation 108311 45660.1 47406.2 247462
Median 152607 36459.0 95546.4 243743
First Quartile(Q1) 88719.9 29101.9 46645.5 168746
Third Quartile(Q3) 234013 58343.0 143556 381467
Interquartile Range(IQR) 145293 53304.7 70787.5 298216
Testing to Failure
Cable Flex - Right Censored
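The Characteristics of Distribution values above follow directly from the shape and scale estimates, via the standard Weibull identities mean = scale·Γ(1 + 1/shape) and percentile t_p = scale·(−ln(1−p))^(1/shape). A quick check against the table:

```python
# Check the reported Weibull characteristics from shape and scale alone:
# mean = scale * Gamma(1 + 1/shape), percentile t_p = scale * (-ln(1-p))**(1/shape).
import math

shape, scale = 1.62135, 191314.0
mean = scale * math.gamma(1.0 + 1.0 / shape)        # table: 171324
median = scale * math.log(2.0) ** (1.0 / shape)     # table: 152607
q1 = scale * (-math.log(0.75)) ** (1.0 / shape)     # table: 88719.9
q3 = scale * (-math.log(0.25)) ** (1.0 / shape)     # table: 234013
```

Each value reproduces the corresponding table entry to within rounding.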
45. Drop on the connector
10 drops at 40", then
10 drops at 50", then
10 drops at 60", etc.,
until failure
Testing to Failure
Stepped Drop Test - Arbitrary Censoring
46. Tabulated Data
Unit  Drops before failure  Drops before failure  Drops before failure  Total inches  Total inches
      @ 40 inch (if <10)    @ 50 inch (if <10)    @ 60 inch (if <10)    Passed        Failed
1     10                    10                    6                     1260          1320
2     10                    4                     -                     600           650
3     10                    10                    2                     1020          1080
4     10                    8                     -                     800           850
5     10                    7                     -                     750           800
6     10                    10                    2                     1020          1080
Testing to Failure
Stepped Drop Test - Arbitrary Censoring
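The two "Total inches" columns follow mechanically from the drop counts: each completed step contributes height × drops, and the unit fails on the next drop at the final height. A small sketch (the helper name `total_inches` is illustrative):

```python
# Sketch: derive "Total inches Passed/Failed" from the stepped drop counts.
# Each (height_in, drops) pair is a completed step; the unit fails on the
# drop AFTER the last one survived, so Failed = Passed + last step height.
def total_inches(steps):
    passed = sum(height * drops for height, drops in steps)
    failed = passed + steps[-1][0]   # one more drop at the final height
    return passed, failed

unit1 = total_inches([(40, 10), (50, 10), (60, 6)])   # unit 1 in the table
unit2 = total_inches([(40, 10), (50, 4)])             # unit 2 in the table
```

Running these reproduces the first two rows of the table: (1260, 1320) and (600, 650).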
47. Testing to Failure
Stepped Drop Test - Arbitrary Censoring
Total inches  Total inches
Passed        Failed
1260          1320
600           650
1020          1080
800           850
750           800
1020          1080
48. Testing to Failure
Stepped Drop Test - Arbitrary Censoring
(same two columns as slide 47)
49. Probability Plot and Cumulative Failure Plot for Total inches Passed
Arbitrary Censoring - ML Estimates
Weibull - 95% CI
[Figure: probability plot (Percent vs Total inches Passed, 200-2000) and
cumulative failure plot (Percent 0-100, Total inches Passed 500-1500)]
Table of Statistics:
Shape 4.76974   Scale 1022.65
Mean 936.407    StDev 223.914
Median 947.015  IQR 307.573
AD* 2.852
Testing to Failure
Stepped Drop Test - Arbitrary Censoring
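With arbitrary (interval) censoring, each failure is only known to lie between its Total inches Passed and Total inches Failed, so each unit contributes log[F(b) − F(a)] to the likelihood. A minimal sketch assuming numpy and scipy are available; the fitted values should land near the shape ≈ 4.77 and scale ≈ 1022.65 Minitab reports for these intervals:

```python
# Sketch: interval-censored Weibull MLE for the stepped drop test.
# Each failure is only known to lie in (inches passed, inches failed],
# so each unit contributes log[F(b) - F(a)] to the likelihood.
import numpy as np
from scipy.optimize import minimize

lo = np.array([1260, 600, 1020, 800, 750, 1020], float)   # Total inches Passed
hi = np.array([1320, 650, 1080, 850, 800, 1080], float)   # Total inches Failed

def weibull_cdf(t, k, lam):
    return 1.0 - np.exp(-(t / lam) ** k)

def neg_log_lik(params):
    k, lam = np.exp(params)                    # optimize in log space
    p = weibull_cdf(hi, k, lam) - weibull_cdf(lo, k, lam)
    return -np.sum(np.log(np.clip(p, 1e-300, 1.0)))   # clip guards against log(0)

res = minimize(neg_log_lik, x0=[np.log(3.0), np.log(1000.0)], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
```

The same likelihood with point failure times and log S(t) terms gives the right-censored fit, which is exactly the difference the next slide illustrates.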
50. How Is It Different?
[Figure: two Weibull probability plots side by side, 95% CI]
Left: Probability Plot for Total inches Passed
Arbitrary Censoring - ML Estimates
(Shape 4.76974, Scale 1022.65, Mean 936.407, StDev 223.914,
Median 947.015, IQR 307.573, AD* 2.852)
Right: Probability Plot for Total inches Passed, Total inches Failed, average
Complete Data - ML Estimates
Variable              Shape    Scale    AD*    F  C
Total inches Passed   4.66343  994.17   2.174  6  0
Total inches Failed   4.85171  1051.86  2.178  6  0
average               4.75849  1023.02  2.176  6  0
In this case, fairly close to
the average.
But with wider intervals it
becomes very different.
Testing to Failure
Arbitrary vs Right Censoring
52. Acceleration Factors
• Reliability engineers attempt to speed up testing by
“accelerating” the test
• Environmental:
• Increased temperature
• Increased temperature swing
• Adding environmental factors together
• Sequential testing
• Mechanical:
• Increased speed
• Increased weight
• Others?
• Multiple factors
53. Back to: Wire Scrape Test
• This test can use different
weights; what effect does that
have?
• In this case, we used 4 weights:
0.85 kg, 1 kg, 1.1 kg, 1.25 kg
Acceleration
Factors
63. Acceleration Factors
• Reliability engineers attempt to speed up testing by
“accelerating” the test
• Environmental:
• Increased temperature
• Increased temperature swing
• Adding environmental factors together
• Sequential testing
• Mechanical:
• Increased speed
• Increased weight
• Others?
• Multiple factors
64. Cable Flex
• Checking for continuity on 10-20
stations
• Failure mode – open on any
conductor
• Varying Mandrel
• Varying Weight
Multiple Acceleration Factors
65. cycles susp weight (kg) mandrel (mm) qty
942 n 0.5 4 1
735 n 0.5 4 1
1289 n 0.5 4 1
1487 n 0.5 4 1
1222 n 0.5 4 1
2750 n 0.25 4 1
3074 n 0.25 4 1
2547 n 0.25 4 1
2992 n 0.25 4 1
4338 n 0.25 4 1
1954 n 0.25 4 1
1845 n 0.4 4 1
867 n 0.4 4 1
1004 n 0.4 4 1
1182 n 0.4 4 1
3451 n 0.4 4 1
2715 n 0.4 4 1
1071 n 0.5 4 1
831 n 0.5 4 1
1114 n 0.5 4 1
1159 n 0.5 4 1
934 n 0.5 4 1
1130 n 0.5 4 1
1150 n 0.5 4 1
1115 n 0.5 4 1
1560 n 0.5 4 1
1465 n 0.5 4 1
939 n 0.5 4 1
1343 n 0.5 4 1
1331 n 0.5 4 1
1329 n 0.5 4 1
2264 n 0.5 4 1
17701 n 1 25.4 1
12741 n 1 25.4 1
12805 n 1 25.4 1
14467 n 1 25.4 1
17301 n 1 25.4 1
18815 n 1 25.4 1
21800 n 0.5 25.4 1
19710 n 0.5 25.4 1
29224 n 0.5 25.4 1
35348 n 0.5 25.4 1
24332 n 0.5 25.4 1
24865 n 0.5 25.4 1
33948 n 0.5 25.4 1
33944 n 0.25 25.4 1
29157 n 0.25 25.4 1
32391 n 0.25 25.4 1
34150 n 0.25 25.4 1
31960 n 0.25 25.4 1
29278 n 0.25 25.4 1
70000 s 0.25 25.4 1
Cable Flex
Multiple Acceleration Factors
69. Acceleration factor in a traditional sense
Prediction = exp [7.66490 - 1.65024*(weight) + 0.138599*(Mandrel-mm) + (1.0/2.89069)*(-0.3665)]
Weight (kg)  Mandrel (mm)  Prediction (cycles)  AF per 1/2 kg  AF per 5 mm mandrel increase
0.5          5.0           1649.0
0.5          10.0          3304.0
0.5          15.0          6620.0
1.0          5.0           722.6
1.0          10.0          1447.8
1.0          15.0          2900.7
Acceleration factor per 1/2 kg: 1621.8 / 710.6 = 2.28
(point estimates; confidence intervals removed for clarity)
Cable Flex
Multiple Acceleration Factors
70. Acceleration factor in a traditional sense
Prediction = exp [7.66490 - 1.65024*(weight) + 0.138599*(Mandrel-mm) + (1.0/2.89069)*(-0.3665)]
Weight (kg)  Mandrel (mm)  Prediction (cycles)  AF per 1/2 kg  AF per 5 mm mandrel increase
0.5          5.0           1649.0               2.28
0.5          10.0          3304.0               2.28
0.5          15.0          6620.0               2.28
1.0          5.0           722.6                …
1.0          10.0          1447.8               …
1.0          15.0          2900.7               …
Acceleration factor per 5 mm mandrel increase: 3304.0 / 1649.0 = 2.00
(point estimates; confidence intervals removed for clarity)
Cable Flex
Multiple Acceleration Factors
71. Acceleration factor in a traditional sense
Prediction = exp [7.66490 - 1.65024*(weight) + 0.138599*(Mandrel-mm) + (1.0/2.89069)*(-0.3665)]
(point estimates; confidence intervals removed for clarity)
Weight (kg)  Mandrel (mm)  Prediction (cycles)  AF per 1/2 kg  AF per 5 mm mandrel increase
0.5          5.0           1649.0               2.28           2.00
0.5          10.0          3304.0               2.28           2.00
0.5          15.0          6620.0               2.28           …
1.0          5.0           722.6                …              2.00
1.0          10.0          1447.8               …              2.00
1.0          15.0          2900.7               …              …
Cable Flex
Multiple Acceleration Factors
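Because the fitted model is log-linear, each acceleration factor depends only on the corresponding coefficient: adding 0.5 kg changes life by exp(1.65024·0.5) ≈ 2.28, and adding 5 mm of mandrel by exp(0.138599·5) ≈ 2.00, regardless of the other factor's level. A sketch of the slide's equation (the final term appears to be (1/shape)·ln(ln 2), which places the prediction at the median):

```python
# Sketch: median-life prediction and acceleration factors from the fitted
# log-linear model on the slides. The constant last term appears to be
# (1/shape)*ln(ln 2), i.e. the 50th percentile of the Weibull error term.
import math

def predict_cycles(weight_kg, mandrel_mm):
    return math.exp(7.66490 - 1.65024 * weight_kg
                    + 0.138599 * mandrel_mm
                    + (1.0 / 2.89069) * (-0.3665))

af_per_half_kg = math.exp(1.65024 * 0.5)     # ~2.28: life ratio for a 0.5 kg change
af_per_5mm = math.exp(0.138599 * 5.0)        # ~2.00: life ratio for a 5 mm change
```

Evaluating `predict_cycles(0.5, 5.0)` and `predict_cycles(1.0, 5.0)` reproduces the 1649.0 and 722.6 table entries to within rounding of the coefficients.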
72. Summary
• Distribution selection can be assisted with
statistics, but knowledge about your data is key
• Use right censoring when an exact failure count is
available; use arbitrary censoring for periodic testing
• Accelerated testing can provide faster answers, and
testing at different levels helps you understand the
acceleration relationship
73. Author
Rob Schubert
Corporate Quality/Reliability Engineer
Shure Inc – 9 years
Schubert_Rob@shure.com
Certified Reliability Engineer (ASQ)
Master’s in Acoustical Engineering, Penn State
Thesis: Use of Multiple-Input Single-Output Methods to Increase
Repeatability of Measurements of Road Noise in Automobiles
Recent presentations:
2 Parameter vs. 3 Parameter Weibull with a Cable Flex Test – ARS 2015
Previous work experience: Ford (13 years) - Quality/Reliability Engineer, 6
Sigma Black belt, Noise & Vibration Engineer
74. Thank you for your attention.
Do you have any questions?