This document discusses the differences between individual reliability (IndRel) and population reliability (PopRel) for aging systems. IndRel provides the reliability of a single system at a given age, while PopRel provides the probability that a randomly selected system from a population will work at a given time, taking into account the age distribution of systems in the population. The document outlines methods to estimate both IndRel and PopRel, including using Weibull and probit models on failure data. Examples are provided to demonstrate estimating IndRel and PopRel for projects using different statistical models and failure data.
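The IndRel/PopRel distinction can be made concrete with a small sketch. Assuming a Weibull life model (the shape `beta`, characteristic life `eta`, and fleet ages below are illustrative, not taken from the document), IndRel is the survival function evaluated at one system's age, while PopRel averages that survival function over the population's age distribution:

```python
import math

def ind_rel(age, beta, eta):
    """Individual reliability: probability a single system survives to `age`
    under a Weibull life model with shape `beta` and characteristic life `eta`."""
    return math.exp(-((age / eta) ** beta))

def pop_rel(ages, beta, eta):
    """Population reliability: probability a randomly selected system works,
    averaging individual reliability over the fleet's age distribution."""
    return sum(ind_rel(a, beta, eta) for a in ages) / len(ages)

# Hypothetical fleet: ages in years, wear-out behavior (beta > 1).
fleet_ages = [1, 2, 5, 8, 12]
beta, eta = 2.0, 10.0

print(ind_rel(5, beta, eta))           # reliability of one 5-year-old system
print(pop_rel(fleet_ages, beta, eta))  # chance a randomly chosen fleet member works
```

Note that PopRel is always pulled toward the reliability of the oldest systems in the fleet, which is why the two metrics diverge as a population ages.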
Research is defined as a systematic investigation designed to develop or contribute to generalizable knowledge. It involves carefully defining problems, formulating hypotheses, collecting and organizing data, making deductions, reaching conclusions, and testing conclusions. The main objectives of research are to gain familiarity with phenomena, accurately portray characteristics, determine frequencies of occurrences, and test hypotheses of causal relationships between variables. In conclusion, research is a systematic and logical process that follows specified steps in a specified sequence according to a set of rules.
Scholarly Communications at a National Research Lab: Approaches to Research a... (Dee Magnoni)
This presentation takes a 360-degree look at open scholarly communication at Los Alamos National Laboratory (LANL), identifying key stakeholders and their roles in the success of Lab research. Approaches and activity to date are discussed, including cross-Lab collaboration, policy creation and implementation, service and tool development, and work on international research challenges such as link rot.
Toxicology is in the midst of transformation, moving towards a data rich quantitative future. As we move towards a Biological Big Data reality, investigators and risk assessors need computational tools that can help them make decisions. In this slide deck I lay out some of my vision for using ontologies to drive predictive toxicology using machine intelligence.
How to Mitigate Fatigue in Heavy Industry - Fatigue Science Webinar | May 2019 (Larissa Cox)
In this webinar, Rob Higdon, VP of Product at Fatigue Science, will go through how to manage fatigue risks in heavy industry through validated science and new technology.
This document describes a study that uses machine learning algorithms to analyze flood data and predict flood impacts. The study collected flood data from various states in India, containing information on start/end dates, duration, causes, affected districts/states, and casualties, including human injuries and deaths as well as animal fatalities. Machine learning models such as decision trees, random forests, SVMs, and neural networks were trained on the data, and their performance was evaluated with metrics such as accuracy, precision, recall, and F1-score. The results showed that some states experienced higher numbers of human and animal casualties from floods than others. Graphs and charts were used to analyze relationships between variables in the data and to compare flood impacts, such as casualties, across states.
Extensions built on the PLSim tool can evaluate IoT systems and how control logic impacts energy usage. PLSim is a Python tool that models plug load energy usage through customizable usage schedules and a device library. It allows testing scenarios to determine a device's energy consumption range and evaluate how usage patterns affect it. The tool simplifies estimating energy use through inputting usage data and outputs total energy used and power consumption over time for analyzed configurations.
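The schedule-driven estimate described above can be sketched in a few lines. This is an illustration of the idea only, not PLSim's actual API; the device power draws and usage schedule are hypothetical:

```python
# Minimal sketch (not PLSim's actual API): estimate plug-load energy from a
# device's power draw in each state and an hourly usage schedule.
ACTIVE_W, IDLE_W = 45.0, 3.0                # hypothetical device power draws, watts
schedule = ["active"] * 8 + ["idle"] * 16   # 24 one-hour slots in a day

# Each one-hour slot contributes its state's power draw as watt-hours.
energy_wh = sum(ACTIVE_W if s == "active" else IDLE_W for s in schedule)
print(energy_wh / 1000.0)  # daily energy in kWh
```

Varying the schedule (more active hours, different idle draw) is exactly the kind of usage-pattern sweep the tool is described as supporting.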
120 years ago the emergent field of experimental psychology became embroiled in debates as to whether plateaus in performance are real (or not) and if so whether they were due to periods in which league-stepping methods (originally defined as a hierarchy of habits that enabled experts to step leagues while novices were "bustling over furlongs or inches") were being acquired (or not). 20 years ago both the human-computer interaction and cognitive science communities were seized with concerns over performance plateaus (i.e., extended periods of stable suboptimal performance) from experts. I briefly review this history with the aim of drawing distinctions between performance asymptotes and performance plateaus, and argue that remediating one is the domain of design while remediating the other is the domain of training.
The Evolution of Disaster Early Warning Systems in the TRIDEC Project (Peter Löwe)
The document summarizes the TRIDEC project which aims to develop new approaches and technologies for intelligent information management in collaborative, complex decision processes for natural crisis management. It describes the evolution of tsunami early warning systems from lightweight to heavyweight demonstrators. The key components of the TRIDEC architecture include an interoperable communication infrastructure, a robust service infrastructure, and a knowledge-based service framework to support tasks like sensor data integration, information dissemination, and collaborative decision making. The project aims to develop standards-based software architectures to connect local tsunami early warning systems into a system of systems.
Ecosystem Science Requirements for UAS Remote Sensing (bensparrowau)
This document discusses opportunities for using unmanned aerial systems (UAS) remote sensing in ecology. It notes that ecologists are interested in distribution, abundance, interactions, composition, function and structure of organisms. UAS provide opportunities to meet ecologists' needs through rapid revisit times to capture events as they happen, more frequent surveillance monitoring, information on patterns, microtopography and vegetation structure at different scales. UAS data can also be upscaled and provide future-proof information as techniques advance. There are currently problems but also a strong need, and addressing needs creates opportunities to make a positive impact.
This document provides an overview of machine learning in Splunk. It begins with an introduction to machine learning concepts like supervised and unsupervised learning. It then discusses how machine learning is integrated into Splunk's platform through features like the Machine Learning Toolkit, pre-built machine learning algorithms, and ML commands in the SPL search language. The document concludes with guidance on how to successfully build machine learning applications with Splunk, including collecting data, exploring and modeling data, and operationalizing models.
Using AI Planning to Automate the Performance Analysis of Simulators (Roland Ewald)
Analyzing simulation algorithm performance is cumbersome: execute some runs, observe a performance metric, and analyze the results. Often, the results motivate follow-up experiments, which in turn may lead to additional experiments, and so on. This time-consuming and error-prone process can be automated with planning approaches from artificial intelligence, making simulator performance analysis more convenient and rigorous. This paper introduces ALeSiA, a prototypical system for automatic simulator performance analysis. It is independent of any specific simulation system and realizes a hypothesis-driven approach to evaluate performance.
Panel talk given at the SC'12 Birds of a Feather session entitled, "Cool Supercomputing: Achieving Energy Efficiency at the Extreme Scales". Salt Lake City, Utah, 14 November 2012.
This document provides an overview of the System Advisor Model (SAM) developed by the National Renewable Energy Laboratory. SAM is a free software tool that models the performance and financial metrics of renewable energy projects. It includes detailed performance models for photovoltaics, solar thermal, wind, and other technologies. SAM also contains financial models for residential, commercial, and utility-scale projects. Users can customize inputs, run simulations, and view results in tables and graphs to evaluate technology and financing options. SAM is widely used to assess renewable energy projects and policy.
This document summarizes a presentation on genomics and big data in precision medicine. It discusses how next generation sequencing is generating massive amounts of multi-omics data from the genome, epigenome, transcriptome, proteome and metagenome. It describes some of the algorithms and databases used to analyze this big genomic and biological data, including de Bruijn graph algorithms and databases like NCBI, OMIM, and PANTHER. It also discusses some of the challenges in analyzing such large and complex biological data using computational methods.
Automated Software Engineering, Fall 2015, NCSU (CS, NcState)
This document discusses automated software engineering and the use of models in software engineering. It notes that models are now central tools in scientific research across many fields for simulating complex systems. Examples of areas where models are used include physics, biology, emergency response systems, border security, and analyzing stock market crashes. The document then discusses how optimization techniques like genetic algorithms and tabu search are used in automated software engineering to help find near-optimal or good-enough solutions for challenging problems due to computational complexity. Examples where optimization has been applied in software engineering include requirements engineering, program repair, and software product lines.
This document discusses classifying tornadoes based on their likelihood of causing death or injury. It begins by describing the dataset and features, which include characteristics of past tornadoes. Feature selection is performed to identify subsets without multicollinearity. Several machine learning models are then used for classification, including logistic regression, discriminant analysis, KNN, random forests, support vector machines, and neural networks. The models are evaluated based on metrics like ROC curves and AUC to determine which best predicts whether a tornado will cause harm. Limitations around class imbalance are also discussed.
The Rise in Multiple Births in the U.S.: An Analysis of a Hundred-Million Bir... (Revolution Analytics)
Presentation by Sue Ranney of Revolution Analytics at JSM 2012, San Diego CA, Aug 1 2012.
The Centers for Disease Control and Prevention recently issued a report, widely cited in the popular press, on the increased incidence of multiple births in the United States over the last 30 years. Twin birth rates were extracted from annual birth data by a variety of mother's characteristics in order to examine this trend. Our research extends this analysis by applying multivariate analysis to individual-level data obtained from public-use data sets on all births in the United States from 1985 to 2009. We combine the data into a single, multi-year data file (an .xdf file easily accessed by R) containing over 100 million birth records. To analyze the relationship between parental characteristics and multiple-birth pregnancies, we first change the unit of observation from the baby to the pregnancy in order to remove replicated observations of parents of multiples. Then, estimating a logistic regression on all of the remaining observations, we show that the trends in increased multiple births are more strongly associated with the age of the father than the age of the mother, and that, controlling for ages, the relative incidence of multiple births for black mothers has been declining.
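The change of observation unit described above (baby to pregnancy) can be sketched as follows; the fields and values are hypothetical, not the real natality-file layout:

```python
from collections import Counter

# Minimal sketch (hypothetical fields): collapse baby-level records to one
# row per pregnancy so twins are not double-counted, then label each
# pregnancy as single (0) or multiple (1).
births = [  # (pregnancy_id, mother_age, father_age)
    (1, 28, 30), (2, 34, 36), (2, 34, 36), (3, 22, 25), (4, 40, 44), (4, 40, 44),
]
babies_per_pregnancy = Counter(pid for pid, _, _ in births)
pregnancies = {pid: (m, f) for pid, m, f in births}  # one row per pregnancy
labels = {pid: int(babies_per_pregnancy[pid] > 1) for pid in pregnancies}
print(labels)  # {1: 0, 2: 1, 3: 0, 4: 1} -> outcome variable for a logistic regression
```

The `labels` dict is the binary outcome the abstract's logistic regression would be fit against, with the deduplicated parental characteristics as predictors.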
2013.11.14 Big Data Workshop, Adam Ralph - 1st set of slides (NUI Galway)
Adam Ralph from the Irish Centre for High-End Computing presented this Introduction to Basic R during the Big Data Workshop hosted by the Social Sciences Computing Hub at the Whitaker Institute on 14 November 2013.
Detecting Fatigue Driving Through PERCLOS: A Review (CSCJournals)
In this paper, we present a literature survey of drowsy-driving detection using the PERCLOS metric, which measures the percentage of eye closure. Under this metric, an eye is considered closed when it is at least 80% closed; when this condition persists across multiple frames of a video camera feed, the driver is determined to be in an unsafe fatigue state. In our research, we found that the PERCLOS metric had a correlation coefficient of 0.79 to 0.87, which exceeds the 0.7 value needed to be considered a strong correlation. A value above 0.7 indicates a more linear relationship, which means that the metric is dependable [1].
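As a rough sketch of how the PERCLOS metric described above might be computed from per-frame eye-closure estimates (the frame values and the alarm threshold below are hypothetical, chosen only for illustration):

```python
def perclos(closure_fractions, closed_threshold=0.8):
    """PERCLOS over a window of frames: the fraction of frames in which the
    eye is at least `closed_threshold` (here 80%) closed."""
    closed = sum(1 for c in closure_fractions if c >= closed_threshold)
    return closed / len(closure_fractions)

# Hypothetical per-frame eye-closure fractions from a camera feed.
frames = [0.1, 0.2, 0.9, 0.85, 0.3, 0.95, 0.1, 0.2, 0.88, 0.1]
score = perclos(frames)
print(score)  # 0.4
if score > 0.15:  # alarm level is an assumption; tune per deployment
    print("driver may be fatigued")
```

In practice the window would slide over a live video feed, with the alarm level calibrated against ground-truth drowsiness data.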
Modelling Food Systems as Neural Networks (IFPRI Africa)
This document discusses modeling food systems as neural networks. It begins by providing context around global food security goals. The authors then define food systems and discuss challenges in modeling them due to their complex, nonlinear nature. They propose using artificial neural networks, which can model these complex systems. Examples of neural networks being applied to agriculture are provided. The authors then describe their model using US county-level food trade and security data. Their neural network achieved better accuracy than other models. Interpretation of the results found some variables had significant impacts on food insecurity. The authors conclude neural networks show promise but need improved interpretation and additional data to better inform policy.
IRJET - Elderly Care-Taking and Fall Detection System (IRJET Journal)
This document summarizes an elderly care and fall detection system presented in the International Research Journal of Engineering and Technology. The system uses wearable accelerometer sensors and a Raspberry Pi to detect falls in elderly individuals. It also includes a medication reminder system. The system was trained using an artificial neural network algorithm on fall data collected from accelerometers. It achieved 98% accuracy in detecting four types of falls: front, back, left, and right. The system aims to promptly detect falls in elderly to reduce injuries and notify caregivers in emergency situations. It seeks to improve elderly independent living by monitoring medication intake and detecting falls.
IRJET - A Survey on Vision-based Fall Detection Techniques (IRJET Journal)
This document reviews different vision-based fall detection systems that have been developed using computer vision and image processing techniques. It discusses how vision-based systems work by capturing images or videos using cameras and then analyzing the footage using algorithms to classify events as falls or non-falls. The document also examines some of the challenges of vision-based approaches, such as effects of lighting and background objects, and how newer techniques like convolutional neural networks have helped improve accuracy of fall detection.
IRJET - Prediction of Autistic Spectrum Disorder based on Behavioural Fea... (IRJET Journal)
This document summarizes a research paper that aims to predict autism spectrum disorder (ASD) based on behavioral features using machine learning. The researchers collected ASD screening data from different age groups to develop and evaluate neural network models for predicting ASD. They achieved up to 90% accuracy in predicting ASD. The researchers concluded that machine learning is a promising approach for ASD prediction but noted limitations like lack of large datasets. They plan to improve the models by collecting more data from various sources.
A Comparison of Fitness Scaling Methods in Evolutionary Algorithms (Tracy Hill)
This document studies the performance of two selection mechanisms - stochastic universal sampling and proportional selection - in genetic algorithms. It discusses experimental results comparing the two methods when optimizing highly multimodal and unimodal test functions. The results indicate that stochastic universal sampling produces individuals of better quality compared to proportional selection alone. Stochastic universal sampling achieved the best average error rates, coming closer to the known optimal values for the test functions.
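For readers unfamiliar with the first of the two mechanisms, here is a minimal sketch of stochastic universal sampling (fitness values are illustrative): n evenly spaced pointers sweep the cumulative-fitness wheel in a single pass, which keeps each individual's selection count close to its expected value, unlike n independent roulette-wheel spins:

```python
import random

def sus(fitnesses, n):
    """Stochastic universal sampling: select n individuals with probability
    proportional to fitness, using n evenly spaced pointers over the
    cumulative fitness wheel."""
    total = sum(fitnesses)
    step = total / n
    start = random.uniform(0, step)          # one random offset for all pointers
    pointers = [start + i * step for i in range(n)]
    picks, cumulative, i = [], 0.0, 0
    for p in pointers:                       # pointers are sorted, so one pass suffices
        while cumulative + fitnesses[i] < p:
            cumulative += fitnesses[i]
            i += 1
        picks.append(i)
    return picks

random.seed(0)
print(sus([1.0, 2.0, 3.0, 4.0], 4))  # indices of the selected parents
```

Because the pointers are evenly spaced, an individual with expected count e is selected either floor(e) or ceil(e) times, which is the low-variance property the comparison in the paper turns on.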
This document discusses duty cycle concepts in reliability engineering. It begins with definitions of time-based and stress-condition-based duty cycles. Time-based duty cycle is the proportion of time a system is active, while stress-condition-based duty cycle considers the level of stress applied. The document then discusses how duty cycle manifests differently across various industries and how it is used to calculate reliability, with duty cycle affecting mission time, failure mechanisms, and characteristic life. Examples are provided for hard disk drives to illustrate the effects of duty cycle on acceleration factors and mean time to failure.
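The time-based duty cycle idea above can be illustrated with a toy calculation (assuming a simple model in which failures accrue only during active time; the numbers are hypothetical, not from the document):

```python
# Illustrative sketch: if failures accrue only while the system is active,
# a time-based duty cycle `d` stretches calendar MTTF by 1/d relative to
# the always-on MTTF.
mttf_active_hours = 100_000          # hypothetical MTTF in powered-on hours
for d in (1.0, 0.5, 0.2):            # duty cycle: fraction of calendar time active
    calendar_mttf = mttf_active_hours / d
    print(f"duty cycle {d:.0%}: calendar MTTF {calendar_mttf:,.0f} h")
```

Real cases are messier, as the document notes: some failure mechanisms (e.g. corrosion) run on calendar time, and stress-condition-based duty cycles change the characteristic life itself rather than just rescaling mission time.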
The document discusses potential issues with using MTBF/MTTF as the primary reliability metric for the defense and aerospace industries. It argues that MTBF/MTTF provides an incomplete view of reliability across the entire product lifecycle and can result in overly optimistic assessments. The document proposes using an alternative metric called Bx/Lx, which specifies the life point where no more than a certain percentage (like 10%) of failures have occurred. This provides a more comprehensive view of reliability focused on early failures. Overall, the document advocates updating reliability metrics and practices to better reflect physical failure mechanisms.
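A minimal sketch of the Bx/Lx idea under an assumed Weibull model (the parameters below are hypothetical): Bx is the age t at which the cumulative failure fraction reaches x, i.e. t = eta * (-ln(1 - x))^(1/beta):

```python
import math

def bx_life(x, beta, eta):
    """Bx life under a Weibull model: the age by which a fraction `x` of the
    population is expected to have failed (e.g. x=0.10 gives B10)."""
    return eta * (-math.log(1.0 - x)) ** (1.0 / beta)

# Hypothetical wear-out population: shape 2.0, characteristic life 10 years.
print(bx_life(0.10, 2.0, 10.0))  # B10 life in years
```

Unlike MTBF/MTTF, which averages over the whole life distribution, B10 pins down when the first 10% of failures arrive, which is why the document argues it better captures early-life reliability.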
Similar to Comparing Individual Reliability to Population Reliability for Aging Systems
This document provides an overview of a talk on thermodynamic reliability given by Dr. Alec Feinberg. The talk covers using thermodynamics and non-equilibrium thermodynamics to assess damage in systems and components. It discusses how the second law of thermodynamics can be applied to describe aging damage. Examples are provided to show calculating entropy damage and aging ratios for simple resistor aging and complex systems. The talk also discusses measuring entropy damage over time and modeling degradation paths. Overall, the document introduces the concept of using thermodynamics to assess reliability and aging in engineered systems.
This document outlines key elements for establishing a sustainable root cause analysis program. It discusses the importance of having an involved sponsor, a clear resourcing plan with defined roles and responsibilities, formal triggers for when analyses should be conducted, protocols for collecting and preserving evidence, standardized reporting, and a system for tracking action items to completion. It also emphasizes tracking the financial value of the program and conducting audits to ensure the program's sustainability over the long term (minimum of 3 years). The overall message is that root cause analysis requires a formal, long-term commitment and cultural change, not just a one-time effort, to truly solve problems and prevent their recurrence.
Dynamic vs. Traditional Probabilistic Risk Assessment Methodologies - by Huai...ASQ Reliability Division
Ā
The document compares dynamic and traditional probabilistic risk assessment methodologies. Traditional methodologies like fault trees, event sequence diagrams, and FMECA require analysts to assess possible system failures. Dynamic methodologies like Monte Carlo simulation use executable models to simulate system behavior probabilistically over time and automatically generate event sequences. Dynamic methods can address limitations of traditional approaches that rely heavily on analyst judgment.
This document discusses efficient reliability demonstration tests that can reduce sample sizes and test times compared to conventional methods. It presents principles for test time reduction using degradation measurements during testing. Methods are provided for calculating optimal test plans that minimize costs while meeting reliability requirements and risk constraints. Decision rules are given for terminating tests early based on degradation measurements and risk estimates. An example application demonstrates how the approach can significantly reduce testing costs.
This document discusses using degradation data to model reliability and predict failure times. It begins by explaining how failures can be caused by degradation over time in mechanical components and integrated circuits. Examples of degradation mechanisms like creep, fatigue, and corrosion are provided. The document then discusses using non-destructive and destructive inspection of degradation parameters to build models and predict reliability. Accelerated degradation testing is also covered as a way to quickly generate degradation data under elevated stress conditions. Overall, the document provides an overview of modeling reliability using degradation data and predicting failure times based on degradation paths.
The webinar discusses innovation and the innovation process. It defines innovation as the successful conversion of new concepts and knowledge into new products and processes that deliver new customer value. The innovation process involves 4 steps: 1) finding opportunities, 2) connecting to conceptual solutions, 3) making solutions user-friendly, and 4) getting to market. Different personality types play different roles in innovation, including creators, connectors, developers, and doers. Reliability is also an important consideration in innovation to ensure solutions work well for customers. The webinar encourages participants to get involved in their company's innovation efforts or help establish an innovation process.
Objectives
ļ To provide an introduction to the statistical analysis of
failure time data
ļ To discuss the impact of data censoring on data analysis
ļ To demonstrate software tools for reliability data analysis
Organization
ļ Reliability definition
ļ Characteristics of reliability data
ļ Statistical analysis of censored reliability data
Objectives
ļ” To understand Weibull distribution
ļ” To be able to use Weibull plot for failure time analysis and
diagnosis
ļ” To be able to use software to do data analysis
Organization
ļ” Distribution model
ļ” Parameter estimation
ļ” Regression analysis
This document summarizes an ASQ webinar on reliably solving intractable problems. It outlines 8 principles for producing breakthroughs: 1) use divergent problem solving, 2) generate paradigm shifts, 3) agree on success criteria, 4) start with a strong commitment, 5) separate creative and analytical thinking, 6) involve stakeholders, 7) use consensus decision making, and 8) anticipate issues. It then describes a 13-step conversation process to resolve obstacles following these principles in 4 phases: establishing foundations, envisioning the future, establishing solutions, and ensuring support. The document provides tips for facilitating each step of the process.
With the increase in global competition, more and more costumers consider reliability as one of their primary deciding factors, when purchasing new products. Several companies have invested in developing their own Design for Reliability (DFR) processes and roadmaps in order to be able to meet those requirements and compete in todayās market. This presentation will describe the DFR roadmap and how to effectively use it to ensure the success of the reliability program by focusing on the following DFR elements.
Improved QFN Reliability Process by John Ganjei. John will talk about the improvements in the reliability process in this webinar.
It is free to attend - see www.reliabilitycalendar.org/webinars/ to register for upcoming events.
Data Acquisition: A Key Challenge for Quality and Reliability ImprovementASQ Reliability Division
Ā
The document discusses challenges with data acquisition for quality and reliability analysis. It presents a 5-step process called DEUPM for targeted data acquisition: 1) Define the problem, 2) Evaluate existing data, 3) Understand data acquisition opportunities and limitations, 4) Plan data acquisition and analysis, 5) Monitor, clean data, analyze and validate. An example of using this process to validate the reliability of a new washing machine design within 6 months is provided to illustrate the steps. The process aims to ensure data acquisition is disciplined and sufficient to answer reliability questions.
The document discusses applying Failure Mode and Effects Criticality Analysis (FMECA) to software engineering. It describes FMECA as a structured method to anticipate failures and their causes. The document outlines how FMECA was originally used in industries like aerospace and nuclear engineering but has expanded to other domains. It then discusses applying FMECA at different levels of a software project, from requirements to architecture to design to code. The document advocates an "enlightened approach" to using FMECA across all representations and abstractions of software.
Astr2013 tutorial by mike silverman of ops a la carte 40 years of halt, wha...ASQ Reliability Division
Ā
This document summarizes a presentation titled "40 Years of HALT: What Have We Learned?" by Mike Silverman. The presentation discusses the evolution of Highly Accelerated Life Testing (HALT) over the past 40 years, including what HALT is and is not, basic HALT methodology, links between HALT and design for reliability, new advances in HALT, current adoption rates of HALT, and the future of HALT. The presentation aims to share lessons learned from thousands of engineers who have used HALT techniques over the past 40 years to improve product design and reliability.
This document summarizes a webinar on cost-optimized reliability test planning and decision-making through Bayesian methods. The webinar covered:
1. A brief review of Bayesian statistics and how it allows incorporating prior knowledge to optimize test planning.
2. Examples of how Bayesian methods can reduce required sample sizes for reliability testing compared to classical methods.
3. How Bayesian analysis allows improved comparative reliability decision-making between systems by properly accounting for relative failure rates.
The webinar provided specific examples of applying Bayesian priors and posteriors to reliability testing problems to reduce testing time and costs while maintaining or improving reliability assessment.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ā
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. š This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. š»
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. š„ļø
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. š
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether youāre at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. Weāll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Ā
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
Ā
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Ā
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
Ā
An English š¬š§ translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech šØšæ version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Ā
Are you ready to revolutionize how you handle data? Join us for a webinar where weāll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, weāll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sourcesāfrom PDF floorplans to web pagesāusing FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether itās populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
Weāll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
Ā
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
Ā
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Ā
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
Ā
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power gridās behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Ā
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
2. ASQ Reliability Division English Webinar Series

One of the monthly webinars on topics of interest to reliability engineers.

To view the recorded webinar (available to ASQ Reliability Division members only) visit asq.org/reliability.

To sign up for the free live webinars, open to anyone, visit reliabilitycalendar.org and select English Webinars to find links to register for upcoming events: http://reliabilitycalendar.org/webinars/
3. Comparing Individual Reliability to Population Reliability for Aging Systems

Dr. Christine M. Anderson-Cook, LANL (candcook@lanl.gov)
Dr. Lu Lu, University of South Florida

July 2013

https://sites.google.com/site/poprellu/home

Operated by Los Alamos National Security, LLC for the U.S. Department of Energy's NNSA
UNCLASSIFIED
4. | Los Alamos National Laboratory |

Outline
1. Individual and Population Reliability
   a. Definition
   b. When to use which
   c. Overview of methods for population reliability
2. Age Only examples (QE paper, 2011)
   a. Weibull (observations: time to failure)
   b. Probit (obs: Pass/Fail at specific age)
3. Age + Usage example (QREI, 2011)
   a. Probit (obs: Pass/Fail at specific age & usage)
4. Conclusions
5. Focus on Reliability

- Definition of reliability: "the probability that a system will continue to perform its intended functions until a specified point in time under encountered use conditions."
- Define the boundaries of the system (peripherals, human interface).
- Often exposure to environmental conditions may impact reliability.
- Multi-use systems may have different thresholds for working (most severe, typical).
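The definition above is simply a survival function. A minimal sketch makes it concrete; the Weibull characteristic life and shape used here are illustrative values, not estimates from the talk:

```python
from math import exp

# Reliability is the survival function: R(t) = P(T > t).
# eta (characteristic life, hours) and beta (shape) are
# illustrative assumptions, not values from this presentation.
def weibull_reliability(t, eta=1000.0, beta=1.5):
    """Probability that a unit is still working at age t (hours)."""
    return exp(-(t / eta) ** beta)

r_new = weibull_reliability(0)    # a brand-new unit has reliability 1.0
r_mid = weibull_reliability(500)  # roughly 0.70 for these parameters
```

The same function underlies everything that follows: IndRel evaluates it at one system's age, and PopRel averages it over a population of ages.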
6. Individual vs Population Reliability

[Figures: System Reliability vs. Age (months); Population Reliability vs. Time into future (months).]

- Individual System Summary (IndRel): For a given system with specified age, what is its reliability?
- Population Aggregate Summary (PopRel): For a population of systems (each with possibly different ages), what is the probability that a randomly chosen system will work at the current or some future time?
7. Two Different Summaries

Relevance
- IndRel: for managing individual units, perhaps to remove them from the population if they become too unreliable, or to send them in for scheduled maintenance.
- PopRel: for managing the population and requiring a given performance level across the population at this or some future points in time.

Information needed
- Summary of results from testing various systems (both)
- An appropriate statistical model for the reliability given age (both)
plus
- the age demographics of the population of interest at the current time (PopRel)
8. Calculation of PopRel

[Figures: IndRel curve (System Reliability vs. Age in months) and Age Demographics histogram; resulting Population Reliability vs. Time into future (months).]

For an individual system, we can predict its reliability now and into the future given its current age.
9. Calculation of PopRel (cont'd)

[Figures: IndRel curve and Age Demographics histogram; predicted System Reliability vs. Time into future (months) for several individual systems.]

For each system in the population, we can determine its predicted reliability based on its current age.
10. Calculation of PopRel (cont'd)

[Figures: the individual System Reliability curves vs. Time into future combine into a single Population Reliability curve.]

Now we use the estimates of all the individual predicted reliabilities to determine the overall reliability of the population.

Note: this could be calculated for any sub-population.
11. Reliability summary questions

IndRel: For a given system with a specified age, what is its reliability?
PopRel: For a population of systems (each with possibly different ages), what is the probability that a randomly chosen system will work at a given point in time? Or what fraction of the parts will work at a given point in time?

Questions: Which summary is more of interest,
1. If you own a single item? IndRel
2. If you own a collection of items used by your department? PopRel
3. If you work on maintaining the systems? IndRel or PopRel
4. If you are considering purchasing new systems to supplement what is currently available? PopRel
12. | Los Alamos National Laboratory |
Example 1: LCD projector lamps using the Weibull distribution

Observed failure times of 31 lamps (3 different models), in hours:

Model  Hours   Model  Hours   Model  Hours
1      182     1      974     2      380
1      230     1      1755    2      418
1      244     2      50      2      584
1      387     2      81      2      1205
1      464     2      131     2      1407
1      473     2      158     2      1752
1      600     2      174     3      34
1      627     2      300     3      39
1      660     2      332     3      274
1      798     2      345     3      1895
1      954
13. | Los Alamos National Laboratory |
Step 1: Estimate Individual Reliability

- Weibull model: f(t | Ī», β) = Ī» β t^(β-1) exp(-Ī» t^β)
- Bayesian analysis: specify priors
      Ī» ~ Gamma(2.5, 2350)
      β ~ Gamma(1, 1)
- Estimate the posterior (WinBUGS): f(Ī», β | y) ∝ f(y | Ī», β) f(Ī», β)

N_MCMC estimates of (Ī», β) are generated by WinBUGS to approximate the posterior distribution.
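The fitting step above can be sketched without WinBUGS. The block below is a minimal illustration only: it substitutes a hand-rolled random-walk Metropolis sampler for WinBUGS, assumes the Gamma(2.5, 2350) prior on Ī» uses the rate parameterization, and uses the Model 1 failure times from Example 1.

```python
import math
import random

# Model 1 lamp failure times (hours), taken from the Example 1 data table.
DATA = [182, 230, 244, 387, 464, 473, 600, 627, 660, 798, 954, 974, 1755]

def log_post(lam, beta, times):
    """Log-posterior up to a constant: Weibull likelihood
    f(t|lam,beta) = lam*beta*t^(beta-1)*exp(-lam*t^beta), with a
    Gamma(2.5, 2350) prior on lam (rate parameterization assumed)
    and a Gamma(1, 1) prior on beta."""
    if lam <= 0 or beta <= 0:
        return -math.inf
    ll = sum(math.log(lam) + math.log(beta) + (beta - 1) * math.log(t)
             - lam * t ** beta for t in times)
    lp = 1.5 * math.log(lam) - 2350 * lam - beta  # Gamma log-densities, up to constants
    return ll + lp

def metropolis(times, n_iter=4000, seed=7):
    """Minimal random-walk Metropolis sampler standing in for WinBUGS."""
    rng = random.Random(seed)
    lam, beta = 1e-3, 1.0
    lp = log_post(lam, beta, times)
    draws = []
    for _ in range(n_iter):
        cl = lam + rng.gauss(0, 3e-4)        # propose lambda
        cb = beta + rng.gauss(0, 0.15)       # propose beta
        clp = log_post(cl, cb, times)
        if clp - lp > math.log(rng.random()):
            lam, beta, lp = cl, cb, clp
        draws.append((lam, beta))
    return draws[n_iter // 2:]               # discard burn-in

def ind_rel(t, draws):
    """Posterior-mean IndRel at age t: average exp(-lam * t^beta) over draws."""
    return sum(math.exp(-l * t ** b) for l, b in draws) / len(draws)
```

The retained draws play the role of the N_MCMC (Ī», β) estimates used on the following slides.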
14. | Los Alamos National Laboratory |
Individual Reliability Estimates

[Figure: posterior System Reliability vs. age (hours) for Models 1, 2, and 3, computed from the N_MCMC (Ī», β) estimates evaluated at age = t.]
15. | Los Alamos National Laboratory |
Step 2: Estimate Population Reliability

- For a population of 51 Model 1 units
  [Figure: histogram of current unit ages (frequency vs. age in hours).]
- At each time t, estimate the reliability of each unit from the N_MCMC (Ī», β) estimates, then combine to get the PopRel estimate:

      p_r(t) = (1/N) Ī£_{i in U} p_i(t)
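The combination step can be sketched as follows. The Weibull parameter values below are illustrative stand-ins for the posterior estimates, and taking each unit's p_i(t) to be its conditional survival probability given its current age is a modeling assumption.

```python
import math

def ind_rel(age, lam=1.3e-3, beta=1.2):
    """Weibull survival at a given age; lam and beta are illustrative
    values, not the posterior estimates from the slides."""
    return math.exp(-lam * age ** beta)

def pop_rel(t, ages, lam=1.3e-3, beta=1.2):
    """PopRel t hours into the future: p_r(t) = (1/N) * sum_i p_i(t),
    where p_i(t) is unit i's predicted reliability given its current
    age (conditional survival probability, a modeling assumption)."""
    return sum(ind_rel(a + t, lam, beta) / ind_rel(a, lam, beta)
               for a in ages) / len(ages)
```

For a population of brand-new units (all ages 0), pop_rel reduces to the single-system ind_rel curve; an aged population starts at 1 at t = 0 but declines faster.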
16. | Los Alamos National Laboratory |
Population Reliability Results

[Figure: age histogram of the population (hours); the IndRel System Reliability curve vs. age; and the resulting PopRel Population Reliability curve vs. time into future (hours).]

Could we have predicted this poor population reliability?
17. | Los Alamos National Laboratory |
Answering Questions: IndRel or PopRel?

1. For a unit that is 100 hours old, what is the probability that it will work?
2. What is the probability that a random unit will work now?
3. What is the probability of the unit I currently have working when I turn it on?
4. My team has 5 units which we use regularly. What is the probability that a random unit from there will work?
18. | Los Alamos National Laboratory |
PopRel for a More Complex Population

[Figure: overall Population Reliability vs. time into future (hours), alongside the separate Population Reliability curves for Models 1, 2, and 3.]
19. | Los Alamos National Laboratory |
Frequentist Options for Estimation

- Estimate Ī», β (and their covariance matrix) using maximum likelihood.
- IndRel: from this, confidence intervals for reliability are possible at all ages of the system.
- PopRel:
  a. Generate M draws from the bivariate normal distribution, use these draws to obtain M estimates of (Ī», β), and use them to build an empirical C.I. for PopRel.
  b. Sample with replacement from the original data, use this to obtain M estimates of (Ī», β), ...
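Option (a) can be sketched as below. The MLE and covariance values are made-up stand-ins for actual maximum-likelihood output, and the conditional-survival form of PopRel is carried over from the earlier sketch.

```python
import math
import random

def mvn2_draws(mean, cov, m, seed=0):
    """Draw m samples from a bivariate normal using a 2x2 Cholesky factor."""
    rng = random.Random(seed)
    a = math.sqrt(cov[0][0])
    b = cov[0][1] / a
    c = math.sqrt(cov[1][1] - b * b)
    return [(mean[0] + a * z1, mean[1] + b * z1 + c * z2)
            for z1, z2 in ((rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(m))]

def pop_rel(t, ages, lam, beta):
    """Weibull PopRel t hours ahead (conditional-survival form)."""
    return sum(math.exp(-lam * ((a + t) ** beta - a ** beta))
               for a in ages) / len(ages)

def poprel_interval(t, ages, mle, cov, m=1000, seed=0):
    """Option (a): propagate (lambda, beta) sampling uncertainty through
    M bivariate-normal draws around the MLE and report an empirical
    95% interval for PopRel.  mle and cov come from an ML fit."""
    draws = [(l, b) for l, b in mvn2_draws(mle, cov, m, seed) if l > 0 and b > 0]
    vals = sorted(pop_rel(t, ages, l, b) for l, b in draws)
    return vals[int(0.025 * len(vals))], vals[int(0.975 * len(vals))]
```

Option (b) differs only in how the M parameter estimates are produced: refit the model to bootstrap resamples of the original failure data instead of drawing from the normal approximation.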
20. | Los Alamos National Laboratory |
Example 2: Missiles Using a Probit Model

227 missiles tested (destructive testing).

Model:
      Y_i ~ Bernoulli(p_i), with Y_i = 1 (pass), Y_i = 0 (fail)
      p_i = Φ((β_0 + β_1 age_i) / s)

[Figure: age histogram of the current population, with groups near age = 40 months and age = 90 months.]
21. | Los Alamos National Laboratory |
Step 1: Estimating IndRel

- Bayesian analysis to estimate the parameters (WinBUGS):

      p_i = Φ((β_0 + β_1 age_i) / s)

[Figure: N_MCMC posterior estimates of β_0, β_1, and s.]
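The probit IndRel is straightforward to evaluate, since the standard normal CDF Φ can be written in terms of the error function. The parameter values below are illustrative, not the fitted missile-data estimates.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ind_rel(age, b0=3.0, b1=-0.03, s=1.0):
    """Probit IndRel: p_i = Phi((b0 + b1 * age_i) / s).  A negative b1
    makes reliability decline with age; b0, b1, s are illustrative."""
    return phi((b0 + b1 * age) / s)
```

Evaluating ind_rel at the two age groups from the population histogram (40 and 90 months) shows the older group carrying the lower pass probability.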
22. | Los Alamos National Laboratory |
Step 2: Estimate PopRel

At each time t, estimate the reliability of each unit in the current population from the N_MCMC estimates of (β_0, β_1, s), then combine to get the PopRel estimate:

      p_r(t) = (1/N) Ī£_{i in U} p_i(t)

[Figure: Population Reliability vs. time into future (months), with the PopRel curve shown alongside the IndRel curves.]
23. | Los Alamos National Laboratory |
Modeling Reliability as a Function of Age and Other Information (Usage or Exposure)

- For Example 2, reliability was estimated as a function of the age of the system.
  - If a system is x months old today, then in a month it will be (x+1) months old.
- But what if reliability is a function of age and usage? (e.g., car reliability is typically modeled with age and mileage)
  - If a car is 24 months old and has gone 30000 miles, what will these values be in 1 month?
  - Historical usage patterns can be helpful for prediction, but will introduce some additional variability.
24. | Los Alamos National Laboratory |
Example 3: Missiles Modeled with a Probit for Age and Usage

Model:
      Y_i ~ Bernoulli(p_i), with Y_i = 1 (pass), Y_i = 0 (fail)
      p_i = Φ((β_0 + β_1 age_i + β_2 usage_i) / s)

Here, usage = # transfers.

Estimation of IndRel is unchanged:
- Could use Bayesian estimation for the model parameters
- Could use maximum likelihood to get estimates
25. | Los Alamos National Laboratory |
Estimating PopRel: Need to Predict Future Usage

- By looking at the rate at which usage increases, we can predict what future usage values are likely, assuming the same pattern of usage.
- Possible sources to describe the pattern:
  - Test data
  - Current population
  - User-specified distribution

[Figure: usage vs. age, showing the current values and the usage range predicted from historical data, with added variability.]
26. | Los Alamos National Laboratory |
Obtaining the Usage Rate Distribution

- Historical (test data, current population, or both):

      Index  Age  Usage  Usage rate
      1      a1   u1     u1/a1
      2      a2   u2     u2/a2
      ...

  Either use these rates as a population to draw from by sampling with replacement, or create a distribution which adequately represents the usage rate center and spread.

- User-specified distribution:
  - Allows the flexibility to specify a change in anticipated usage.
  - Create a distribution which adequately represents the usage rate center and spread.
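The historical option can be sketched in a few lines; the (age, usage) records below are hypothetical, not the missile data.

```python
import random

def usage_rates(history):
    """Empirical usage rates u_i / a_i from historical (age, usage) records."""
    return [u / a for a, u in history if a > 0]

def draw_rate(rates, rng=random):
    """Treat the historical rates as a population and sample from it
    with replacement."""
    return rng.choice(rates)
```

The user-specified alternative simply replaces draw_rate with draws from whatever distribution captures the anticipated usage-rate center and spread.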
27. | Los Alamos National Laboratory |
Estimating PopRel

Repeat for many times t: for each of the N_MCMC posterior draws (β_0^(j), β_1^(j), β_2^(j), s^(j)), obtain a usage rate^(j) from the characterizing distribution, form the individual system reliability estimates at each time t,

      p_i(t)^(j) = Φ((β_0^(j) + β_1^(j) age_i^(j) + β_2^(j) (usage rate^(j) * age_i^(j))) / s^(j))

and combine them into the PopRel at each time t:

      p_r(t) = (1/N) Ī£_{i in U} p_i(t)
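The loop above can be sketched as follows; the posterior draws and usage rates passed in by the caller are illustrative stand-ins for the MCMC output and the characterizing distribution.

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pop_rel_with_usage(t, ages, param_draws, rates, seed=0):
    """For each posterior draw j: sample a usage rate, predict each unit's
    future age and usage (usage = rate * age), evaluate the probit
    reliability p_i(t), and average over units; then average the PopRel
    values over the draws."""
    rng = random.Random(seed)
    poprels = []
    for b0, b1, b2, s in param_draws:
        r = rng.choice(rates)                  # usage rate for this draw
        future_ages = [a + t for a in ages]
        p = [phi((b0 + b1 * fa + b2 * r * fa) / s) for fa in future_ages]
        poprels.append(sum(p) / len(p))
    return sum(poprels) / len(poprels)
```

With negative age and usage coefficients, the resulting PopRel curve declines as t grows, as in the figure on the next slide.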
28. | Los Alamos National Laboratory |
PopRel for the Missile Population

Used historical usage information from both the tested samples and the current population.

[Figure: Population Reliability vs. time into future (months): the PopRel estimate, assuming the same pattern of usage continues for the overall population.]
29. | Los Alamos National Laboratory |
Conclusions

- Understanding which summary is appropriate to answer which question is key to good decision-making.
  - IndRel answers "For a given system with specified age (and usage), what is its reliability?"
  - PopRel answers "For a population of systems, what is the probability that a randomly chosen system will work at the current or some future time?"
- What is needed?
  - A summary of results from testing various systems [both]
  - A statistical model for the reliability given age and usage [both]
  plus
  - The age (and usage) demographics of the population at the current time [PopRel]
- Predicting the age of systems into the future is straightforward, but additional assumptions about the future usage of units in the population are needed to obtain a sensible PopRel estimate.
30. | Los Alamos National Laboratory |
References

1. Lu, L., Anderson-Cook, C.M. (2011) "Prediction of Reliability of an Arbitrary System from a Finite Population," Quality Engineering 23, 71-83.
2. Lu, L., Anderson-Cook, C.M. (2011) "Using Age and Usage for Prediction of Reliability of an Arbitrary System from a Finite Population," Quality and Reliability Engineering International 27, 179-190.