The document summarizes an ASQ Reliability Division webinar on general Bayesian methods for reliability data analysis. It provides an outline of the webinar covering traditional vs. Bayesian reliability frameworks and examples of applying Bayesian methods to the Weibull distribution, accelerated life test data, and repeated-measures degradation data. OpenBUGS code is presented for the examples; a representative sketch of such a model appears after the abstract below. The webinar aims to illustrate how Bayesian methods allow incorporating prior knowledge and provide advantages over traditional methods in certain applications.
With product reliability demonstration test planning and execution weighing heavily on cost, availability, and schedule factors, Bayesian methods offer an intelligent way of incorporating engineering knowledge based on historical information into data analysis and interpretation, resulting in overall more precise and less resource-intensive failure-rate estimation. This talk consists of three parts:
1. Introduction to Bayesian vs Frequentist statistical approaches
2. Bayesian formalism for reliability estimation
3. Product/component case studies and examples
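Since the webinar's examples are driven by OpenBUGS model files, a minimal sketch of the kind of model involved may help. The following is an illustration only, not the webinar's actual code: a Weibull lifetime model with right censoring, where the data names (n, t, t.cens) and the diffuse priors are assumptions made here for the sketch.

    model {
      for (i in 1:n) {
        # t[i] is the observed failure time, or NA for a right-censored unit;
        # t.cens[i] is 0 for failures, or the censoring time otherwise.
        t[i] ~ dweib(beta, lambda) C(t.cens[i], )
      }
      # Diffuse priors for illustration; in practice these are where
      # engineering knowledge from historical information enters.
      beta ~ dgamma(1, 1)
      lambda ~ dgamma(0.01, 0.01)
      # Characteristic life implied by the BUGS parameterization
      eta <- pow(lambda, -1 / beta)
    }
    # Example data: list(n = 5, t = c(120, 190, NA, 260, NA),
    #                    t.cens = c(0, 0, 300, 0, 300))

The C(t.cens[i], ) construct is OpenBUGS's censoring notation (WinBUGS uses I(,) for the same purpose), and replacing the diffuse gamma priors with informative ones is the natural entry point for the historical knowledge mentioned above.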
Unit Testing Case Study for COJUG - 05.11.2010 (Nick Watts)
Nick Watts' "Unit Testing Case Study: How Ohio Mutual Insurance Group Got Started" presentation. Presented to the Columbus Java Users Group on May 11th, 2010 in Dublin, Ohio.
Approximate Continuous Query Answering Over Streams and Dynamic Linked Data Sets (Soheila Dehghanzadeh)
To perform complex tasks, RDF Stream Processing Web applications evaluate continuous queries over streams and quasi-static (background) data. While the former are pushed into the application, the latter are continuously retrieved from the sources. As the background data grow in volume and become distributed over the Web, the cost of retrieving them increases and applications become unresponsive.
In this paper, we address the problem of optimizing the evaluation of these queries by leveraging local views on background data. Local views enhance performance, but require maintenance processes, because changes in the background data sources are not automatically reflected in the application.
We propose a two-step query-driven maintenance process to maintain the local view: it exploits information from the query (e.g., the sliding window definition and the current window content) to maintain the local view based on user-defined Quality of Service constraints.
Experimental evaluation shows the effectiveness of the approach.
Webinar - Maximizing Requirements Value Throughout the Product Lifecycle (Seapine Software)
The document discusses maximizing value from requirements throughout the product lifecycle. It argues that defining and delivering customer value is challenging due to disconnects between developers and users. Many projects fail or lose benefits due to problems originating in requirements practices. The document advocates treating requirements as a discipline through practices like just-in-time delivery of accurate, contextual insights. This involves skills like cultivating diverse sources and tools to identify the right information stakeholders need.
This chapter discusses basic probability concepts including defining probability as a numerical measure between 0 and 1, explaining sample spaces and events, visualizing events using contingency tables and tree diagrams, and computing joint, marginal, and conditional probabilities. It introduces key terms like probability, event, sample space, mutually exclusive and collectively exhaustive events. It also covers rules for calculating probabilities of joint, union, and conditional events.
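For example, the joint, marginal, and conditional quantities the chapter covers are tied together by the standard identities (for P(B) > 0):

    P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
    P(A \cup B) = P(A) + P(B) - P(A \cap B).

So if a contingency table gives P(A ∩ B) = 0.20 and P(B) = 0.50, the conditional probability is P(A | B) = 0.20 / 0.50 = 0.40.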
Bayesian Networks - A Brief Introduction (Adnan Masood)
- A Bayesian network is a graphical model that depicts probabilistic relationships among variables. It represents a joint probability distribution over variables in a directed acyclic graph with conditional probability tables.
- A Bayesian network consists of a directed acyclic graph whose nodes represent variables and edges represent probabilistic dependencies, along with conditional probability distributions that quantify the relationships.
- Inference using a Bayesian network allows computing probabilities like P(X|evidence) by taking into account the graph structure and probability tables, as the factorization below makes explicit.
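The joint distribution such a network represents factorizes over the graph; writing pa(X_i) for the parents of node X_i,

    P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{pa}(X_i)\bigr),

which is exactly the structure that inference procedures exploit when computing quantities like P(X | evidence).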
Products Reliability Prediction Model Based on Bayesian Approach (IJAEMSJORNAL)
Predicting the reliability of new products early in their lifetime is one of the important issues in the field of reliability. Lack of data in this period makes prediction very hard and inaccurate. This paper proposes a model for predicting a non-repairable product's reliability early after its production and introduction to the market. It is assumed that the product's time to failure has a Weibull distribution with known shape parameter, while the scale parameter is a random variable that could follow different distributions such as the gamma, inverted gamma, and truncated normal. Bayesian statistics is used to join prior information on past product failures with sparse field data on the current product's performance, overcoming the lack-of-data problem that is a major obstacle in early reliability prediction of new products. The Bayesian model provides a more accurate and logical prediction than classical methods, and indications are favorable regarding the model's practicality in industrial applications. The model is managerially useful because it gives more accurate predictions. Previous studies offer no comprehensive and precise model for reliability prediction; different from other studies, the authors present a definite form for the scale parameter under different prior distributions, using a special form of the Weibull distribution that leads to this definite form. The model provides suitable estimates under parameter uncertainty because it uses more information for prediction.
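For intuition about why gamma-type priors on the scale-related parameter are convenient in this setup: writing the Weibull density with known shape β as f(t) = λβ t^(β-1) exp(-λ t^β), the gamma distribution is the conjugate prior for λ. A generic sketch of the update (notation chosen here, not necessarily the paper's):

    \lambda \sim \mathrm{Gamma}(a, b)
    \quad\Longrightarrow\quad
    \lambda \mid \text{data} \sim \mathrm{Gamma}\Bigl(a + r,\; b + \sum_{i} t_i^{\beta}\Bigr),

where r is the number of observed failures and the sum runs over both failure and censoring times. The posterior then blends the prior (historical) information with the field data, which is the mechanism such models exploit.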
Benchmark METRICS THAT MATTER October 4 2012 (BenchmarkQA)
Betty Schaar and Jeff Roth presented this at BenchmarkQA's fall 2012 Software Quality Forum, challenging attendees to rethink the metrics they're generating. Metrics without the context of the project mean nothing.
Data Warehouse Testing—The Next Opportunity for QA Leaders (Tricentis)
This document discusses data warehouse and business intelligence (DWH/BI) testing. It notes that DWH/BI testing requires different skills than traditional software testing, including strong SQL and data profiling abilities. Manual DWH/BI testing is time-consuming and error-prone. The document advocates increasing test automation for DWH/BI projects using tools familiar from UI and API testing. It provides an overview of different types of DWH/BI testing and components to test, and proposes a maturity model for improving DWH/BI testing processes. Challenges in DWH/BI testing include lack of understanding, tools, and collaboration between teams like development and QA. The document recommends conducting an assessment and instituting a testing methodology.
Healthcare Simulation: Bed Allocation in Hospital
Simcad Pro is a sophisticated simulation engine that can be utilized to improve the overall efficiency of the Emergency Room. The use of Simcad Pro has been shown to improve Emergency Room efficiency, accuracy and effectiveness, which in turn improves patient outcomes and the quality of care that is received. In addition, staff allocation and efficiency can be better forecasted and analyzed, reducing the overall cost of care without sacrificing the quality of care.
The document compares the quality of face images from three datasets - a legacy IDOC criminal database, a newer electronic IDOC database, and the FERET standard database. It analyzes the images using 28 quality metrics related to factors like scene, photography, digital attributes, and algorithms. The results show that the legacy IDOC images scored higher on most metrics than the electronic IDOC images, but the FERET images scored highest overall. The conclusions suggest room for improvement in the operational IDOC data quality and the need for algorithm developers to adjust to real-world image variability.
CRISP-DM: a data science project methodology (Sergey Shelpuk)
This document outlines the methodology for a data science project using the Cross-Industry Standard Process for Data Mining (CRISP-DM). It describes the 6 phases of the project - business understanding, data understanding, data preparation, modeling, evaluation, and deployment. For each phase, it provides an overview of the key steps and asks questions to determine readiness to move to the next phase of the project. The overall goal is to successfully apply a standard data science methodology to gain business value from data.
AI&BigData Lab 2016. Sergey Shelpuk: Data Science Project Methodology (GeeksLab Odessa)
This document outlines the methodology for a data science project using the Cross-Industry Standard Process for Data Mining (CRISP-DM). It describes the 6 phases of the project - business understanding, data understanding, data preparation, modeling, evaluation, and deployment. For each phase, it provides examples of the types of activities and questions that should be addressed to successfully complete that phase of the project.
Visionet is a software services company established in 1995 with over 1,500 employees across multiple locations. It has a long track record of over 16 years providing software services, particularly in the retail, financial, and distribution industries. It offers capabilities across application development, maintenance and support, enterprise application integration, reporting and business intelligence, and portals and collaboration. Visionet has skills and experience in technologies from legacy systems to Java/J2EE, .NET, Oracle, and others.
Physics of Failure (also known as Reliability Physics) is a science-based approach to achieving Reliability by Design. The approach is based on research to identify and understand the processes that initiate and propagate the mechanisms that ultimately result in failure. Used in Computer Aided Engineering (CAE) durability simulations and reliability assessment, this knowledge can evaluate whether a new design, under actual operating conditions, is susceptible to root causes of failure such as fatigue, fracture, wear, and corrosion during the intended service life of the product.
The objective is to identify and eliminate potential failure mechanisms in order to prevent operational failures, using stress-strength analysis to produce a robust design and to aid in the selection of capable manufacturing practices. This is accomplished by modeling the material strength and architecture of the components and technologies a product is based upon, then evaluating their ability to endure the life-cycle usage and environmental stress conditions the product is expected to encounter over its service life in the field or during durability or reliability qualification tests.
The ability to identify and quantify the timeline of specific failure risks in a new product while it is still on the drawing board (or CAD screen) enables a product team to design reliability into a product by revising the design to eliminate or mitigate those risks. This capability yields a form of Virtual Validation and Virtual Reliability Growth during a product's design phase that can be implemented faster and at lower cost than the traditional Design-Build-Test-Fix approach to Reliability Growth during a product's development and test phase.
This webinar compares classical reliability concepts and relates them to the PoF approach as applied to Electrical/Electronic (E/E) systems and technologies. It is intended for E/E product engineers, validation/test engineers, quality, reliability, and product assurance personnel, CAE modeling analysts, R&D staff, and their supervisors.
The document provides an overview of the Foundations of Business Analysis certificate course. The course consists of 3 modules that cover the disciplines and practices of business analysis: Foundations of Business Analysis, Leadership in Business Analysis, and Tools and Techniques in Business Analysis. The introductory module outlines the course content over 12 weeks, covering topics such as business analysis competencies, techniques, requirements elicitation, and case study assignments. The document defines business analysis and compares the roles and certifications of business analysts and project managers.
The document discusses Vision.bi Quality Gates, a centralized data quality solution. It provides an overview of quality gates and how they can be used for integration and business intelligence projects. Quality gates help define tests to constantly monitor and improve data quality. The document outlines various test types like check sums, referential integrity checks, and execution flow tests. It demonstrates how quality gates provide alerts, ensure data integrity, and validate data warehouse cubes. Clients currently using Vision.bi are also listed.
Slides from a lecture-style tutorial on data quality for ML delivered at SIGKDD 2021.
The quality of training data has a huge impact on the efficiency, accuracy, and complexity of machine learning tasks. Data remains susceptible to errors or irregularities that may be introduced during the collection, aggregation, or annotation stages. This necessitates profiling and assessment of data to understand its suitability for machine learning tasks; failure to do so can result in inaccurate analytics and unreliable decisions. While researchers and practitioners have focused on improving the quality of models (such as neural architecture search and automated feature selection), there have been limited efforts toward improving data quality.
Assessing data quality across intelligently designed metrics, and developing corresponding transformation operations to address the quality gaps, reduces the effort a data scientist spends iteratively debugging the ML pipeline to improve model performance. This tutorial highlights the importance of analysing data quality in terms of its value for machine learning applications. Finding the quality issues in data helps different personas, such as data stewards, data scientists, subject matter experts, and machine learning scientists, to get relevant data insights and take remedial actions. The tutorial surveys the important data-quality approaches for structured, unstructured, and spatio-temporal domains discussed in the literature, focusing on the intuition behind them, highlighting their strengths and similarities, and illustrating their applicability to real-world problems.
The document discusses various aspects of the agile software testing process including test planning, test cases, test reporting and automation. It provides details on gathering requirements and data for test strategies, defining test cases with steps, expected results and test data. Integration and smoke testing are also covered along with build coverage, deployment planning and defect tracking matrices.
The document discusses big data and big analytics. It notes that big data refers to situations where the volume, velocity, and variety of data exceeds an organization's storage and processing capabilities. It then outlines SAS's approach to high-performance analytics, including in-memory architecture, grid computing, and in-database analytics to enable real-time insights from large and diverse datasets. Several case studies demonstrate how SAS solutions have helped customers significantly reduce analytics processing times and improve outcomes.
This document provides links to resources from JISC's programme on assessment and feedback. It includes links to projects exploring topics like adaptive testing, collaborative assessment, international developments, and question banks. A diagram maps these projects and topics to themes like evaluating CAA systems, assessing skills and enhancing learning, and strategic developments. The document also discusses emerging areas like learning analytics and economic impact measures.
This document provides an overview of quality tools and topics. It begins by explaining the importance of quality and describing hidden costs of non-conformance. It then discusses problems with inspection-based quality control and demonstrates an inspection exercise. The document proceeds to introduce total quality management and the seven basic quality tools - flow charts, cause and effect diagrams, check sheets, histograms, Pareto charts, scatter diagrams, and control charts. Examples and exercises are provided for several of these tools. The document concludes by summarizing key points and providing a reading list.
This presentation on batch process analytics was given at Emerson Exchange, 2010. An overview of batch data analytics is presented, along with information on a field trial of on-line batch data analytics at the Lubrizol plant in Rouen, France.
The presentation of a paper entitled "Unsupervised ensemble of experts (EoE) framework for automatic binarization of document images", presented at ICDAR 2013, Washington, DC, USA (August 25th-28th, 2013), on August 27th, 2013.
#Interactive Session by Vivek Patle and Jahnavi Umarji, "Empowering Functional Testing with Support Vector Machines: An Experimental Journey" (Agile Testing Alliance)
#Interactive Session by Vivek Patle and Jahnavi Umarji, "Empowering Functional Testing with Support Vector Machines: An Experimental Journey" at #ATAGTR2023.
#ATAGTR2023 was the 8th Edition of Global Testing Retreat.
To know more about #ATAGTR2023, please visit: https://gtr.agiletestingalliance.org/
The document describes a Standard Test Data Yield Analysis tool that enables test engineers to perform fast yield analysis and process characterization using standard semiconductor test data files (STDF) without requiring a database. The tool's main features include distribution plots, box plots, parametric wafer maps, Pareto charts, raw data views, software/hardware bin fails, test synopses, and yield information. A benchmark compares the tool to accessing data via a database, showing the tool provides immediate access and analysis capabilities without additional steps or waiting.
This document discusses duty cycle concepts in reliability engineering. It begins with definitions of time-based and stress-condition-based duty cycles. Time-based duty cycle is the proportion of time a system is active, while stress-condition-based duty cycle considers the level of stress applied. The document then discusses how duty cycle manifests differently across various industries and how it is used to calculate reliability, with duty cycle affecting mission time, failure mechanisms, and characteristic life. Examples are provided for hard disk drives to illustrate the effects of duty cycle on acceleration factors and mean time to failure.
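As a generic numeric illustration of the time-based definition (numbers chosen here, not taken from the presentation): a drive powered year-round but actively loaded 20% of the time accumulates

    t_\text{active} = d \cdot t_\text{calendar} = 0.20 \times 8760\ \text{h} = 1752\ \text{h}

of active stress time per year, so a life model driven by active hours sees roughly one fifth of the calendar exposure.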
The document discusses potential issues with using MTBF/MTTF as the primary reliability metric for the defense and aerospace industries. It argues that MTBF/MTTF provides an incomplete view of reliability across the entire product lifecycle and can result in overly optimistic assessments. The document proposes using an alternative metric called Bx/Lx, which specifies the life point where no more than a certain percentage (like 10%) of failures have occurred. This provides a more comprehensive view of reliability focused on early failures. Overall, the document advocates updating reliability metrics and practices to better reflect physical failure mechanisms.
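Concretely, Bx (or Lx) is the time at which the cumulative fraction failed reaches x%. For a Weibull life distribution with scale η and shape β, the standard closed form for B10 is

    F(B_{10}) = 0.10 \quad\Longrightarrow\quad B_{10} = \eta \bigl(-\ln 0.90\bigr)^{1/\beta},

which makes explicit how early-failure behavior (through β) drives the metric, in contrast to a single mean such as MTTF.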
This document provides an overview of a talk on thermodynamic reliability given by Dr. Alec Feinberg. The talk covers using thermodynamics and non-equilibrium thermodynamics to assess damage in systems and components. It discusses how the second law of thermodynamics can be applied to describe aging damage. Examples are provided showing how to calculate entropy damage and aging ratios for simple resistor aging and for complex systems. The talk also discusses measuring entropy damage over time and modeling degradation paths. Overall, the document introduces the concept of using thermodynamics to assess reliability and aging in engineered systems.
This document outlines key elements for establishing a sustainable root cause analysis program. It discusses the importance of having an involved sponsor, a clear resourcing plan with defined roles and responsibilities, formal triggers for when analyses should be conducted, protocols for collecting and preserving evidence, standardized reporting, and a system for tracking action items to completion. It also emphasizes tracking the financial value of the program and conducting audits to ensure the program's sustainability over the long term (minimum of 3 years). The overall message is that root cause analysis requires a formal, long-term commitment and cultural change, not just a one-time effort, to truly solve problems and prevent their recurrence.
Dynamic vs. Traditional Probabilistic Risk Assessment Methodologies - by Huai... (ASQ Reliability Division)
The document compares dynamic and traditional probabilistic risk assessment methodologies. Traditional methodologies like fault trees, event sequence diagrams, and FMECA require analysts to assess possible system failures. Dynamic methodologies like Monte Carlo simulation use executable models to simulate system behavior probabilistically over time and automatically generate event sequences. Dynamic methods can address limitations of traditional approaches that rely heavily on analyst judgment.
This document discusses efficient reliability demonstration tests that can reduce sample sizes and test times compared to conventional methods. It presents principles for test time reduction using degradation measurements during testing. Methods are provided for calculating optimal test plans that minimize costs while meeting reliability requirements and risk constraints. Decision rules are given for terminating tests early based on degradation measurements and risk estimates. An example application demonstrates how the approach can significantly reduce testing costs.
This document discusses using degradation data to model reliability and predict failure times. It begins by explaining how failures can be caused by degradation over time in mechanical components and integrated circuits. Examples of degradation mechanisms like creep, fatigue, and corrosion are provided. The document then discusses using non-destructive and destructive inspection of degradation parameters to build models and predict reliability. Accelerated degradation testing is also covered as a way to quickly generate degradation data under elevated stress conditions. Overall, the document provides an overview of modeling reliability using degradation data and predicting failure times based on degradation paths.
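One simple degradation-path form that makes the link from degradation to failure time explicit (a generic linear-path sketch, not necessarily the model used in the talk): unit i degrades as

    D_i(t) = \beta_{0i} + \beta_{1i}\, t, \qquad \beta_{1i} > 0,

and fails when D_i(t) first crosses a threshold D_f, so its failure time is

    T_i = \frac{D_f - \beta_{0i}}{\beta_{1i}},

with unit-to-unit variation in (β_{0i}, β_{1i}) inducing the failure-time distribution.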
The webinar discusses innovation and the innovation process. It defines innovation as the successful conversion of new concepts and knowledge into new products and processes that deliver new customer value. The innovation process involves 4 steps: 1) finding opportunities, 2) connecting to conceptual solutions, 3) making solutions user-friendly, and 4) getting to market. Different personality types play different roles in innovation, including creators, connectors, developers, and doers. Reliability is also an important consideration in innovation to ensure solutions work well for customers. The webinar encourages participants to get involved in their company's innovation efforts or help establish an innovation process.
Objectives:
To provide an introduction to the statistical analysis of failure time data
To discuss the impact of data censoring on data analysis
To demonstrate software tools for reliability data analysis
Organization:
Reliability definition
Characteristics of reliability data
Statistical analysis of censored reliability data
Objectives:
To understand the Weibull distribution
To be able to use the Weibull plot for failure time analysis and diagnosis (the linearization behind the plot is sketched below)
To be able to use software to do data analysis
Organization:
Distribution model
Parameter estimation
Regression analysis
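For reference, the Weibull plot mentioned in the objectives rests on linearizing the Weibull CDF (a standard result):

    F(t) = 1 - \exp\bigl[-(t/\eta)^{\beta}\bigr]
    \quad\Longrightarrow\quad
    \ln\bigl[-\ln(1 - F(t))\bigr] = \beta\,(\ln t - \ln \eta),

so plotting ln[-ln(1 - F̂)] against ln t gives approximately a straight line with slope β when the Weibull model fits, which is what makes the plot useful for diagnosis.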
This document summarizes an ASQ webinar on reliably solving intractable problems. It outlines 8 principles for producing breakthroughs: 1) use divergent problem solving, 2) generate paradigm shifts, 3) agree on success criteria, 4) start with a strong commitment, 5) separate creative and analytical thinking, 6) involve stakeholders, 7) use consensus decision making, and 8) anticipate issues. It then describes a 13-step conversation process to resolve obstacles following these principles in 4 phases: establishing foundations, envisioning the future, establishing solutions, and ensuring support. The document provides tips for facilitating each step of the process.
With the increase in global competition, more and more customers consider reliability one of their primary deciding factors when purchasing new products. Several companies have invested in developing their own Design for Reliability (DFR) processes and roadmaps in order to meet those requirements and compete in today's market. This presentation describes the DFR roadmap and how to use it effectively to ensure the success of the reliability program, focusing on the key DFR elements.
Improved QFN Reliability Process by John Ganjei. John will talk about the improvements in the reliability process in this webinar.
It is free to attend - see www.reliabilitycalendar.org/webinars/ to register for upcoming events.
Data Acquisition: A Key Challenge for Quality and Reliability Improvement (ASQ Reliability Division)
The document discusses challenges with data acquisition for quality and reliability analysis. It presents a 5-step process called DEUPM for targeted data acquisition: 1) Define the problem, 2) Evaluate existing data, 3) Understand data acquisition opportunities and limitations, 4) Plan data acquisition and analysis, 5) Monitor, clean data, analyze and validate. An example of using this process to validate the reliability of a new washing machine design within 6 months is provided to illustrate the steps. The process aims to ensure data acquisition is disciplined and sufficient to answer reliability questions.
The document discusses applying Failure Mode and Effects Criticality Analysis (FMECA) to software engineering. It describes FMECA as a structured method to anticipate failures and their causes. The document outlines how FMECA was originally used in industries like aerospace and nuclear engineering but has expanded to other domains. It then discusses applying FMECA at different levels of a software project, from requirements to architecture to design to code. The document advocates an "enlightened approach" to using FMECA across all representations and abstractions of software.
ASTR 2013 tutorial by Mike Silverman of Ops A La Carte: 40 years of HALT, wha... (ASQ Reliability Division)
This document summarizes a presentation titled "40 Years of HALT: What Have We Learned?" by Mike Silverman. The presentation discusses the evolution of Highly Accelerated Life Testing (HALT) over the past 40 years, including what HALT is and is not, basic HALT methodology, links between HALT and design for reliability, new advances in HALT, current adoption rates of HALT, and the future of HALT. The presentation aims to share lessons learned from thousands of engineers who have used HALT techniques over the past 40 years to improve product design and reliability.
Comparing Individual Reliability to Population Reliability for Aging Systems (ASQ Reliability Division)
This document discusses the differences between individual reliability (IndRel) and population reliability (PopRel) for aging systems. IndRel provides the reliability of a single system at a given age, while PopRel provides the probability that a randomly selected system from a population will work at a given time, taking into account the age distribution of systems in the population. The document outlines methods to estimate both IndRel and PopRel, including using Weibull and probit models on failure data. Examples are provided to demonstrate estimating IndRel and PopRel for projects using different statistical models and failure data.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, how to leverage this data for RAG and other GenAI use cases, and finally how to chart your course to production.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Essentials of Automations: Exploring Attributes & Automation Parameters (Safe Software)
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can lead to more users being counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can cause unnecessary spending, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics are covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can implement immediately
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk we will first analyze scaling approaches and then select the proper ones for our system.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
2. ASQ Reliability Division
English Webinar Series

One of the monthly webinars on topics of interest to reliability engineers.

To view recorded webinars (available to ASQ Reliability Division members only) visit asq.org/reliability

To sign up for the free live webinars (open to anyone) visit reliabilitycalendar.org and select English Webinars to find links to register for upcoming events:
http://reliabilitycalendar.org/The_Reliability_Calendar/Webinars_‐_English/Webinars_‐_English.html
3. General Bayesian Methods for
Typical Reliability Data Analysis
Ming Li
GE Global Research
Joint work with William Q. Meeker of Iowa State University.
Webinar
ASQ Reliability Division
June 14, 2012
4. Outline
• Traditional Reliability Framework
Problems / Concepts / Methods
• Bayesian Reliability Framework
Prior knowledge / Concepts / Methods
• Bayesian Reliability Examples
Weibull distribution
Accelerated life test
Repeated measure degradation
• Common Mistakes and Pitfalls
• Conclusions
5. Traditional Reliability Framework
Reliability Problems
Statistical Concepts
Computational Methods
6. Reliability Problems
Life of a product
Degradation of performance
Repairable system
Warranty
Prognostics
Service availability or guarantee
7. Statistical Concepts
Data
Field data, Lab data, simulated data
Failure modes, system or component level data
Exact, left, right, interval and window censored data
Model
Life distribution estimation (Weibull, Lognormal …)
Accelerated testing planning and analysis
Degradation modeling (physics + statistics)
Poisson process for repairable system
Non-parametric statistical models (e.g., Kaplan-Meier)
8. Computational Methods
To calculate point estimates and confidence
intervals for statistical uncertainty:
Maximum likelihood method
Bootstrap re-sampling method
Nonparametric method
The above methods are purely data driven; prior knowledge is not used.
Simulation-based Bayesian method
Can integrate prior knowledge or information
Solves certain problems that are difficult to handle with
other methods (i.e., computational advantages)
9. Bayesian Reliability Framework
Why Not the Bayesian Method?
Prior Knowledge
Concept Illustration
Implementation through BUGS
10. Why Not the Bayesian Method?
• No user-friendly Bayesian computer programs
Engineers do not want to write their own MCMC code
There are many non-Bayesian programs:
ReliaSoft’s Weibull++, ALTA, etc.
JMP, Minitab, etc.
Many companies have site licenses
• Need justification of prior knowledge
Sources of prior knowledge
Management approval
Impact of biased or bad priors
11. Prior Knowledge
• Physics of failure mechanism
Activation energy is around 0.2 eV
• Previous empirical experience
30 years of experience with a Weibull shape
parameter around 2.5
• Sensitivity analysis and scenario test
What if the activation energy instead lies in the
range (0.4, 0.6)?
The Bayesian method combines data and prior knowledge;
the prior has a big impact when data are limited.
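A worked example of turning such knowledge into a prior (my arithmetic, not from the slides, but the recipe reproduces the informative priors used later in this deck): center a normal prior on the midpoint of the 99% engineering range (a, b) and set the precision accordingly,

$$\mu = \frac{a+b}{2}, \qquad \sigma = \frac{b-a}{2\,z_{0.995}}, \qquad z_{0.995} = 2.5758, \qquad \tau = 1/\sigma^2 .$$

For a slope believed to lie in (0.5, 0.8) with 99% probability: mu = 0.65, sigma = 0.3/5.1517 = 0.0582, tau = 294.9, which matches the later prior b1 ~ dnorm(0.65, 294.8843). (Recall that dnorm in OpenBUGS takes a precision, not a variance.)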
12. Concept Illustration

[Flow diagram] The Model for the Data yields the Likelihood of the observed data; Bayes' Theorem combines the Likelihood with the Prior Information to give the Posterior, which is the basis for all Inference.
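In symbols (the standard form of Bayes' theorem, added here for reference):

$$\pi(\theta \mid \text{data}) = \frac{L(\theta;\text{data})\,\pi(\theta)}{\int L(\theta';\text{data})\,\pi(\theta')\,d\theta'} \;\propto\; L(\theta;\text{data})\,\pi(\theta).$$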
13. Implementation Through BUGS
http://www.openbugs.info/w/

An OpenBUGS program has three parts:

# (1) model specification
model {
...
}
# (2) data input
list(...)
# (3) initial values (one list per chain)
list(...)
list(...)
list(...)

Features
Easy to download and install
Simple user interface
Detailed manual with a lot of examples
Many built-in distributions and functions:
Bernoulli, Binomial, Poisson, ...
Beta, Chi-square, Normal, Gamma, Weibull, Logistic, ...
Multinomial, Dirichlet, Multivariate Normal, Wishart, ...

Steps
Define the statistical problem clearly
Prepare the input data accordingly
Set up reasonable initial values
If it converges:
check the history plot
check the density plot
check the BGR diagnostic plot
look at the posterior summary statistics
(mean, median, and credible intervals)
extract the draws from each MCMC step
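To make the three-part layout concrete, here is a minimal, self-contained OpenBUGS program in that format. It is a hypothetical sketch: the failure times and the vague priors are illustrative, not from the webinar.

# (1) model specification
model {
  v ~ dgamma(1, 1)             # vague prior on the Weibull shape
  lamda ~ dgamma(0.001, 0.001) # vague prior on the BUGS scale parameter
  for (i in 1:10) {
    t[i] ~ dweib(v, lamda)     # F(t) = 1 - exp(-lamda * t^v)
  }
}
# (2) data input (hypothetical failure times, hours)
list(t = c(120, 340, 560, 610, 770, 810, 950, 1100, 1300, 1480))
# (3) initial values
list(v = 1, lamda = 0.001)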
14. Bayesian Reliability Examples
Weibull Distribution
Accelerated Life Test
Repeated Measure Degradation
15. Weibull Distribution for the Bearing-Cage Field Data
Data from Meeker and Escobar Example 8.16.
Data Summary
• 6 failures
• 1697 right censored
• Different censoring time
• Heavy censoring
• Weibull distribution
• Data driven MLE
Bayesian implementation
• Prior on B01 (i.e., the t0.01 quantile) and on the Weibull shape parameter
• Interested in estimating B10
16. Weibull Distribution

Parameterization of the ME book (scale $\eta$, shape $\beta$, with $\sigma = 1/\beta$):

$$F(t) = 1 - \exp\!\left[-\left(\frac{t}{\eta}\right)^{1/\sigma}\right], \qquad t_p = \eta\,[-\log(1-p)]^{\sigma}$$

Parameterization used in WinBUGS/OpenBUGS:

$$T \sim \mathrm{dweib}(v,\lambda): \quad f(x) = \lambda\, v\, x^{v-1} \exp(-\lambda x^{v}), \qquad v = \frac{1}{\sigma}, \quad \lambda = \eta^{-v}$$

so that the quantiles agree:

$$t_p = \left[\frac{-\log(1-p)}{\lambda}\right]^{1/v} = \eta\,[-\log(1-p)]^{\sigma}$$
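The conversion from the engineering quantity B01 = t_{0.01} to the BUGS parameter lambda, which the next slide's code performs, follows by solving F(t_{0.01}) = 0.01 (my derivation; the constant matches the code):

$$1 - \exp(-\lambda\, t_{0.01}^{\,v}) = 0.01 \;\Longrightarrow\; \lambda = \frac{-\log(0.99)}{t_{0.01}^{\,v}} = 0.01005034 \times B01^{-v}.$$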
17. OpenBUGS Implementation

model {
  # Priors:
  # log(t0.01) ~ unif(log(100), log(5000)) = unif(4.6051, 8.5172)
  log.B01 ~ dunif(4.6051,8.5172)
  B01 <- exp(log.B01)
  # log(sigma) ~ normal(mean = -1.151, sd = 0.178); dnorm takes a
  # precision, 1/0.178^2 = 31.562. Informative prior: 99% of the
  # probability of sigma falls between 0.2 and 0.5.
  log.sigma ~ dnorm(-1.151,31.562)
  sigma <- exp(log.sigma)
  v <- 1/sigma
  # lamda = -log(0.99) * B01^(-v), so that t0.01 = B01
  lamda <- pow(B01,-v)*0.01005034

  # 6 exact failures
  for (iii in 1:6){
    x.exact[iii] ~ dweib(v,lamda)
  }

  # 1697 right-censored observations, grouped into 19 censoring times;
  # each group adds weight * log-survival through the zero-trick
  for (jjj in 1:19){
    dummy[jjj] <- 0
    dummy[jjj] ~ dloglik(logLike[jjj])
    logLike[jjj] <- weight[jjj]*(-lamda*pow(lower[jjj],v))
  }
}
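For completeness, parts (2) and (3) would look like the following. The values below are hypothetical placeholders with the right shapes (6 exact failure times; 19 censoring-time groups whose weights total 1697), not the actual bearing-cage data from the book:

# (2) data input -- hypothetical values, shapes only
list(x.exact = c(230, 330, 420, 990, 1010, 1510),
     weight  = c(289, 150, 125, 110, 105, 100, 110, 115, 120, 125,
                 120, 95, 50, 40, 25, 10, 5, 2, 1),
     lower   = c(50, 150, 250, 350, 450, 550, 650, 750, 850, 950,
                 1050, 1150, 1250, 1350, 1450, 1550, 1650, 1750, 1850))
# (3) initial values
list(log.B01 = 7, log.sigma = -1.15)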
18. Accelerated Life Test for Device-A
Data from Meeker and Escobar Example 19.2.
Data Summary
• Four test temperatures (10C, 40C, 60C, and 80C):
three accelerated levels plus the usage level
• Usage level 10C
• Arrhenius model for temperature
• Log-normal life distribution

$$Y = \log(\text{Hours}) \sim N(\mu_K, \sigma^2), \qquad \mu_K = \beta_0 + \beta_1\,\frac{11605}{K}$$

where K is the absolute temperature in kelvin; 11605 is 1/k_B with Boltzmann's constant k_B in eV/K, so the slope $\beta_1$ is the activation energy in eV.
19. Re-parameterization
• Replace the intercept $\beta_0$ by B01 at 40C (the 1% quantile of log-life at 40C)
• This breaks the strong correlation between the slope and the intercept

$$B01.40 = \beta_0 + \beta_1\,\frac{11605}{273+40} + z_{0.01}\,\sigma
\quad\Longleftrightarrow\quad
\beta_0 = B01.40 - \beta_1\,\frac{11605}{273+40} - z_{0.01}\,\sigma$$

• Use an informative prior for $\beta_1$ such that 99% of the probability lies between 0.5 and 0.8
• Interested in the B10 life at the usage temperature 10C (i.e., 283 K)

$$\mu_K = B01.40 - z_{0.01}\,\sigma + \beta_1\left(\frac{11605}{K} - \frac{11605}{273+40}\right)$$
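The constants appearing in the next slide's code follow directly from this re-parameterization (my arithmetic):

$$\frac{11605}{273+40} = 37.076677, \qquad z_{0.01} = -2.326348, \qquad z_{0.10} = -1.281552,$$

so b0 <- B01.40 + 2.326348*sigma - b1*37.076677 recovers the original intercept, and B10.10 <- mu.10 - 1.281552*sigma is the 10% quantile of log-life at 10C.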
20. OpenBUGS Implementation

model {
  # Priors
  B01.40 ~ dgamma(0.001,0.0001)
  b1 ~ dnorm(0.65,294.8843)   ## informative prior: 99% probability between 0.5 and 0.8
  tau ~ dgamma(0.001,0.0001)
  sigma <- 1/sqrt(tau)

  # Recover the original intercept and the quantity of interest
  b0 <- B01.40 + 2.326348*sigma - b1*37.076677
  B10.10 <- mu.10 - 1.281552*sigma   # B10 life at the 10C usage temperature

  ### For Temp=10C, i.e. 11605/(273+10) = 41.007067
  ### All 30 observations are censored.
  ### 11605/(273+10) - 11605/(273+40) = 3.93039
  mu.10 <- B01.40 + 2.326348*sigma + b1*3.93039
  for (i in 1:30){
    dummy.10[i] <- 0
    dummy.10[i] ~ dloglik(logLike.10[i])
    logLike.10[i] <- log( 1-phi((8.517193-mu.10)*sqrt(tau)) )  # log-survival at log(5000)
  }

  ### For Temp=40C, i.e. 11605/(273+40) = 37.076677
  ### 90 censored observations, and 10 exact observations
  ### 11605/(273+40) - 11605/(273+40) = 0
  mu.40 <- B01.40 + 2.326348*sigma
  for (i in 1:90){
    dummy.40[i] <- 0
    dummy.40[i] ~ dloglik(logLike.40[i])
    logLike.40[i] <- log( 1-phi((8.517193-mu.40)*sqrt(tau)) )
  }
  for (j in 1:10){
    Y.log.40[j] ~ dnorm(mu.40,tau)
  }

  ### For Temp=60C, i.e. 11605/(273+60) = 34.849850
  ### 11 censored observations, and 9 exact observations
  ### 11605/(273+60) - 11605/(273+40) = -2.226827
  mu.60 <- B01.40 + 2.326348*sigma - b1*2.226827
  for (i in 1:11){
    dummy.60[i] <- 0
    dummy.60[i] ~ dloglik(logLike.60[i])
    logLike.60[i] <- log( 1-phi((8.517193-mu.60)*sqrt(tau)) )
  }
  for (j in 1:9){
    Y.log.60[j] ~ dnorm(mu.60,tau)
  }

  ### For Temp=80C, i.e. 11605/(273+80) = 32.875354
  ### 1 censored observation, and 14 exact observations (counts match the loops below)
  ### 11605/(273+80) - 11605/(273+40) = -4.201323
  mu.80 <- B01.40 + 2.326348*sigma - b1*4.201323
  for (i in 1:1){
    dummy.80[i] <- 0
    dummy.80[i] ~ dloglik(logLike.80[i])
    logLike.80[i] <- log( 1-phi((8.517193-mu.80)*sqrt(tau)) )
  }
  for (j in 1:14){
    Y.log.80[j] ~ dnorm(mu.80,tau)
  }
}
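When this model is run, the natural nodes to monitor are b1, sigma, and above all B10.10: its posterior summary (mean, median, credible interval) answers the stated question of the B10 life at the 10C usage temperature.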
21. Repeated Measure for Device B Degradation
Data from Meeker and Escobar Example 21.1.
Data Summary
• 3 levels of temperature
• Usage temperature 80C
• ~ 10 devices per temp.
• Degradation measured at 125-hour intervals
• Mixed effect model
• Nonlinear path
• Normal distribution for
residuals
22. Model Details

$$y_{ij} = D_{ij} + \epsilon_{ij}, \qquad \epsilon_{ij} \sim N(0, \sigma_\epsilon^2)$$

i = 1, ..., n: index for device; j = 1, ..., m_i: index of the time of observation for each device.

$$D_{ij} = D(t_{ij}; \text{temp}) = D_{\infty,i}\left[1 - \exp\bigl(-R_i(195)\, AF(\text{temp})\, t_{ij}\bigr)\right]$$

$$AF(\text{temp}) = \exp\!\left[E_a\left(\frac{11605}{195+273} - \frac{11605}{\text{temp}+273}\right)\right] \quad \text{(stable parameterization)}$$

$D_{\infty,i}$: the asymptote for each device.
$R_i(195)$: the reaction rate at 195C for each device.

$$\begin{pmatrix}\beta_{1,i} \\ \beta_{2,i}\end{pmatrix} \sim \mathrm{MVN}(\text{mean.}\beta,\ \text{prec.}\beta), \qquad
\beta_{1,i} = \log R_i(195), \quad \beta_{2,i} = \log(-D_{\infty,i}), \quad \beta_3 = E_a$$

(The degradation is downward, so $D_{\infty,i} < 0$ and the code works with $\log(-D_{\infty,i})$.)

mean.β: 2x1 mean vector of the bivariate normal.
prec.β: 2x2 precision matrix of the bivariate normal.
sigma = inverse(prec.β): variance-covariance matrix for the bivariate normal.
23. OpenBUGS Implementation

model {
  # Per-device random effects: 34 devices in total (7 + 12 + 15)
  for(iii in 1:34){
    bbb[iii,1:2] ~ dmnorm(mean.bbb[1:2],prec.bbb[1:2,1:2])
    Dinf[iii] <- -exp(bbb[iii,2])   # asymptote for device iii
    R195[iii] <- exp(bbb[iii,1])    # reaction rate at 195C for device iii
  }

  sigma[1:2,1:2] <- inverse(prec.bbb[1:2,1:2])
  mean.bbb[1:2] ~ dmnorm(M[1:2], A[1:2,1:2])
  prec.bbb[1:2,1:2] ~ dwish(B[1:2,1:2], 2)

  b3 ~ dnorm(0.7,663.5)       # informative prior on Ea (beta3)
  tau ~ dgamma(0.001,0.001)
  sigma.error <- 1/sqrt(tau)

  #### Data and Model for Temp=150C ###
  #### 11605/(195+273) - 11605/(150+273) = -2.637980
  for(iii in 1:7){
    for(jjj in 1:33){
      mu.150[(iii-1)*33+jjj] <- Dinf[iii]*(1-exp(-R195[iii]
        *exp(-b3*2.637980)*data[(iii-1)*33+jjj,2]))
      data[(iii-1)*33+jjj,1] ~ dnorm(mu.150[(iii-1)*33+jjj],tau)
    }
  }

  #### Data and Model for Temp=195C ###
  #### 11605/(195+273) - 11605/(195+273) = 0
  #### Index shift for data is: 33*7 = 231
  #### Index shift for group is: 7
  for(iii in 1:12){
    for(jjj in 1:17){
      mu.195[(iii-1)*17+jjj] <- Dinf[iii+7]
        *(1-exp(-R195[iii+7]*data[231+(iii-1)*17+jjj,2]))
      data[231+(iii-1)*17+jjj,1] ~ dnorm(mu.195[(iii-1)*17+jjj],tau)
    }
  }

  #### Data and Model for Temp=237C ###
  #### 11605/(195+273) - 11605/(237+273) = 2.042107
  #### Index shift for data is: 33*7 + 12*17 = 435
  #### Index shift for group is: 7+12 = 19
  for(iii in 1:15){
    for(jjj in 1:9){
      mu.237[(iii-1)*9+jjj] <- Dinf[iii+19]
        *(1-exp(-R195[iii+19]*exp(b3*2.042107)
        *data[435+(iii-1)*9+jjj,2]))
      data[435+(iii-1)*9+jjj,1] ~ dnorm(mu.237[(iii-1)*9+jjj],tau)
    }
  }
}

Priors (M, A, and B above are supplied with the data):

$$\text{mean.}\beta \sim \mathrm{dmnorm}\!\left(\begin{pmatrix}0\\0\end{pmatrix},
\begin{pmatrix}10^{-6} & 0\\ 0 & 10^{-6}\end{pmatrix}\right), \qquad
\text{prec.}\beta \sim \mathrm{dwish}\!\left(\begin{pmatrix}10^{3} & 0\\ 0 & 10^{3}\end{pmatrix},\ 2\right)$$

$$\tau \sim \mathrm{dgamma}(0.001,\,0.001), \qquad \beta_3 \sim \mathrm{dnorm}(0.7,\,663.5)$$

Informative prior: put 99% of the probability between 0.6 and 0.8 for $\beta_3$.
24. Cautions and Pitfalls
• Be aware of the effect of prior selection
• Do a sensitivity analysis and compare with
non-informative priors
• Inappropriate priors lead to biased results
• Understand the assumptions
25. Conclusions
• Reliability engineers have prior knowledge about
the model parameters
• Bayesian analysis provides a formal way to
incorporate prior knowledge
• OpenBUGS/WinBUGS provides a user-friendly
tool for Bayesian reliability analysis
• Most reliability models can be implemented
through OpenBUGS/WinBUGS
26. Thank you!
27. Zero-trick in OpenBUGS

Reason for quick convergence: the likelihood contribution of a right-censored observation is completely determined by the censoring time, and the OpenBUGS zero-trick adds that contribution directly, without creating a latent node for the unobserved failure time.

For a Weibull observation right censored at time T, the likelihood contribution is the survival probability:

$$\int_T^{\infty} f(x)\,dx = 1 - \int_0^{T} f(x)\,dx
= \exp\!\left[-\left(\frac{T}{\eta}\right)^{1/\sigma}\right] \ \text{(ME book parameterization)}
= \exp(-\lambda T^{v}) \ \text{(OpenBUGS parameterization)}$$
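The generic pattern, as used in the models above (a minimal sketch; logLike[i] must be the logarithm of the desired likelihood contribution, here the Weibull log-survival at censoring time T[i]):

# Zero-trick: dloglik makes observation i contribute exp(logLike[i]) to the likelihood
for (i in 1:N) {
  dummy[i] <- 0
  dummy[i] ~ dloglik(logLike[i])
  logLike[i] <- -lamda * pow(T[i], v)   # log of exp(-lamda * T^v)
}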
28. Traditional method in OpenBUGS
• C( , ): the built-in censoring construct in
OpenBUGS
• Very slow to converge under heavy censoring!
• Reason for slow convergence: each censored
data point is treated as a random node in
OpenBUGS, and a stochastic MCMC chain is
established for each random node.
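For contrast, a sketch of the built-in approach for the same Weibull right censoring (assuming the censored x[i] are entered as NA in the data, with their censoring times in cens[i]); each NA becomes a sampled latent node, which is exactly what slows the chain under heavy censoring:

for (i in 1:N) {
  # x[i] = NA for a censored unit; OpenBUGS then samples the latent
  # failure time from the Weibull truncated to (cens[i], infinity)
  x[i] ~ dweib(v, lamda) C(cens[i], )
}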