This document discusses risk analysis approaches for maintenance decision making at Afam electricity generation station in Nigeria. It describes using Weibull analysis on a small dataset from gas turbine 17 to forecast failure risks and prioritize maintenance. The study establishes that risk analysis is an effective tool for maintenance planning when the results are thoroughly analyzed and interpreted before decisions are made. It also discusses other risk analysis methods and factors like environmental stresses that influence equipment reliability.
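The Weibull-based failure forecasting described here can be sketched in a few lines. The failure-time values below are hypothetical stand-ins, not the actual GT17 data, and scipy's `weibull_min` is one common way to fit the distribution:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical time-to-failure data (hours) for a single turbine unit;
# the actual Afam GT17 dataset is not reproduced here.
failures = np.array([410.0, 820.0, 1300.0, 1750.0, 2600.0, 3100.0])

# Fit a two-parameter Weibull (location fixed at zero).
shape, loc, scale = weibull_min.fit(failures, floc=0)

# Probability that a unit fails before t hours -> informs maintenance priority.
t = 1000.0
p_fail = weibull_min.cdf(t, shape, loc=loc, scale=scale)
print(f"beta={shape:.2f}, eta={scale:.0f} h, P(failure before {t:.0f} h)={p_fail:.2f}")
```

With so few points the parameter estimates carry wide uncertainty, which is why the study stresses thorough interpretation of results before acting on them.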
Modeling results from Health Sciences data - Judson Chase
This document discusses strategies for using modeling and predictive analytics to reduce costs and improve efficiency in clinical trials. It provides examples of how modeling site activation and enrollment can help minimize delays and budget overruns. Risk-based monitoring is presented as another way to transition to more efficient monitoring. Savings projections are estimated for different therapeutic areas based on reducing clinical site monitoring costs from 30% to 3-15% of total costs. Overall, the document argues that predictive modeling approaches can help lower total costs of clinical trials through better planning, execution and oversight.
Measurement and Evaluation of Reliability, Availability and Maintainability o... - IOSR Journals
The growing complexity of equipment and systems often leads to failures, and as a consequence the aspects of reliability, maintainability and availability have come to the forefront. The failure of machinery and equipment disrupts production through loss of system availability and also increases the cost of maintenance. The present study deals with the determination of reliability and availability aspects of one of the significant constituents of a railway diesel locomotive engine. To assess the availability performance of these components, a broad set of studies was carried out to gather accurate information at the level of detail suitable for the availability analysis target. The reliability analysis is performed using the Weibull distribution, and the data plots and failure-rate information yield results that railway locomotive operators can use in the near future to reduce unexpected breakdowns and to enhance the reliability and availability of the engine. In this work, ABC analysis has been used for the maintenance of the spare-parts inventory, with the power pack assemblies and engine system as the focus of the reliability, maintainability and availability study.
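The ABC spare-parts analysis mentioned above follows a standard value-based classification. A minimal sketch, with illustrative part names and annual usage values rather than the study's data:

```python
# Minimal ABC classification of spare parts by annual usage value.
# Part names and values are illustrative, not from the study.
parts = {
    "fuel injector": 48000.0,
    "piston ring set": 30000.0,
    "cylinder head gasket": 9000.0,
    "oil filter": 6000.0,
    "gauge glass": 4000.0,
    "fasteners": 3000.0,
}

total = sum(parts.values())
cumulative = 0.0
classes = {}
for name, value in sorted(parts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += value
    share = cumulative / total
    # Common cut-offs: class A covers roughly the top 70% of value,
    # B the next 20%, and C the remainder.
    classes[name] = "A" if share <= 0.70 else ("B" if share <= 0.90 else "C")

print(classes)
```

Class A items then get the tightest stocking and review policies, C items the loosest.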
Condition-Based Maintenance Basics by Carl Byington - PHM Design, LLC
Condition-based maintenance (CBM or CBM+) is a strategy of performing maintenance on a machine or system only when there is objective evidence of need or impending failure. CBM is enabled by the evolution of key technologies, including improvements in sensors, microprocessors, digital signal processing, simulation modeling, multisensor data fusion, reliability engineering, Internet of Things (IoT) connectivity, data warehousing, cloud computing, machine learning (ML), artificial intelligence (AI), and predictive analytics. CBM involves monitoring the health or performance of a component or system and performing maintenance based on that inferred health and, in some cases, predicted remaining useful life (RUL). This predictive maintenance philosophy contrasts with earlier ideologies such as corrective maintenance, in which action is taken after a component or system fails, and preventive maintenance, which is based on event or time milestones. Each involves a cost tradeoff.
Carl Byington with PHM Design, LLC reviews some of the elements of CBM.
#phmdesign
https://phmdesign.com
1) Condition monitoring of transmission and distribution networks is important to reduce outage costs and ensure reliable electricity delivery. It helps identify equipment failures early to plan maintenance and avoid unplanned outages.
2) When selecting a condition monitoring method, utilities must balance costs of the monitoring technique against costs of missed failures and false alarms. Continuous online monitoring detects more failures but yields more false alarms than periodic monitoring.
3) A full asset management process involves setting performance standards, assessing asset condition and risks, prioritizing maintenance based on condition and risk levels, and planning work accordingly. This helps utilities optimize maintenance planning and budgets.
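The cost balance in point 2 can be made concrete with a toy expected-cost comparison; every figure below is an assumption for illustration:

```python
# Illustrative expected-cost comparison of continuous vs periodic condition
# monitoring over one year. All figures are assumptions for demonstration.
def annual_cost(monitoring_cost, p_missed_failure, outage_cost,
                false_alarms_per_year, alarm_investigation_cost):
    # Expected total: the monitoring technique itself, plus the expected
    # cost of missed failures, plus the cost of chasing false alarms.
    return (monitoring_cost
            + p_missed_failure * outage_cost
            + false_alarms_per_year * alarm_investigation_cost)

continuous = annual_cost(50_000, 0.02, 500_000, 12, 2_000)  # more alarms, fewer misses
periodic   = annual_cost(10_000, 0.10, 500_000,  2, 2_000)  # cheaper, more misses

print(f"continuous: {continuous:,.0f}  periodic: {periodic:,.0f}")
```

Depending on the outage cost and alarm rates assumed, either option can come out ahead, which is exactly the tradeoff the summary describes.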
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
CBM Cost Benefit Analysis by Carl Byington - PHM Design, LLC
Carl Byington with PHM Design, LLC reviews some of the elements of CBM cost-benefit analysis. The analysis considers implementation and non-recurring engineering costs, as well as deferred and eliminated scheduled maintenance, reduced unscheduled maintenance, and operational cost-savings drivers. Specific examples from aircraft, ground vehicle, and industrial applications are provided.
#phmdesign
https://phmdesign.com
CBM Requirements by Carl Byington - PHM Design, LLC
Carl Byington with PHM Design, LLC reviews:
Conceptual functional architecture:
Describes functions and functional interactions
Traces functions to capabilities or services desired in the COO
Conceptual physical architecture:
Allocates and describes the conceptual implementation of functions
Traces implementation to function
Activity Flows:
Identifies primary paths through the principal use-cases to meet the goals and interests of the stakeholders
Trade studies identify the preferred path, which in turn provides context for requirements derivation and operational thread development.
#phmdesign
https://phmdesign.com
Prognostic Health Management (PHM) uses health monitoring and prognostics to predict product failures by assessing degradation from normal operating conditions. Traditional reliability predictions are often inaccurate because they ignore actual usage conditions, whereas PHM is more suitable since it accounts for them. Research is being conducted to improve PHM models, sensors, communication and decision making so that reliability predictions become more realistic. PHM is expected to become a cost-effective solution for predicting electronics reliability, given increasing electronics usage and demand for more reliable products.
This document provides an overview of using neural network techniques in power systems. It discusses how neural networks have been applied to areas like fault diagnosis, security assessment, load forecasting, economic dispatch, and harmonic analysis. The number of published papers in these areas has grown significantly from 1990-1996 to 2000-2005, particularly in load forecasting, fault diagnosis/location, economic dispatch, security assessment, and transient stability. The document then reviews in more detail how neural networks have been applied to load forecasting, fault diagnosis/location, and economic dispatch problems in the power industry.
This document discusses using statistical analysis of outage data to plan asset maintenance in electric power distribution networks. It describes collecting outage data from distribution components like lines, cables, breakers and transformers. The data is processed and analyzed using statistical tests to identify critical components affecting system reliability. The results show maintenance decisions should be based on analyzed outage data to identify weak components for targeted maintenance. This improves reliability and reduces costs compared to preventative or reactive maintenance.
This document proposes an extended risk-based monitoring model for clinical trials that incorporates on-demand, query-driven source data verification. The model aims to make monitoring more efficient by focusing source data verification efforts on resolving queries rather than routine checking. Simulation results suggest the model could reduce monitoring costs by 3-35% depending on study size and therapeutic area. Key aspects of the proposed model include distinguishing between data point and site-level monitoring, incorporating data validation and statistical surveillance earlier in the process, and prioritizing non-source data verification activities at higher risk sites over increased source data checking.
The document discusses several issues related to implementing condition-based maintenance (CBM) and prognostics and health management (PHM) programs, including:
1) Performing a thorough risk assessment using techniques like FMECA is important to understand how a system can fail and inform sensor placement and diagnostic rule development.
2) Model-based failure analysis considering failure dependencies is better than spreadsheet-based FMECA for knowledge retention and risk assessment.
3) Clear definitions of failure concepts and taxonomies are needed to improve understanding of risk assessments.
4) Diagnostic rules and sensor selection should be based on dependencies between failure modes revealed through risk assessments.
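Risk assessments of the FMECA kind referenced above typically rank failure modes by a risk priority number (severity × occurrence × detection). A minimal sketch with hypothetical failure modes and ratings:

```python
# Risk Priority Number (RPN) ranking as used in FMECA-style assessments.
# Failure modes and their 1-10 ratings below are hypothetical.
modes = [
    # (failure mode, severity, occurrence, detection)
    ("bearing wear", 7, 6, 4),
    ("seal leak",    5, 7, 3),
    ("sensor drift", 4, 5, 8),
]

# Higher RPN = higher priority for sensor placement and diagnostic rules.
ranked = sorted(modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name:12s} RPN={s * o * d}")
```

Note the document's point 2 still applies: a flat RPN table captures no failure dependencies, which is why model-based failure analysis is preferred for knowledge retention.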
Concepts on Models to Measure Organizational Readiness for Disaster - David Merrick II
The document discusses concepts in measuring organizational readiness for disasters. It begins by introducing the Center for Disaster Risk Policy at Florida State University and its focus areas. It then reviews past efforts to measure readiness like the Simpson Preparedness Study and limitations of standards like NFPA 1600. A new concept for measuring readiness is proposed that uses Readiness Dimensions specific to hazards, locations, organizational units, and probability to calculate Hazard Readiness, Category Readiness, and Total Readiness scores. The model allows organizations to identify priority areas and formulate preparedness scenarios. Future research plans involve building and testing this model concept.
This document discusses approaches to visualizing uncertainties for decision makers using an operational decision support system (DSS) called RODOS. In the early phase of an emergency when source term and weather data are uncertain, the DSS can use ensembles to show probability bands of potential dose exceedance. In later phases when countermeasures are considered, the DSS can use multi-criteria decision analysis and sensitivity analysis to help evaluate strategies while accounting for both quantitative and qualitative factors. Visualizing results as percentiles may help communicate uncertainties to decision makers.
CONDITION-BASED MAINTENANCE USING SENSOR ARRAYS AND TELEMATICS - ijmnct
The emergence of uniquely addressable embeddable devices has raised the bar on telematics capabilities. Though the technology itself is not new, its application has been quite limited until now. Sensor-based telematics technologies generate volumes of data that are orders of magnitude larger than what operators have dealt with previously. Real-time big-data computation capabilities have opened the floodgates for building predictive analytics into otherwise simple data-log systems, enabling real-time control and monitoring that can trigger preventive action when anomalies appear. Condition-based maintenance, usage-based insurance, smart metering and demand-based load generation are some of the predictive analytics use cases for telematics. This paper presents an approach to condition-based maintenance using real-time sensor monitoring, telematics and predictive data analytics.
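The real-time anomaly flagging such a pipeline relies on can be sketched with a rolling z-score detector; the window size and threshold below are assumptions:

```python
from collections import deque

def make_detector(window=20, z_threshold=3.0):
    """Flag a reading as anomalous if it deviates from the rolling mean
    by more than z_threshold rolling standard deviations."""
    buf = deque(maxlen=window)
    def detect(x):
        anomalous = False
        if len(buf) == buf.maxlen:  # only judge once the window is full
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = var ** 0.5
            anomalous = std > 0 and abs(x - mean) > z_threshold * std
        buf.append(x)
        return anomalous
    return detect

detect = make_detector()
# Steady sensor readings around 10, then a spike to 25.
readings = [10.0 + 0.1 * (i % 3) for i in range(25)] + [25.0]
flags = [detect(x) for x in readings]
print(flags[-1])  # the spike at the end is flagged
```

A production telematics system would run this per-sensor on a stream rather than a list, but the detection logic is the same shape.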
This document presents MedSafe, a framework for automatically classifying medical device recalls reported to the FDA as either computer-related or non-computer-related. MedSafe uses natural language processing and machine learning techniques to analyze the unstructured text descriptions of recalls. It was evaluated on over 16,000 recall records from 2007-2013 and achieved 97.3% accuracy in determining the total number of devices recalled and up to 95.8% accuracy in classifying recalls into computer-related vs. non-computer-related categories. The results show that computer-related recalls have increased over time and primarily involve devices in radiology, cardiology, and defibrillators.
This document summarizes business continuity risk assessment approaches from three major financial institutions:
1. Morgan Stanley prioritizes business risks into three categories based on the level of mitigation and response solutions in place. Risk analysis is performed at the regional and divisional levels.
2. Citigroup conducts a Threat and Vulnerability Assessment (TVA) to identify risks and their probabilities and impacts. A cross-functional team evaluates factors like security, facilities, technology, and human resources.
3. Credit Suisse First Boston (CSFB) prioritizes processes into three tiers based on their criticality. It considers both the business importance of locations and the threats associated with them to weigh risks.
This document discusses risk management in logistics and supply chains. It defines risk as the possibility of harm or loss, and risk management as reducing risk impacts. Effective risk management is important as companies increasingly rely on globalized, outsourced supply chains prone to disruptions. The risk management process involves identifying internal and external risks, analyzing them, developing treatment strategies like avoidance or mitigation, and continually monitoring risks and treatments. Supply chain risks can occur at suppliers, distribution, and internally. Ongoing risk management is needed to reduce costs and threats over time as risks evolve with regulatory environments.
Energy Management Systems: Recommendations for decision makers - Gimélec
The document discusses implementing an Energy Management System (EMS) to improve energy efficiency in buildings. An EMS allows continuous tracking of energy consumption and costs. It collects and analyzes consumption data to monitor performance, identify savings opportunities, and support decision making. The EMS helps reduce energy expenditures by 5-15% by raising stakeholder awareness and uniting teams around continual improvement. Implementing an EMS is an important part of an energy management program and improves management at all stages from diagnosis to operations.
This document discusses risk-based decision making and provides examples from automobile insurance. It explains that risk is calculated as the probability of failure multiplied by the consequence. Important factors in risk-based decisions are understanding your risk tolerance, estimating probability of failure, and evaluating consequences. The document provides guidance on various risk-based inspection standards and resources from organizations like ASME and API. It emphasizes using both qualitative and quantitative analysis, with common sense prevailing over strict mathematics.
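The stated formula, risk = probability of failure × consequence, in a tiny worked example with illustrative insurance-flavored numbers:

```python
# Risk = probability of failure x consequence, per the document's definition.
# Scenario names, probabilities and costs below are illustrative.
scenarios = {
    "minor fender bender":  (0.10, 2_000),     # (annual probability, cost in $)
    "total loss collision": (0.005, 30_000),
    "liability claim":      (0.002, 250_000),
}

risk = {name: p * c for name, (p, c) in scenarios.items()}
worst = max(risk, key=risk.get)
print(worst, risk[worst])
```

The exercise shows why consequence matters as much as likelihood: the rarest event carries the largest risk here, which is the kind of result the document says should be sanity-checked with common sense before acting.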
This document provides guidance for developing effective IT contingency plans. It outlines a seven-step contingency planning process that includes developing policy, conducting business impact analysis, identifying preventive controls and recovery strategies, developing and testing a contingency plan, and maintaining the plan. It also discusses considerations for contingency planning for different types of IT systems like desktops, servers, web sites, networks and mainframes. The goal is to help organizations establish thorough plans and procedures to enable quick and effective IT system recovery following a disruption.
FAULT DIAGNOSIS USING CLUSTERING. WHAT STATISTICAL TEST TO USE FOR HYPOTHESIS... - JaresJournal
Predictive maintenance and condition-based monitoring systems have gained significant prominence in recent years as a way to minimize the impact of machine downtime on production and its costs. Predictive maintenance uses concepts from data mining, statistics and machine learning to build models capable of early fault detection, fault diagnosis and prediction of time to failure. Fault diagnosis is one of the core areas, in which the actual failure mode of the machine is identified. In fluctuating environments such as manufacturing, clustering techniques have proved more reliable than supervised learning methods. One of the fundamental challenges of clustering is developing a test hypothesis and choosing an appropriate statistical test for it: most statistical analyses rest on underlying assumptions about the data that most real-world data cannot satisfy. This paper addresses that challenge by developing a test hypothesis for a fault diagnosis application using a clustering technique and performing a PERMANOVA test for hypothesis testing.
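A minimal sketch of a PERMANOVA-style permutation test of the kind the paper applies, written from the standard one-way formulation (pseudo-F computed from a distance matrix, p-value by label permutation); the two "fault clusters" below are synthetic:

```python
import numpy as np

def permanova(dist, labels, n_perm=999, seed=0):
    """One-way PERMANOVA pseudo-F and permutation p-value from a
    symmetric pairwise distance matrix."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n = len(labels)
    groups = np.unique(labels)
    k = len(groups)
    d2 = dist ** 2

    def pseudo_f(lab):
        ss_total = d2[np.triu_indices(n, 1)].sum() / n
        ss_within = 0.0
        for g in groups:
            idx = np.where(lab == g)[0]
            sub = d2[np.ix_(idx, idx)]
            ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
        ss_between = ss_total - ss_within
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    f_obs = pseudo_f(labels)
    # Permute labels to build the null distribution of pseudo-F.
    hits = sum(pseudo_f(rng.permutation(labels)) >= f_obs for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)

# Two well-separated hypothetical fault clusters in a 1-D feature space.
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
dist = np.abs(x[:, None] - x[None, :])
f, p = permanova(dist, ["a", "a", "a", "b", "b", "b"], n_perm=99)
print(f"pseudo-F={f:.1f}, p={p:.3f}")
```

Because the test works on distances and permutations rather than on normality assumptions, it suits the real-world data the abstract describes; with such a tiny sample the attainable p-value is limited by the number of distinct label permutations.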
Disaster Recovery planning within HIPAA framework - David Sweigert
This document provides guidance on developing contingency plans to address critical business processes that support HIPAA transactions. It defines key terms like contingency planning, disaster recovery planning, and continuity of operations plans. It discusses performing a risk analysis to identify critical processes and potential failures. Alternatives and workarounds are identified for different scenarios. The document provides guidance on developing a continuity of operations plan, including identifying triggers, response teams, procedures, training, and updating the plan over time. It emphasizes the importance of testing contingency plans periodically.
Reducing Product Development Risk with Reliability Engineering Methods - Wilde Analysis Ltd.
Overview of how reliability engineering methodology and software tools can help companies manage risk during product development and improve performance.
Presented at the Interplas'2011 exhibition and conference at the NEC on 27th October 2011 by Mike McCarthy.
This presentation looks at how ‘Reliability Engineering’ tools and methods are used to reduce risk in a typical product development lifecycle involving both plastic and metallic components. These tools range in complexity from simple approaches to managing product reliability data to the application of sophisticated simulation methods on large systems with complex duty cycles. Three examples are:
- Failure Mode Effects (and Criticality) Analysis (FMECA) to identify, manage and reuse information on what could go wrong with a design or manufacturing process and how to avoid it
- Design of Experiments for optimising performance through a structured and efficient study of parameters that affect the product or manufacturing process (e.g. injection moulding)
- Accelerated Life Testing to identify potential long term failure modes of products released to market within a shortened development time.
We will explore how gathering enough of the right kind of data and applying it in an intelligent way can reduce risk, not only in plastic product design and manufacture, but also in managing the associated supply chain and in the ‘Whole Life Management’ of products (including warranties). Furthermore, we will show how ‘sparse’ data gathered from previous or similar products, such as field/warranty reports, engineering testing data and supplier data sheets, as well as FEA, CFD and injection moulding/extrusion simulation, can inform and positively influence new product design processes from concept stage onwards.
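Accelerated life testing of the kind listed above commonly relies on an acceleration model such as Arrhenius temperature acceleration; a sketch with illustrative activation energy and temperatures, not values from the presentation:

```python
import math

# Arrhenius acceleration factor, a standard accelerated-life-testing model.
# Activation energy and temperatures below are illustrative assumptions.
K_BOLTZMANN = 8.617e-5  # eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    # AF = exp( (Ea/k) * (1/T_use - 1/T_stress) ), temperatures in kelvin.
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(ea_ev=0.7, t_use_c=40.0, t_stress_c=85.0)
# Hours of stress testing equivalent to one year (8760 h) of field use:
equivalent_test_hours = 8760.0 / af
print(f"AF={af:.1f}, test hours per field-year={equivalent_test_hours:.0f}")
```

This is how a shortened development window can still expose long-term thermal failure modes, at the cost of trusting the chosen activation energy.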
Power System Operational Resilience – What it means and where we stand - Power System Operation
The electric power system industry is becoming increasingly aware of the potential adverse impact of extreme events and of physical and cybersecurity attacks on power system operations. High Impact, Low Frequency (HILF) events and the increased frequency of system disturbances caused by natural phenomena (hurricanes, earthquakes, etc.) are shifting the focus of the energy industry from purely developing preventive measures towards providing and enhancing the resilience of the power system following these major disturbances.

In power system operations, resilience generally means the ability to respond quickly to and recover from a disruption. To enhance system resilience, various strategies can be considered, from the provision of sophisticated operation and control capabilities to preparation for effective and prudent operations.
Risk Analysis in Aviation: the Forensic Point of View - Antonio Musto
The document discusses combining Failure Mode, Effects and Criticality Analysis (FMECA) and Event Risk Classification (ERC) to continuously update risk analysis for aircraft insurance. FMECA is initially used to estimate economic risks based on design data, while lower-cost ERC of operational events identifies highest risks to focus FMECA updates. This allows accurate risk assessment with reduced data costs for insurers and airlines through cooperation.
Recommendations for Preventive Maintenance - A Machine Learning Project, by Pranov Mishra
A business problem, reducing time lost to machine breakdowns in the manufacturing unit, was solved by building a decision tree model with the CART algorithm. High-level details are below.

A thorough analysis was done to identify whether there are ways of knowing which machines have higher probabilities of breaking down. The ultimate goal of management is to improve the productivity of the company by ensuring minimum or no stoppage of work at any point in time.

The idea of reviewing the data is to arrive at an implementable framework and establish protocols that give visibility into machine health status, so that remedial steps can be taken proactively before an actual breakdown. The post-analysis summary and recommendations are given below:
Machines delivered by Provider3 breakdown much earlier, as early as at 60 months. Management needs to have discussions around, if they should continue with Provider3 and/or initiate discussions with them to get them to improve their quality of delivered products.
In the interim, mandate monthly review of all Provider 3 machines aged more than 60 months.
Mandate monthly review of all machines older than 72.5 months that are provided by providers 1,2 and 4.
Essentially all machines older than 72.5 months will need monthly preventative maintenance review.
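The review rules above can be encoded as a simple policy function; the 60 and 72.5 month thresholds and provider labels come from the summary, while the function signature itself is an illustration:

```python
# The preventive-review rules recommended above, as a policy function.
# Thresholds (60 and 72.5 months) are from the summary; the signature
# and label format are assumptions for illustration.
def needs_monthly_review(provider: str, age_months: float) -> bool:
    if provider == "Provider3" and age_months > 60:
        return True          # Provider3 machines break down as early as 60 months
    if age_months > 72.5:
        return True          # applies to providers 1, 2 and 4 as well
    return False

print(needs_monthly_review("Provider3", 65))   # Provider3 past 60 months
print(needs_monthly_review("Provider1", 70))   # under the 72.5-month threshold
print(needs_monthly_review("Provider2", 80))   # past 72.5 months
```

Expressing the model's leaf rules this way is what makes the framework "implementable": the policy can run against the asset register on a schedule.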
Substation earthing system design optimisation through the application of qua... - Power System Operation
Introduction
A new safety paradigm is evolving, driven by Work Health & Safety legislation and the explicit requirement to demonstrate due diligence in managing the risk imposed upon staff and the public. Power system asset owners are increasingly being required to demonstrate compliance with the ISO 31000 risk management standard, which requires reduction of residual risk to as low as reasonably practicable (ALARP). Thus, standards committees and asset owners alike are being required to redevelop existing
IRJET- Analysis of Risk Management in Construction Sector using Fault Tree... - IRJET Journal
This document analyzes risk management in the construction sector using Fault Tree Analysis and Failure Mode Effects Analysis, with the aim of understanding risk factors for building projects. The two methods are used together so that the shortcomings of each are offset. Fault Tree Analysis uses a top-down approach and Boolean logic to identify how systems can fail; Failure Mode Effects Analysis is a bottom-up technique for identifying and prioritizing potential failures before they occur. The results recommend more standardization in construction contracts to address issues such as roles, risks, and payments. Overall, the study suggests that effective risk management should be properly applied in the construction industry.
This document provides an overview of using neural network techniques in power systems. It discusses how neural networks have been applied to areas like fault diagnosis, security assessment, load forecasting, economic dispatch, and harmonic analysis. The number of published papers in these areas has grown significantly from 1990-1996 to 2000-2005, particularly in load forecasting, fault diagnosis/location, economic dispatch, security assessment, and transient stability. The document then reviews in more detail how neural networks have been applied to load forecasting, fault diagnosis/location, and economic dispatch problems in the power industry.
This document discusses using statistical analysis of outage data to plan asset maintenance in electric power distribution networks. It describes collecting outage data from distribution components like lines, cables, breakers and transformers. The data is processed and analyzed using statistical tests to identify critical components affecting system reliability. The results show maintenance decisions should be based on analyzed outage data to identify weak components for targeted maintenance. This improves reliability and reduces costs compared to preventative or reactive maintenance.
This document proposes an extended risk-based monitoring model for clinical trials that incorporates on-demand, query-driven source data verification. The model aims to make monitoring more efficient by focusing source data verification efforts on resolving queries rather than routine checking. Simulation results suggest the model could reduce monitoring costs by 3-35% depending on study size and therapeutic area. Key aspects of the proposed model include distinguishing between data point and site-level monitoring, incorporating data validation and statistical surveillance earlier in the process, and prioritizing non-source data verification activities at higher risk sites over increased source data checking.
The document discusses several issues related to implementing condition-based maintenance (CBM) and prognostics and health management (PHM) programs, including:
1) Performing a thorough risk assessment using techniques like FMECA is important to understand how a system can fail and inform sensor placement and diagnostic rule development.
2) Model-based failure analysis considering failure dependencies is better than spreadsheet-based FMECA for knowledge retention and risk assessment.
3) Clear definitions of failure concepts and taxonomies are needed to improve understanding of risk assessments.
4) Diagnostic rules and sensor selection should be based on dependencies between failure modes revealed through risk assessments.
Concepts on Models to Measure Organizational Readiness for DisasterDavid Merrick II
The document discusses concepts in measuring organizational readiness for disasters. It begins by introducing the Center for Disaster Risk Policy at Florida State University and its focus areas. It then reviews past efforts to measure readiness like the Simpson Preparedness Study and limitations of standards like NFPA 1600. A new concept for measuring readiness is proposed that uses Readiness Dimensions specific to hazards, locations, organizational units, and probability to calculate Hazard Readiness, Category Readiness, and Total Readiness scores. The model allows organizations to identify priority areas and formulate preparedness scenarios. Future research plans involve building and testing this model concept.
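The layered scoring the summary outlines (dimension scores rolling up into Hazard Readiness, then probability-weighted Category and Total Readiness) can be sketched numerically. The actual weighting scheme is not specified in the source; equal dimension weights and probability weighting here are assumptions for illustration only.

```python
# Hypothetical sketch of layered readiness scoring. Hazards, probabilities,
# and dimension scores are invented; the real model's weights are unknown.

hazards = {
    "hurricane": {"probability": 0.6, "dimensions": [0.8, 0.7, 0.9]},
    "flood":     {"probability": 0.3, "dimensions": [0.5, 0.6, 0.4]},
    "wildfire":  {"probability": 0.1, "dimensions": [0.9, 0.8, 0.7]},
}

def hazard_readiness(dims):
    # Assumed: equal-weight average of the hazard's dimension scores
    return sum(dims) / len(dims)

# Assumed: category readiness = probability-weighted average over hazards
num = sum(h["probability"] * hazard_readiness(h["dimensions"]) for h in hazards.values())
den = sum(h["probability"] for h in hazards.values())
category_readiness = num / den
print(round(category_readiness, 3))  # 0.71
```

The useful property of such a rollup is that low scores can be traced back to the specific hazard and dimension driving them, which is what lets an organization identify priority areas.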
This document discusses approaches to visualizing uncertainties for decision makers using an operational decision support system (DSS) called RODOS. In the early phase of an emergency when source term and weather data are uncertain, the DSS can use ensembles to show probability bands of potential dose exceedance. In later phases when countermeasures are considered, the DSS can use multi-criteria decision analysis and sensitivity analysis to help evaluate strategies while accounting for both quantitative and qualitative factors. Visualizing results as percentiles may help communicate uncertainties to decision makers.
CONDITION-BASED MAINTENANCE USING SENSOR ARRAYS AND TELEMATICSijmnct
The emergence of uniquely addressable embeddable devices has raised the bar on telematics capabilities. Though the technology itself is not new, its application has been quite limited until now. Sensor-based telematics technologies generate volumes of data that are orders of magnitude larger than what operators have dealt with previously. Real-time big-data computation capabilities have opened the floodgates for building predictive analytics on top of otherwise simple data-logging systems, enabling real-time control and monitoring so that preventive action can be taken when anomalies appear. Condition-based maintenance, usage-based insurance, smart metering, and demand-based load generation are some of the predictive analytics use cases for telematics. This paper presents an approach to condition-based maintenance using real-time sensor monitoring, telematics, and predictive data analytics.
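As a toy illustration of the kind of real-time check such a telematics pipeline might run, the sketch below flags readings that deviate sharply from a rolling baseline. The sensor values, window size, and z-score threshold are invented for the example; a production system would use far richer models.

```python
# Toy condition-monitoring sketch: flag sensor readings that deviate from
# a rolling baseline. Data and thresholds are illustrative assumptions.
import statistics

readings = [70.1, 70.4, 69.9, 70.2, 70.0, 70.3, 84.7, 70.1]  # e.g. bearing temp (C)

WINDOW, Z_LIMIT = 5, 3.0
anomalies = []
for i in range(WINDOW, len(readings)):
    window = readings[i - WINDOW:i]
    mu = statistics.mean(window)
    sigma = statistics.stdev(window)
    if sigma > 0 and abs(readings[i] - mu) / sigma > Z_LIMIT:
        anomalies.append(i)  # candidate trigger for preventive action

print(anomalies)  # [6] (the 84.7 spike)
```

The anomaly index would feed the "preventive action" path the abstract describes, e.g. opening a maintenance work order before the component fails outright.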
This document presents MedSafe, a framework for automatically classifying medical device recalls reported to the FDA as either computer-related or non-computer-related. MedSafe uses natural language processing and machine learning techniques to analyze the unstructured text descriptions of recalls. It was evaluated on over 16,000 recall records from 2007-2013 and achieved 97.3% accuracy in determining the total number of devices recalled and up to 95.8% accuracy in classifying recalls into computer-related vs. non-computer-related categories. The results show that computer-related recalls have increased over time and primarily involve devices in radiology, cardiology, and defibrillators.
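The classification step MedSafe performs can be caricatured with a tiny bag-of-words model. The sketch below is a generic Naive Bayes text classifier over invented training snippets; it is not the authors' NLP pipeline, only a minimal instance of the technique family the summary names.

```python
# Minimal bag-of-words Naive Bayes sketch for recall triage:
# label a recall description "computer" (computer-related) or "other".
# Training snippets are made up for illustration.
import math
from collections import Counter

train = [
    ("software error causes incorrect dose display", "computer"),
    ("firmware update fails and device reboots", "computer"),
    ("battery contact corrosion in defibrillator", "other"),
    ("packaging seal defect may compromise sterility", "other"),
]

word_counts = {"computer": Counter(), "other": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(w for c in word_counts.values() for w in c)

def classify(text):
    scores = {}
    for label in word_counts:
        # log prior + Laplace-smoothed log likelihoods
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("display froze after software fault"))  # computer
```

With 16,000 labeled records and proper feature engineering, this basic idea scales into the accuracy figures the paper reports; the sketch only shows the mechanism.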
This document summarizes business continuity risk assessment approaches from three major financial institutions:
1. Morgan Stanley prioritizes business risks into three categories based on the level of mitigation and response solutions in place. Risk analysis is performed at the regional and divisional levels.
2. Citigroup conducts a Threat and Vulnerability Assessment (TVA) to identify risks and their probabilities and impacts. A cross-functional team evaluates factors like security, facilities, technology, and human resources.
3. Credit Suisse First Boston (CSFB) prioritizes processes into three tiers based on their criticality. It considers both the business importance of locations and threats associated with them to weigh risks. CSF
This document discusses risk management in logistics and supply chains. It defines risk as the possibility of harm or loss, and risk management as reducing risk impacts. Effective risk management is important as companies increasingly rely on globalized, outsourced supply chains prone to disruptions. The risk management process involves identifying internal and external risks, analyzing them, developing treatment strategies like avoidance or mitigation, and continually monitoring risks and treatments. Supply chain risks can occur at suppliers, distribution, and internally. Ongoing risk management is needed to reduce costs and threats over time as risks evolve with regulatory environments.
Energy Management Systems: Recommendations for decision makersGimélec
The document discusses implementing an Energy Management System (EMS) to improve energy efficiency in buildings. An EMS allows continuous tracking of energy consumption and costs. It collects and analyzes consumption data to monitor performance, identify savings opportunities, and support decision making. The EMS helps reduce energy expenditures by 5-15% by raising stakeholder awareness and uniting teams around continual improvement. Implementing an EMS is an important part of an energy management program and improves management at all stages from diagnosis to operations.
This document discusses risk-based decision making and provides examples from automobile insurance. It explains that risk is calculated as the probability of failure multiplied by the consequence. Important factors in risk-based decisions are understanding your risk tolerance, estimating probability of failure, and evaluating consequences. The document provides guidance on various risk-based inspection standards and resources from organizations like ASME and API. It emphasizes using both qualitative and quantitative analysis, with common sense prevailing over strict mathematics.
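The core calculation the document states, risk as probability of failure times consequence, is simple enough to show directly. The items, probabilities, and consequence values below are hypothetical, for illustration only.

```python
# Risk = probability of failure x consequence.
# Items and numbers are hypothetical illustrations.
items = [
    {"name": "pump seal", "p_fail": 0.125, "consequence": 50_000},
    {"name": "valve",     "p_fail": 0.25,  "consequence": 40_000},
    {"name": "gauge",     "p_fail": 0.5,   "consequence": 2_000},
]
for it in items:
    it["risk"] = it["p_fail"] * it["consequence"]

# Inspect the highest-risk item first
worst = max(items, key=lambda it: it["risk"])
print(worst["name"], worst["risk"])  # valve 10000.0
```

Note how the ranking differs from ranking by probability alone: the gauge fails most often but carries the least risk, which is exactly the point of weighting by consequence.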
This document provides guidance for developing effective IT contingency plans. It outlines a seven-step contingency planning process that includes developing policy, conducting business impact analysis, identifying preventive controls and recovery strategies, developing and testing a contingency plan, and maintaining the plan. It also discusses considerations for contingency planning for different types of IT systems like desktops, servers, web sites, networks and mainframes. The goal is to help organizations establish thorough plans and procedures to enable quick and effective IT system recovery following a disruption.
FAULT DIAGNOSIS USING CLUSTERING. WHAT STATISTICAL TEST TO USE FOR HYPOTHESIS...JaresJournal
Predictive maintenance and condition-based monitoring systems have gained significant prominence in recent years as a way to minimize the impact of machine downtime on production and its costs. Predictive maintenance uses concepts from data mining, statistics, and machine learning to build models capable of early fault detection, fault diagnosis, and prediction of time to failure. Fault diagnosis, one of the core areas, identifies the actual failure mode of the machine. In fluctuating environments such as manufacturing, clustering techniques have proved more reliable than supervised learning methods. One of the fundamental challenges of clustering is formulating a test hypothesis and choosing an appropriate statistical test for it: most statistical analyses rest on assumptions about the data that real-world data often cannot satisfy. This paper addresses that challenge by developing a test hypothesis for a clustering-based fault diagnosis application and performing a PERMANOVA test for hypothesis testing.
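PERMANOVA is attractive here precisely because it is distribution-free: it builds a pseudo-F statistic from a pairwise distance matrix and assesses it by permuting group labels. The sketch below is a simplified textbook implementation on toy 1-D data, not the authors' code, and the cluster labels and feature values are invented.

```python
# Simplified PERMANOVA-style permutation test on a pairwise distance matrix.
# Toy data and labels are invented; real use would employ multivariate
# features and a dedicated library.
import itertools
import random

def pseudo_f(dist, labels):
    n = len(labels)
    groups = set(labels)
    k = len(groups)
    # Total sum of squares from all pairwise distances
    ss_total = sum(dist[i][j] ** 2 for i in range(n) for j in range(i + 1, n)) / n
    # Within-group sum of squares
    ss_within = 0.0
    for g in groups:
        idx = [i for i in range(n) if labels[i] == g]
        m = len(idx)
        ss_within += sum(dist[i][j] ** 2 for i, j in itertools.combinations(idx, 2)) / m
    ss_between = ss_total - ss_within
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two well-separated clusters of a single vibration-like feature
points = [1.0, 1.2, 0.9, 1.1, 5.0, 5.1, 4.8, 5.2]
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]
dist = [[abs(x - y) for y in points] for x in points]

random.seed(0)
observed = pseudo_f(dist, labels)
perms = 999
count = sum(
    pseudo_f(dist, random.sample(labels, len(labels))) >= observed
    for _ in range(perms)
)
p_value = (count + 1) / (perms + 1)
print(round(observed, 1), p_value)
```

A large observed pseudo-F with a small permutation p-value supports the hypothesis that the clusters correspond to genuinely different machine conditions.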
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Disaster Recovery planning within HIPAA frameworkDavid Sweigert
This document provides guidance on developing contingency plans to address critical business processes that support HIPAA transactions. It defines key terms like contingency planning, disaster recovery planning, and continuity of operations plans. It discusses performing a risk analysis to identify critical processes and potential failures. Alternatives and workarounds are identified for different scenarios. The document provides guidance on developing a continuity of operations plan, including identifying triggers, response teams, procedures, training, and updating the plan over time. It emphasizes the importance of testing contingency plans periodically.
Reducing Product Development Risk with Reliability Engineering MethodsWilde Analysis Ltd.
Overview of how reliability engineering methodology and software tools can help companies manage risk during product development and improve performance.
Presented at the Interplas'2011 exhibition and conference at the NEC on 27th October 2011 by Mike McCarthy.
This presentation looks at how ‘Reliability Engineering’ tools and methods are used to reduce risk in a typical product development lifecycle involving both plastic and metallic components. These tools range in complexity from simple approaches to managing product reliability data to the application of sophisticated simulation methods on large systems with complex duty cycles. Three examples are:
- Failure Mode Effects (and Criticality) Analysis (FMECA) to identify, manage and reuse information on what could go wrong with a design or manufacturing process and how to avoid it
- Design of Experiments for optimising performance through a structured and efficient study of parameters that affect the product or manufacturing process (e.g. injection moulding)
- Accelerated Life Testing to identify potential long term failure modes of products released to market within a shortened development time.
We will explore how gathering enough of the right kind of data and applying it in an intelligent way can reduce risk, not only in plastic product design and manufacture, but also in managing the associated supply chain and in the ‘Whole Life Management’ of products (including warranties). Furthermore, we will show how ‘sparse’ data gathered from previous or similar products, such as field/warranty reports, engineering testing data and supplier data sheets, as well as FEA, CFD and injection moulding/extrusion simulation, can inform and positively influence new product design processes from concept stage onwards.
Power System Operational Resilience – What it means and where we standPower System Operation
The electric power industry is becoming increasingly aware of the potential adverse impact of extreme events and of physical and cybersecurity attacks on power system operations. High Impact, Low Frequency (HILF) events and the increased frequency of system disturbances caused by natural phenomena (hurricanes, earthquakes, etc.) have shifted the energy industry's focus from purely developing preventive measures towards providing and enhancing the resilience of the power system following such major disturbances.

In power system operations, resilience generally means the ability to respond quickly to a disruption and to recover from it. To enhance system resilience, strategies ranging from the provision of sophisticated operation and control capabilities to preparation for effective and prudent operations can be considered.
Risk Analysis in Aviation: the Forensic Point of ViewAntonio Musto
The document discusses combining Failure Mode, Effects and Criticality Analysis (FMECA) and Event Risk Classification (ERC) to continuously update risk analysis for aircraft insurance. FMECA is initially used to estimate economic risks based on design data, while lower-cost ERC of operational events identifies highest risks to focus FMECA updates. This allows accurate risk assessment with reduced data costs for insurers and airlines through cooperation.
Recommendations for Preventive Maintenance - A Machine Learning ProjectPranov Mishra
A business problem of reducing time wasted in the manufacturing unit due to machine breakdowns was solved by building a decision tree model using the CART algorithm. High-level details are below:
A thorough analysis was done to identify whether there are ways of knowing which machines have higher probabilities of breaking down. The ultimate goal of management is to improve the company's productivity by ensuring minimal or no stoppage of work at any point in time.
The idea of reviewing the data is to arrive at an implementable framework and establish protocols that give visibility into machine health status, so remedial steps can be taken proactively before an actual breakdown. The post-analysis summary and recommendations are:
Machines delivered by Provider3 break down much earlier, as early as at 60 months. Management should discuss whether to continue with Provider3 and/or press Provider3 to improve the quality of its delivered products.
In the interim, mandate a monthly review of all Provider3 machines aged more than 60 months.
Mandate a monthly review of all machines older than 72.5 months supplied by Providers 1, 2 and 4.
Essentially, all machines older than 72.5 months will need a monthly preventive maintenance review.
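Age thresholds like 72.5 months are exactly what CART produces: it scans candidate split points and picks the one minimizing impurity. The toy single-split sketch below illustrates that mechanism on invented (age, breakdown) records; the project's actual model and data are of course richer.

```python
# Toy illustration of the CART splitting idea: find the age threshold that
# best separates breakdown from no-breakdown records (single split only).
# Records are hypothetical.

# (age_months, broke_down)
data = [(40, 0), (55, 0), (61, 1), (70, 0), (74, 1), (80, 1), (90, 1)]

def gini(rows):
    # Gini impurity of a binary-labeled subset
    if not rows:
        return 0.0
    p = sum(label for _, label in rows) / len(rows)
    return 2 * p * (1 - p)

best = None
for i in range(len(data) - 1):
    threshold = (data[i][0] + data[i + 1][0]) / 2  # midpoints between sorted ages
    left = [r for r in data if r[0] <= threshold]
    right = [r for r in data if r[0] > threshold]
    impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(data)
    if best is None or impurity < best[1]:
        best = (threshold, impurity)

print(best[0])  # 72.0, the age cut-off this toy tree would split on first
```

A full CART implementation recurses on each side of the split and adds pruning; the threshold search shown here is the core step that yields interpretable rules like "review machines older than 72.5 months".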
Substation earthing system design optimisation through the application of qua...Power System Operation
Introduction
A new safety paradigm is evolving, driven by Work Health & Safety legislation and the explicit requirement to demonstrate due diligence in managing risk imposed upon staff and the public. Power system asset owners are increasingly being required to demonstrate compliance with the ISO 31000 risk management standard, which requires reduction of residual risk to as low as reasonably practicable (ALARP). Thus, standards committees and asset owners alike are being required to redevelop existing
Predictive maintenance framework for assessing health state of centrifugal pumpsIAESIJAI
Combined with advances in sensing technologies and big data analytics, critical information can be extracted from continuous production processes for predicting the health state of equipment and safeguarding against upcoming failures. This research presents a methodology for applying predictive maintenance (PdM) solutions and showcases a PdM application for health state prediction and condition monitoring, increasing the safety and productivity of centrifugal pumps for a sustainable and resilient PdM ecosystem. Measurements depicting the healthy and maintenance-prone stages of two centrifugal pumps were collected on the university campus. The dataset consists of 5,118 records and includes both running and standstill values. Additionally, Spearman statistical analysis was conducted to measure the correlation of collected measurements with the predicted output of machine conditions and select the most appropriate features for model optimization. Several machine learning (ML) algorithms, namely random forest (RF), Naïve Bayes, support vector machines (SVM), and extreme gradient boosting (XGBoost) were analyzed and evaluated during the data mining process. The results indicated the effectiveness and efficiency of XGBoost for the health state prediction of centrifugal pumps. The contribution of this research is to propose an effective framework collecting multistage health data for PdM applications and showcase its effectiveness in a real-world use case.
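The Spearman screening step the study describes can be sketched without any statistics library: rank-correlate each candidate feature with the condition label and keep the strongest. The feature names and values below are invented, and this toy version skips tie handling, so it is only a sketch of the idea.

```python
# Sketch of Spearman-based feature screening for condition monitoring.
# Synthetic data; no tie handling (a real analysis would average tied ranks).

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(xs, ys):
    # Spearman rho via the rank-difference formula (valid without ties)
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

condition = [0, 1, 2, 3, 4, 5]  # 0 = healthy ... 5 = maintenance-prone
features = {
    "vibration":    [0.1, 0.2, 0.5, 0.6, 0.9, 1.1],  # rises with degradation
    "ambient_temp": [21, 24, 20, 23, 22, 25],        # mostly noise
}
scores = {name: abs(spearman(vals, condition)) for name, vals in features.items()}
print(max(scores, key=scores.get))  # vibration
```

Features with high absolute rank correlation to the machine condition are retained for model training; the noisy ambient-temperature channel would be dropped.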
Methods for Risk Management of Mining Excavator through FMEA and FMECAtheijes
Management of maintenance systems in the mining industry is an important condition for their operation. Recognizing the need for risk analysis and management of an individual maintenance system can generate gains in overall efficiency and effectiveness. Of special importance for realizing the objectives of the mining industry are redesign, system harmonization between the various technical structures, standardization, technical diagnostics, and analysis of different levels of criticality with variant selection and application of optimal solutions. The potential for destruction of complex maintenance systems is a reality in the mining industry and finds expression in various applications. For different aspects of the analysis, it is possible to decrease the risk index from the high-risk range down to the thresholds of moderate and acceptable risk. FMEA (Failure Mode and Effect Analysis) and FMECA (Failure Modes, Effects and Criticality Analysis) methods are used to manage risk in the initial phase of predicting all possible risks and risk factors and of setting RPN-based priorities. Some risks can be grouped according to the type of errors that occur when they materialize. Effective risk analysis, and the implementation of measures to reduce risk, also require a competent team. No matter what risks are involved, the FMECA method can reliably estimate the possibility of their realization with a satisfactory degree of flexibility and compatibility. In this paper an attempt has been made to develop an effective maintenance methodology for excavators such that maintenance cost is minimized and technical constraints (such as the engine, hydraulic and transmission system, brake system, electrical and safety system, suspension and track) are efficiently monitored and maintained.
These technical constraints depend upon many factors such as a) geotechnical parameters, b) geological parameters, c) mine parameters, d) production rate, e) equipment specification and f) diggability assessment. Based on these factors, maintenance plans are prepared. This paper discusses a risk management strategy for an Optimal Maintenance Program (OMP) for excavators. The OMP includes the functional analysis methods FMEA and FMECA. To develop a successful operating system, it is first necessary to create a risk management program. A prudent management program is one that ensures safety and is environmentally and economically responsible.
Reliability, availability, maintainability (RAM) study, on reciprocating comp...John Kingsley
What is needed to perform a RAM Study and more details #RAM #Training #iFluids #RAMstudy
To know more, on How iFluids can help you operate & maintain Safe and Reliable plant Contact us Today --> info@ifluids.com
For any training enquiries, contact us today --> training@ifluids.com
For many manufacturers, evaluating and managing the risk of obsolescence is a missing piece of their overall management strategy, an oversight that can have significant implications in terms of business continuity. With a clear obsolescence policy and risk-assessment framework, manufacturing companies can help ensure that their systems and assets remain up and running, supported by a continuous risk-mitigation cycle.
This document discusses risk assessment and management for quarries. It outlines the objectives of risk assessment, defines risk management principles, and describes various risk assessment methodologies including qualitative, quantitative, failure modes and effects analysis, and hazard and operability studies. The stages of risk management are identified as hazard identification, risk evaluation, and risk control. UK health and safety legislation requires employers to conduct suitable and efficient risk assessments to identify necessary risk control measures.
This document provides definitions and information related to risk analysis. It defines key terms like hazard, risk, risk analysis, risk assessment, and reliability. It discusses various quantitative and qualitative methods for risk analysis including fault tree analysis, failure mode and effects analysis, and hazard and operability studies. Failure rate data for some process components is also presented. The document provides an overview of important concepts in quantitative risk analysis including reliability, mean time between failures, and interaction between equipment for series and parallel systems. Overall it serves as a reference on the topic of risk analysis, defining key terms and outlining various approaches.
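The series/parallel reliability relations the overview covers reduce to two one-line formulas: a series system multiplies component reliabilities, while a parallel (redundant) system multiplies component unreliabilities. The component values below are illustrative, not taken from the document's failure-rate tables.

```python
# Worked example of series vs. parallel system reliability, plus the
# constant-failure-rate relation R(t) = exp(-t / MTBF).
# Component reliabilities are illustrative.
import math

def series_reliability(rs):
    # System works only if every component works: R = product(R_i)
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel_reliability(rs):
    # System fails only if every component fails: R = 1 - product(1 - R_i)
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

rs = [0.875, 0.75]  # two components
print(series_reliability(rs))    # 0.65625, weaker than either component
print(parallel_reliability(rs))  # 0.96875, redundancy improves the system

# Constant failure rate: reliability at t = 100 h with MTBF = 1000 h
print(round(math.exp(-100 / 1000), 3))  # 0.905
```

This is why redundancy is the standard remedy for a weak link: the parallel arrangement is more reliable than its best component, while the series arrangement is weaker than its worst.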
This document discusses performance standards for safety critical equipment on offshore oil and gas drilling units. It defines performance standards as documents that link safety cases to preventative maintenance tasks by establishing acceptance criteria and critical operating parameters. Performance standards help reduce risks by monitoring asset integrity and ensuring safety systems function properly. Regulatory agencies now require performance standards to improve safety. The document provides examples of how performance standards specify maintenance controls and allow equipment performance to be measured and tracked over time.
The document discusses practical applications of system safety and risk management. It covers topics like developing safe operational plans using mission analysis, principles of operational risk management, risk assessment models like SPE and GAR, and the risk management process. Assessment tools like the consequence/probability matrix and hazard risk matrix are introduced. Effective communication, supervision, and planning are emphasized as important elements in managing risk.
Report Information from ProQuest (July 19 2019 15:15), uploaded by audeleypearl
Document 1 of 1
On-Line Maintenance
Huffman, Ken. Nuclear Plant Journal; Glen Ellyn Vol. 28, Iss. 2 (Mar/Apr 2010): 20, 22-23.
ABSTRACT
On-line maintenance, and risk-informed initiatives in general, have played a large part in the confidence that underpins the "nuclear renaissance" in the United States. As of March 2010, U.S. utilities and other developers had submitted applications for 28 new nuclear units to the Nuclear Regulatory Commission. The plant designs these applications are based on, informed by U.S. operating experience, are expected to benefit from risk-informed applications such as on-line maintenance.
FULL TEXT
Introduction
On-line maintenance refers to maintenance performed while the main electric generator is connected to the grid. Nuclear power plants can realize many benefits from performing maintenance activities during power operation. The U.S. Nuclear Regulatory Commission (NRC), for example, attributes the following benefits to on-line maintenance in Regulatory Guide 1.182:
* Increased system and plant reliability
* Reduction of plant equipment and system material condition deficiencies that could adversely impact plant operations
* Reduction of work scope during plant refueling outages.
Nuclear plants are also able to achieve longer fuel cycles and shorter refueling outages through on-line maintenance. In the United States in the 1980s and early 1990s, most nuclear power plants operated with a refueling cycle of 12 months and an average refueling duration of three months. Today, U.S. nuclear units operate on an 18- or 24-month refueling cycle, with average outages of just over one month. The relationship between on-line maintenance and outage length reduction, operating interval extension and plant economics is well reported in the literature.
On-line maintenance can also contribute to improved plant safety. By conducting maintenance on-line, plants can resolve equipment and system issues before they can adversely impact operations. Operational and reliability improvements have resulted in a threefold reduction in forced outages and a fivefold reduction in the automatic SCRAM (trip) rate at U.S. nuclear power plants. Both measures are indicative of improved plant safety.
Figure 1 provides a timeline of key events led by the NRC, the Electric Power Research Institute (EPRI), and the Nuclear Energy Institute (NEI) in the evolution of on-line maintenance in the U.S. nuclear power industry. Other industry organizations - the Institute of Nuclear Power Operations, the reactor owners groups, and individual companies and plants - also contributed to this evolution. Recognition of all such activities, however, is beyond the scope of this article. The graphic also illustrates the integration of regulations, technical tools, and utility actions that drove implement ...
This document summarizes an academic journal article that proposes a new approach called Action-Based Defect Prediction (ABDP) to predict software defects. The approach applies data mining techniques like classification and feature selection to historical project data to predict whether future actions will likely cause defects. It aims to identify problematic actions early to prevent defects. The document outlines the ABDP approach, discusses challenges like imbalanced data, and compares results of under-sampling versus over-sampling techniques. It also introduces how the approach could be integrated with Failure Mode and Effects Analysis (FMEA) to further improve early defect prediction.
Risk Analysis at Ship to Shore (STS) Cranes in Container Terminal Operational...IRJET Journal
This document discusses using Failure Mode and Effects Analysis (FMEA) to analyze risks at ship-to-shore cranes in the container terminal operations of a green port. It first provides background on container movement and risk analysis in ports. It then describes conducting an FMEA to identify risks from tools like ship-to-shore cranes. The analysis identified the highest priority risks as the wooden bearings on the ship's hold hatch when handled by the crane and the protective pin of the spreader hitting the ship's hold. The document recommends addressing these risks to improve safety.
This document presents a 7-stage framework for analyzing and improving near-miss management programs in the chemical process industry. Near-misses are unplanned incidents that do not result in injury or damage but have the potential to. The framework involves identifying near-misses, reporting them, analyzing causes, determining and disseminating solutions, and ensuring resolutions. Effective near-miss programs encourage employee involvement and can improve safety by addressing accident precursors before harm occurs.
This whitepaper discusses using advanced data management and predictive analytics to improve transmission and distribution asset management. It describes how utilities can leverage non-intrusive field testing and online monitoring methods along with asset criticality, health, and risk analysis. This allows for predictive, top-down and bottom-up asset management strategies. The whitepaper argues that embracing big data analytics and predictive modeling can transform asset management from being condition-based to risk-based. This enables more informed, real-time decision making through scalable situational awareness.
One of the most important issues that organizations have to deal with is the timely identification and detection of risk factors, with the aim of preventing incidents. The drive of managers and engineers to minimize risk factors in a service, process, or design system obliges them to analyze the reliability of such systems in order to minimize risks and identify probable errors. To this end, a more accurate Failure Mode and Effects Analysis (FMEA) is adopted based on fuzzy logic and fuzzy numbers. Fuzzy TOPSIS is also used to identify, rank, and prioritize error and risk factors. This paper uses FMEA as a risk identification tool; the Fuzzy Risk Priority Number (FRPN) is then calculated, and the triangular fuzzy numbers are prioritized through Fuzzy TOPSIS. A case study is presented to clarify the concepts involved.
Optimal Maintainability of Hydraulic Excavator Through Fmea/FmecaIJRESJOURNAL
ABSTRACT: The concept of advanced maintenance management techniques in the field of heavy earthmoving mining machinery has developed recently in India and has kept pace with demand that has risen continuously over the years. This paper considers hydraulic excavators: large machines designed for excavation and demolition that come in various sizes and serve various functions. The development of the mining industry has been escalated largely by the introduction of different types of excavators, which are used to satisfy various mining, industrial and construction needs. The mining excavators in modern use are mainly of two types, backhoe and dragline, with others being suction excavators, long-reach/long-arm excavators, crawlers, compact excavators, power shovels, etc. Data collection and analysis were carried out in the vicinity of the coal capital of India, where the hydraulic excavator is mainly used, which is why it receives the paper's prime focus. The increased penetration of these high-yield machines into the above-mentioned sectors has made them truly important: halts and stoppages are bottlenecks that disturb productivity. Given the large associated productivity and profit losses, maintenance engineers felt the need for advanced maintenance of these machines. The paper deals with different faults of the excavator and, based on the data acquired, carries out an FMEA analysis by estimating the Severity, Occurrence and Detection of the considered parts, from which a Risk Priority Number (RPN) ranging from 1 to 1000 is calculated. This quantitative approach helps in deciding maintenance strategies for the various parts and subparts; maintenance plans are initiated, designed and implemented on the basis of these factors.
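The RPN arithmetic behind such an FMEA is straightforward: Severity, Occurrence, and Detection are each rated on a 1-10 scale and multiplied, giving the 1-1000 range the abstract cites. The failure modes and ratings below are hypothetical, for illustration only.

```python
# FMEA Risk Priority Number: RPN = Severity x Occurrence x Detection,
# each rated 1-10. Failure modes and ratings are hypothetical.
failure_modes = [
    {"part": "hydraulic pump", "S": 8, "O": 6, "D": 5},
    {"part": "track chain",    "S": 6, "O": 7, "D": 3},
    {"part": "boom cylinder",  "S": 9, "O": 3, "D": 4},
]
for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Highest RPN gets maintenance priority
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
print([(fm["part"], fm["RPN"]) for fm in ranked])  # hydraulic pump first
```

In practice teams also watch for modes with very high Severity but modest RPN, since a single multiplied score can hide safety-critical items.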
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Similar to Risks associated with maintenance decisions concerning the afam electricity generation station (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAU
Journal of Energy Technologies and Policy www.iiste.org
ISSN 2224-3232 (Paper) ISSN 2225-0573 (Online)
Vol.3, No.7, 2013
Risks Associated with Maintenance Decisions Concerning the
Afam Electricity-Generation Station
Tobinson. A. Briggs (corresponding author)
Mechanical Engineering Department,
Faculty of Engineering, University of Port Harcourt, East-West Road, Choba, Port Harcourt, Nigeria
E-mail: briggstee@gmail.com
M.C. Eti
Mechanical Engineering Department, Rivers State University of Science and Technology,
Nkpolu, Port Harcourt, Nigeria.
Abstract
For the nationally-deregulated Nigerian electric-power industry, an increasingly competitive environment has raised an important question: what do maintenance activities in the plants cost? To survive the competition, the power stations have to reduce maintenance costs, i.e. manage maintenance effectively. Risk analysis is one tool that decision makers in the power stations can use to help them set priorities as they plan maintenance actions. If the results of such analysis are to form a reliable basis for decision making, it is important to consider whether the quality of the results varies significantly with the risk-analysis approach chosen. This paper presents a factual dataset containing few failures, used to illustrate how Weibull analysis can forecast the risk of failure from a small dataset. Using Gas Turbine number 17 of the Afam thermal power station, two perspectives are described: the statistical and the engineering views. The study establishes the importance of analyzing and interpreting risk-analysis results before making maintenance decisions.
Keywords: risk analysis, statistical/engineering methods, optimization, Afam thermal power station
1. Introduction
The prioritization of maintenance measures in electricity-generating stations has become increasingly important because of the privatization of, and competition within, the power industry. Resources can be used effectively when wise risk-based maintenance decisions guide where and when to undertake maintenance. Risk analysis has had a major impact on identifying the magnitude and location of faults. In order to sustain adequate profit margins, the management of Afam power station has to control costs. In doing so, they have to minimize risks to individuals, the environment and assets. Risk analysis is used to identify risks, in terms of where faults are likely to be located and how serious they are; this provides guidance as to where and when maintenance effort should be directed. Moubray [20] and Nowlan and Heap [21] point out that maintenance methods such as reliability-centered maintenance (RCM), which uses function analysis in combination with risk analysis to prioritize maintenance actions, are very helpful. There are many different opinions regarding (i) what risk analysis implies, (ii) how it should be performed and (iii) what terminology should be adopted [34, 11].
Plant/equipment failure and risk analyses for power plants are often based on very few actual failure data. The risk procedure is to forecast likely future failures; corrective action can then be taken to mitigate these forecasted failures. Barringer and Weber [7] maintain that operating from a fact-based system requires making failure forecasts even with a small number of actual data points. Each dataset usually includes information in the form of censored data. Engineering judgment and the data are combined in a Weibayes estimate, i.e. a Weibull analysis in which the failure mode is characterized by an assumed Weibull slope (β), producing a Weibull distribution that relates age to probability of failure. This makes the datasets understandable and practical, and for some datasets confidence intervals can be established. Three perspectives exist for evaluating the problem, namely the statistical view, the engineering view and the management view [6]. Most businesses must take risks in order to survive; this requires quantification of the risks and of the financial exposures incurred. The chance-of-failure concept is based on using the current data in the form of the mean time between failures (MTBF) and its inverse, which gives a failure rate.
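The Weibayes procedure described above can be sketched as follows. With an assumed slope β, the characteristic life η is estimated from the unit ages and the number of observed failures, and the resulting distribution relates age to probability of failure. The ages, slope and failure count below are hypothetical illustrations, not taken from the Afam Gas Turbine 17 records:

```python
import math

def weibayes_eta(ages, beta, failures):
    """Weibayes estimate of the Weibull characteristic life (eta).

    ages     -- operating ages of all units, failed and censored (hours)
    beta     -- assumed Weibull slope from engineering judgment / prior data
    failures -- number of observed failures (use 1 if none were observed,
                giving a conservative lower-bound estimate)
    """
    return (sum(t ** beta for t in ages) / max(failures, 1)) ** (1.0 / beta)

def prob_failure_by(t, eta, beta):
    """Cumulative probability of failure by age t: F(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

# Hypothetical small dataset: five units, two failures, three still running
ages = [1200.0, 1500.0, 800.0, 950.0, 1100.0]   # hours
beta = 2.0                                       # assumed wear-out slope
r = 2                                            # observed failures

eta = weibayes_eta(ages, beta, r)
mtbf = eta * math.gamma(1.0 + 1.0 / beta)        # mean time between failures
rate = 1.0 / mtbf                                # equivalent failure rate

print(f"eta = {eta:.0f} h, MTBF = {mtbf:.0f} h, "
      f"P(failure by 1000 h) = {prob_failure_by(1000.0, eta, beta):.2%}")
```

The MTBF follows from η via the gamma function, and its inverse is the failure rate used in the chance-of-failure calculation mentioned above.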
Resources can be used more effectively when risk-analysis-based maintenance decisions guide where and when to perform maintenance functions. Risk analysis requires careful identification and a systematic approach with clear aims and goals, and it requires a sufficiently competent analyst to evaluate and understand both the approach and the results of the analysis performed. If risk analysis is well performed and executed for the Afam power station, it will help to control, interpret and evaluate risks and thereby obtain reliability results, so that maintenance effort is expended according to priority. Making maintenance decisions based on risk analysis and evaluation is an effective way of improving preventive maintenance. Integrating a risk-analysis strategy into the maintenance protocols of the Nigerian electric power stations will enhance the reliability of plants/systems, supporting proactive maintenance and a reliable power supply.
In risk analysis, the total asset is scrutinized by identifying the most likely risk sources in each sub-process. The percentage that each risk source contributes to the risk of each subsystem is computed. For example, the percentage of total estimated asset risk in a subsystem, such as a gas scrubber, turbine, compressor or filter, can be analyzed. In order to support proper risk-based maintenance decisions, Buckland and Hannu [12] present a comparative study based on three independent risk analyses performed on a specific hydro-power plant. The study establishes the importance of a well-planned specification and the need to analyze and integrate risk-analysis results before making maintenance decisions. Buckland and Hannu [12] also presented a report on the analysis and evaluation of the risk situation.
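The per-subsystem risk-share computation described above reduces to normalizing each subsystem's estimated risk against the total asset risk. The risk figures below are hypothetical placeholders, not estimates from the Afam plant:

```python
# Hypothetical estimated risk per subsystem (e.g. expected annual loss)
risk_by_subsystem = {
    "gas scrubber": 120_000,
    "turbine": 450_000,
    "compressor": 280_000,
    "filter": 50_000,
}

# Percentage of total estimated asset risk carried by each subsystem
total = sum(risk_by_subsystem.values())
shares = {name: 100.0 * cost / total for name, cost in risk_by_subsystem.items()}

# Rank subsystems so that maintenance effort follows priority
for name, pct in sorted(shares.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:12s} {pct:5.1f}% of total asset risk")
```

Ranking the shares in this way gives the prioritized list that guides where maintenance effort should be directed first.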
Mathew and Kennedy [18] presented a model based on failure due to random loads, and then followed it up with a strategy for preventing or minimizing such failures at minimum cost. Random shocks are a leading cause of equipment failure. The shocks arise because of large variations in the values of parameters such as operating loads, voltages, pressures, thermal loads, contamination, and tolerances on clearances and alignments. However, few maintenance strategies consider random shocks; failure due to the cumulative effect of random shocks has also been referred to as non-self-announcing failure. Wortman et al. [25] developed a model for passive elements, such as alarms and protection systems, which deteriorate each time they are triggered. Aven and Gaarder [4] developed an optimal replacement procedure under shock-load conditions. Chelbi et al. [13] presented an inspection strategy for random failures to guarantee robust schedules. An important influence on product reliability is temperature: electronic control devices are highly susceptible to increased failure rates at elevated temperatures. Barringer [9] concludes that there are four environmental-stress factors which substantially influence the occurrence of degradation faults, namely thermal cycling, vibration, corrosion and the frequency of mechanical stress cycles. He further notes that these stresses are accompanied by the interaction and influence of lesser stresses.
The transformation of maintenance strategies has brought a new pace to an already fast-growing field. There is a
great need for an approach that combines RCM and total productive maintenance (TPM) in order to improve
system reliability and availability. Hazard identification can be performed by means of a checklist, failure mode
and effects analysis (FMEA), failure mode, effects and criticality analysis (FMECA), or fault-tree analysis
(FTA). It is useful to identify individual and asset risk when the most serious risk sources are being considered.
Total individual or total asset risk is of interest when comparing risk costs between different plants or
subsystems [12]. Al-Najjar [2] maintains that, in order to identify the maintenance-significant items (MSIs) of a
system, a comprehensive survey of all components of the system is carried out, e.g. by an FMECA. For example,
one way of selecting a significant item depends on the value of its risk priority number (RPN):
RPN = FI × FC × FDF (1)
where FI is the failure intensity, FC is the failure criticality and FDF is the probability that a failure is not detected.
If the RPN of an item exceeds a predetermined value, then such an item is considered to be significant with
respect to maintenance. The most appropriate maintenance strategy is then a failure-based process. Operating
from a fact-based system often requires making a failure forecast based on a small set of actual data (Barringer
[8]). The data set usually includes information in the form of carefully examined data. Good use of engineering
judgment is needed, and the data are used via Weibayes estimates. (Weibayes is a Weibull analysis that uses an
estimate of the failure-mode slope β to produce a Weibull distribution relating age to probability of failure,
making small data sets more usable.) For some data sets, confidence intervals can be established. Failures can
occur through normal ageing or through specific events (i.e. not necessarily related to time/age alone).
Equipment failure can also occur through a combination of events, such as inferior workmanship, ageing, and
accumulation of dirt or foreign elements in the compressor [8, 20].
2. Theoretical analysis
The pertinent theories include all the equations and formulae that are used to quantify all measurable parameters
in solving risk related problems of this study. Risk is defined as a combination of the frequency or probability of
occurrence and the consequence of a specified hazardous event [10].
The amount of risk is defined as the probability of failure F(t) times the consequence of failure [17], i.e.
Risk = Ff × Cf (2)
$E = Ff × $Cf (3)
where: $E = cost of risk exposure in dollars;
Ff = probability of failure;
$Cf = cost of the consequence of the failure occurring, in dollars.
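As a sketch of equation (3), the risk exposure is simply the product of the failure probability and the consequence cost; the numbers used below are illustrative, not taken from the study's dataset.

```python
# Cost of risk exposure, $E = Ff * $Cf (Eq. 3).
# The probability and consequence values are illustrative only.
def risk_exposure(ff: float, cf_dollars: float) -> float:
    """Expected cost of risk: probability of failure times its consequence cost."""
    return ff * cf_dollars

# A 30% probability of failure with a $500,000 consequence:
print(risk_exposure(0.30, 500_000))  # -> 150000.0
```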
2.1. Mean time between failures (MTBF)
Mean time between failures (MTBF), which is a yardstick for both reliability and statistical analyses, measures
the time between any two consecutive failures.
MTBF = (Total operating time) / (Number of failures during that period) (4)
The failure rate (λ) is the reciprocal of MTBF, i.e.
λ = 1/MTBF (5)
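Equations (4) and (5) can be sketched directly from the turbine's times between failures listed in Table 3 of this study:

```python
# MTBF = total operating time / number of failures (Eq. 4);
# failure rate lambda = 1/MTBF (Eq. 5).
tbf_days = [353, 454, 647, 685, 788, 817]  # turbine TBF values, Table 3

mtbf = sum(tbf_days) / len(tbf_days)  # 3744 days over 6 failures
failure_rate = 1 / mtbf

print(mtbf)                    # -> 624.0 (days/failure, as in Table 4)
print(round(failure_rate, 6))  # -> 0.001603 (failures/day)
```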
2.2. Expected life of equipment
This is the working-time period during which the item is required to function (barring all unforeseen
circumstances) to deliver its designed function effectively. Statistically, equipment life is most widely evaluated
at the 90% Poisson confidence level. Barringer [5] gives Poisson confidence factors for exponential failures at
the 95% and 5% confidence levels. At the 95% confidence level, the expected number of failures is Ef(95%) =
4.7439. Hence the expected life at 95% confidence is:
El(95%) = MTBF/Ef(95%) = MTBF/4.7439 (6)
At the 5% confidence level, the expected number of failures is Ef(5%) = 0.3554. Hence the expected life at 5%
confidence is:
El(5%) = MTBF/Ef(5%) = MTBF/0.3554 (7)
So the 90% confidence interval (95% − 5%) for the expected life, El(90%), lies between El(95%) and El(5%). A
Poisson failure is one that occurs outside (i.e. before) the established/forecast interval of time. Hence the
probability of Poisson failure is
F(t)p = 1 − e^(−λt) (8)
where: F(t)p = probability of Poisson failure;
λ = the failure rate of the system;
t = time before the failure.
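The expected-life bounds of equations (6)–(7) and the Poisson failure probability of equation (8) can be sketched for the turbine, using its MTBF of 624 days from Table 4 and an illustrative operating time:

```python
import math

# Expected life at the 95% and 5% Poisson confidence levels (Eqs. 6-7)
# using Barringer's factors Ef(95%) = 4.7439 and Ef(5%) = 0.3554, plus
# the probability of a premature (Poisson) failure (Eq. 8).
MTBF = 624.0             # turbine MTBF, Table 4 (days/failure)
EF_95, EF_5 = 4.7439, 0.3554

el_lower = MTBF / EF_95  # expected life at 95% confidence
el_upper = MTBF / EF_5   # expected life at 5% confidence
print(round(el_lower), round(el_upper))  # -> 132 1756 (the 90% interval)

lam = 1 / MTBF
t = 365.0                # illustrative operating time in days
print(1 - math.exp(-lam * t))  # probability of a Poisson failure by day 365
```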
2.3. Characteristic life (η )
The characteristic life of equipment is the interval of time during which the equipment is expected to operate
with minimal or no problems (barring unforeseen circumstances); beyond it the equipment becomes problematic
(i.e. less profitable to operate). In Weibull probability analysis, the characteristic life η is deduced at 36.8%
reliability, or a 63.2% cumulative distribution function (unreliability).
2.4. Cumulative failure
If the Weibull slope β < 1, failures are in the infant-mortality failure mode; β = 1 indicates the chance failure
mode; and β > 1 indicates the wear-out failure mode. Cumulative failure N(t) is the cumulative number of
failures up to time t for the considered system, and can be defined by:
N(t) = λ t^β (9)
where: λ = intercept on the y-axis at time t = 1;
β = Weibull slope (shape factor);
t = cumulative time of failure.
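Equation (9) is the Crow/AMSAA cumulative-failure model; a minimal sketch with illustrative (not fitted) parameters:

```python
# Cumulative failures N(t) = lambda_ * t**beta (Eq. 9).
# lambda_ is the y-axis intercept at t = 1; beta is the Weibull slope.
# beta < 1: infant mortality; beta = 1: chance failures; beta > 1: wear-out.
def cumulative_failures(t: float, lambda_: float, beta: float) -> float:
    return lambda_ * t ** beta

# Illustrative wear-out case (beta > 1): failures accumulate faster with age.
print(cumulative_failures(10, lambda_=0.01, beta=2.0))  # -> 1.0
print(cumulative_failures(20, lambda_=0.01, beta=2.0))  # -> 4.0
```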
The reliability can be defined by
R(t) = exp(−t/MTBF) = exp(−λt) (10)
It can also be expressed in Weibull terms as R(t) = exp(−(t/η)^β), where λ is the constant failure rate and MTBF
is the mean time between failures. MTBF is easier to understand than a risk model for predicting the number of
failures expected to occur during a period (a probability number) [1]. For exponentially distributed failure modes,
MTBF is a basic figure-of-merit for reliability. The failures of most equipment must be analyzed from small
samples; this can be accomplished using the very practical Weibull reliability analysis [20] for each failure mode.
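Both forms of equation (10) can be sketched as follows; the η and β used in the check simply exercise the definition of characteristic life, they are not fitted values:

```python
import math

# Exponential reliability R(t) = exp(-t/MTBF) = exp(-lambda*t) (Eq. 10)
# and the Weibull form R(t) = exp(-(t/eta)**beta).
def reliability_exponential(t: float, mtbf: float) -> float:
    return math.exp(-t / mtbf)

def reliability_weibull(t: float, eta: float, beta: float) -> float:
    return math.exp(-((t / eta) ** beta))

# At t = eta the Weibull reliability is exp(-1) = 36.8% for any beta,
# which is exactly how Section 2.3 defines the characteristic life.
print(round(reliability_weibull(700.0, eta=700.0, beta=3.0), 3))  # -> 0.368
```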
The mathematical probability of failure (i.e. unreliability), Ft, is:
Ft = 1 − [(N − n) + 1]/(N + 1) (11)
where: Ft = probability of failure;
N = total (cumulative) number of failures;
n = failure number (rank).
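Equation (11) applied to the six ordered turbine failures of Table 3 reproduces the F(t) column of Table 2:

```python
# Probability of failure Ft = 1 - ((N - n) + 1)/(N + 1) (Eq. 11),
# where N is the total number of failures and n the failure rank.
N = 6  # six turbine failures, Table 3
for n in range(1, N + 1):
    ft = 1 - ((N - n) + 1) / (N + 1)
    print(n, round(ft, 4))
# First line prints "1 0.1429" (~14.28%, as in Table 2); last, "6 0.8571".
```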
The availability
A = (uptime)/ (uptime + downtime) (12)
The maintainability
M(t) = 1 − exp(−t/MTTR) = 1 − exp(−µt) (13)
where µ = repair rate, the frequency at which maintenance is undertaken.
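Equations (12) and (13) as a sketch; the uptime, downtime and MTTR figures are illustrative, not taken from the tables:

```python
import math

# Availability A = uptime/(uptime + downtime) (Eq. 12) and
# maintainability M(t) = 1 - exp(-t/MTTR) = 1 - exp(-mu*t) (Eq. 13).
def availability(uptime: float, downtime: float) -> float:
    return uptime / (uptime + downtime)

def maintainability(t: float, mttr: float) -> float:
    """Probability that a repair is completed within time t."""
    return 1 - math.exp(-t / mttr)

print(round(availability(618.2, 5.0), 3))    # -> 0.992
print(round(maintainability(10.0, 5.0), 3))  # -> 0.865
```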
The risk priority number
RPN=(S) (O) (D) (14)
where: Severity (S): a rating of the seriousness of each potential effect;
Occurrence (O): a rating of the likelihood of occurrence for each potential failure; and
Detection (D): a rating of the likelihood of detecting the cause of a failure.
Criticality = (Q) (FMRU) (PL) (15)
where the unreliability Q is the probability of failure. The failure-mode ratio of unreliability (FMRU) is the
fraction of the system’s unreliability that can be attributed to the particular failure mode. For example, if an item
has four failure modes, then one mode may account for 40% of the failures, a second may account for 30%, and
the two remaining modes may account for 15% each. The probability of loss (PL) is the probability that the
failure mode will cause a system failure (or a significant loss of performance); this is an indication of the
severity of the failure.
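Equations (14) and (15) can be sketched together; the ratings and probabilities below are illustrative only:

```python
# Risk priority number RPN = S * O * D (Eq. 14) and
# criticality = Q * FMRU * PL (Eq. 15).
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """S, O, D are ratings, typically on 1-10 FMECA scales."""
    return severity * occurrence * detection

def criticality(q: float, fmru: float, pl: float) -> float:
    """Unreliability * failure-mode ratio * probability of loss."""
    return q * fmru * pl

print(rpn(8, 5, 6))  # -> 240
# A mode carrying 40% of the failures (FMRU = 0.40), as in the text above:
print(round(criticality(0.2, 0.40, 0.9), 3))  # -> 0.072
```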
3. Data Collection
This was achieved mainly by monitoring and observing (i) the operation and maintenance of gas turbine GT17
and (ii) failed units in the Afam thermal power station (see the Appendix). Over more than seven weeks,
information and life data were collected from manufacturers’ operation/maintenance manuals, the maintenance
and operations departments’ failure documentation, and discussions/interviews with maintenance and operations
staff. Useful information was also gained concerning similar plants, applying sound engineering judgment
[14, 19, 20]. This study concerns gas turbine GT17, one of twenty-two such turbines at the Afam thermal power
station. The data were collected over the period from January 2004 to June 2011 for some major components,
namely the air filter, air compressor, gas scrubber, combustion chamber and turbine.
The performance data were taken from:
1. the operations department’s daily equipment-downtime logbook;
2. the maintenance department’s daily equipment-repair logbook;
3. the manufacturer’s maintenance/operation manual;
4. information on similar plants obtained via the internet; and
5. interviews with key personnel involved in the maintenance and operation of turbine GT17.
The collected data were used for calculating the unreliability (i.e. probability of failure), consequence of failure,
cumulative number of failures, cumulative time before failure, time before next failure, characteristic life and
cost of exposure for the next risk. The data are shown in Table 3, Table 4 and Table 5. To analyze the data and
undertake the necessary calculations, the following estimates and assumptions were made using personal
judgment as well as prevailing local and world-class practices. However, only engineering and statistical
methods are considered in this study.
4. Engineering method
From an engineering viewpoint of a dataset for the turbine, failure forecasts can be made using good practices
and the Crow/AMSAA plot as described in Abernethy [20]. When the distribution mode of failure is by chance
events (in Weibull analysis, β = 1), a second failure would be predicted to occur when N(t) = λ1 t^β, where N =
cumulative failures, λ1 = intercept on the Y-axis at time t = 1 for cumulative failures, β = Weibull slope, and t =
time. For the wear-out failure mode, indicative of an increasing hazard rate (i.e. instantaneous failure rate), the
Weibull line slope β would be > 1. The probability of failure is constructed with commercially available
software [2], which gives Weibayes estimates of life. The probability of failure (i.e. unreliability), equation (11)
and Table 4 were used in evaluating F(t), the probability of failure of the turbine. Figure 3 shows the
Crow/AMSAA result using β = 3.
5. Discussion and result
From an engineering viewpoint, failure forecasts can be made using good practices of engineering and
Crow/AMSAA plots as shown in Figures 2 and 3. The condition for chance failure is β = 1; wear-out failure
modes give an indication of an increasing hazard rate, when the line’s slope β > 1. The engineering method gives
the failure behaviour and the life of a plant or component based on practical observations, along with an
estimate of financial exposure.
For the statistical method, the first step is to find a statistical measure to use as a yardstick; the value most often
used is the mean time between failures. The other is the expected life of the item. Here the Poisson confidence level
for failures is used. The 90% confidence interval lies between 132 and 1756 days/failure. A more accurate
statistical approach for reducing uncertainty is to obtain more failure data. Reliability often improves by reducing
human errors or failures, bringing expectations of improved availability, decreased downtime, fewer secondary
failures and lower risks.
6. Conclusion and Recommendations
Weibull analysis reveals high-level problems: the graph patterns do not tell what is wrong or where the problem
lies. As a result, the power station’s asset-utilization reports must be used to identify specific problems for
corrective action. Risk problems should be quantified in time and money so that everyone can understand them,
and then fixed on a priority basis so that power generation becomes more efficient and cost-effective (reducing
the cost of operation and maintenance).
Because decision making in practice is often characterized by the need to satisfy multiple goals, the formulation
of multi-criteria decision making is a worthwhile topic for risk analysis research in Afam thermal-power station.
Reliability engineering theory, RCM and total productive maintenance (TPM) policies will provide excellent
guidance for the maintenance management in Afam thermal power station. The Afam electric-power station with
frequent failures needs the implementation of failure-prevention strategies. In a deregulated environment, with
many new plant and equipment designs, capital investments are put at greater and greater risk. This increases the
need for reliability tools for maintenance assessment at the Afam thermal power station and other similar
stations. Recognizing future risks through risk analysis requires knowledge, experience, mental skill, tools and
anti-failure standards assessment, closely linked with operators’ plant-monitoring assessments. With competitive
electric-power generation under deregulation, the Afam thermal power station should have a strong
maintenance-engineering culture and should be able to establish operating and maintenance standards that
would lead to improved reliability and cost-effectiveness, reducing undesirable and unexpected events. Effective
systems engineering requires people with multiple skills, operating experience and general engineering
competence, supported by cost-effectiveness awareness and computer information-management skills. August
[3] points out that flexible, skilled system/plant reliability engineers favourably influence plant operations by
reducing operating costs. In the Afam electric-power station, the lack of understanding of problems, their causes
and options, the lack of value-added benefits or cost-effectiveness, and the combination of a regulated
environment and the traditional maintenance aversion to cost awareness have combined to increase the need for
risk analysis and evaluation. What can be learnt from this study is that careful preparation of the risk analysis,
ensuring a systematic approach with clear aims and goals, is desirable whenever a risk analysis is undertaken.
The desired functions of a system are the main reason the system exists at all. Therefore, risk analysis for the
Afam thermal power station should be based on actual data from the system and its subsystems, with
maintenance policies based on the organization’s mission, goals and objectives.
References
1. Abernethy, R. C. (1996), The New Weibull Handbook, 3rd ed., Publishing Company, Houston.
2. Al-Najjar, S. (1991), On the Selection of the Condition Based Maintenance for Mechanical Systems, in
Holmberg, K. and Folkeson, A. (Ed.), Operational Reliability and System Maintenance, Elsevier, pp. 153-73.
3. August, J. (1999), Applied Reliability-Centered Maintenance, PennWell Publishing, Tulsa, Oklahoma.
4. Aven, T. and Gaarder, S. (1987), Operational Replacement in a Shock Model: Discrete Time, J. Applied
Probability. Vol. 24, pp. 281-7.
5. Barringer, H. P. (1996). Proceeding, Annual Reliability and Maintainability Symposium Cumulative
index pp Cx -29 for LCC, Evans Associates, 604 Vickers Avenue, Durham.
6. Barringer, H.P and Webber, D.P (1995), Where is my data for making reliability improvement. Fourth
International conference on process plant reliability, Gulf Publishing Company, Houston Texas.
7. Barringer, H.P and Webber, D.P (1996), Life Cycle cost tutorial. Fifth International conference on
process plant reliability, Gulf Publishing Company, Houston Texas.
8. Barringer, H. P. (1999), Monte Carlo Simulations, (online serial) http://www.barringer.com/lcc.
9. Barringer, H. P. (2000), Weibull Database, (online serial) http://www.barringer.com/wdbase.htm.
10. Barringer, H. P. (2002), Reliability Engineering Principles, (online serial) http://www.barringer.com/read.htm.
11. Buckland, F. (1999), “Reliability Centered Maintenance Identification of Management and
Organizational Aspect of Importance When Introducing RCM” Licentiate Thesis Division of Quality
Technology and Statistics, Lulea University of Technology, Lulea
12. Buckland, F. and Hannu, J. (2002), Can We Make Maintenance Decisions on Risk Analysis?, Journal of
Quality in Maintenance Engineering, Vol. 8, No. 1, pp. 77-91.
13. Chelbi, A., Waarder, A. and Ramurdhui, A. (1996), An Inspection Strategy for Randomly Failing Machines
to Guarantee Robust Schedules, IEEE Symposium on Emerging Technologies and Factory Automation, IEEE,
Vol. 1, pp. 248-53.
14. Fulton, W. (2001), WinSMITH Weibull probability plotting software, http://www.weibullnew.com
15. Hannu, J. and Buckland, F. (1999), Analysis and Evaluation of Risk Studies, internal report, Vattenfall AB
Vattenkraft, Luleå.
16. IEC 60300-3-9 (1995), Dependability Management – Part 3: Application Guide – Section 9: Risk Analysis
of Technological Systems, CEI/IEC 60300-3-9, International Electrotechnical Commission, Geneva.
17. Kennedy, R. (2003), Examining the Processes of RCM and TPM, Australian Centre for TPM, (online serial)
http://www.plant-maintenance.com/articles/rcmvtpm.shtml.
18. Matthew, S. and Kennedy, D. (2002), Minimizing Equipment Downtime Under Shock Load Conditions.
International Journal of Quality and Reliability Management Vol. 19, No. 1, pp.90-96.
19. Moubray, J. (1991), Reliability-Centered Maintenance, Butterworth-Heinemann, London.
20. Moubray J. (1999), Reliability-Centered Maintenance, An Introduction, Aladon Ltd., Asheville, North
Carolina.
21. Nowlan, F. S. and Heap, H. F. (1978), Reliability-Centered Maintenance, National Technical Information
Service, US Department of Commerce, Springfield, Virginia, AD/A066-579.
22. ReliaSoft (2002), Failure Modes, Effects and Criticality Analysis, (online serial)
http://www.reliasoft.com/fmca.htm.
23. Saranga, H. (2002), Relevant Condition-Parameter Strategy for an Effective Condition-based Maintenance,
Journal of Quality in Maintenance Engineering, Vol. 8, No. 1, pp. 92-105.
24. Townsend, T. (1998), Assessment-The Maintenance Perspective, Maintenance and Asset Management.
Vol. 13, No. 1, pp. 13-2.
25. Wortman, M. A., Klutke, G. and Ayham, H. (1994), A Maintenance Strategy for Systems Subjected to
Deterioration Governed by Random Shocks, IEEE Transactions on Reliability, Vol. 43, No. 3, September,
pp. 439-45.
APPENDIX: Observations and conclusions
Fig. 1: Line diagram of the major components and processes of gas turbine (GT17), gas-supplied by the Nigerian Gas Company
Fig. 2: Crow/AMSAA plot for next failure
Fig. 3: Weibayes estimate of expected life
Table 1: Engineering method results

Parameter                       | Filter  | Compressor | Scrubber | Chamber | Turbine
β (from Crow-AMSAA plot)        | 1.4     | 2          | 1        | 1.3     | 1.3
Time before next failure (days) | 587     | 354        | 743      | 864     | 585
Characteristic life η (days)    | 700     | 560        | 800      | 1150    | 700
β (from Weibayes plot)          | 6       | 4          | 6        | 3.3     | 3
Cost of exposure ($)            | 105,500 | 240,000    | 100,000  | 192,000 | 240,000
Risk rate before η ($/day)      | 555     | 725        | 284      | 243.2   | 760
Risk rate beyond η ($/day)      | 670     | 1,207      | 536      | 363.43  | 840
Table 2: Statistical method results

S/N | TBF (days) | Probability of failure F(t) | F(t) × 100%
1   | 353        | 0.1428                      | 14.28
2   | 454        | 0.2857                      | 28.57
3   | 647        | 0.4285                      | 42.85
4   | 685        | 0.5710                      | 57.10
5   | 788        | 0.7140                      | 71.40
6   | 817        | 0.8570                      | 85.70
Table 3: Failure data arranged in ascending order of magnitude
S/N TBF (days)
1 353
2 454
3 647
4 685
5 788
6 817
Table 4: Statistical results
Parameter Filter Compressor Scrubber Chamber Turbine
Time before failure (day) 618.2 469 743 908 624
MTBF (days/failure) 618.2 469 743 908 624
Failure rate (failure/day) 0.00162 0.002132 0.001346 0.001101 0.001603
Expected life (days) 130 to 1739 99 to 1320 157 to 3091 191 to 2555 132 to 1756
Cost of exposure ($) 484.75 1216.5 336.32 550 934.13
Table 5: GT-17 major subsystem failure history from January 2004 to June 2011
a) Air Filter
S/N Date Failed TBF
(days)
Date
Restored
Cause of Failure
1 05/02/04 780 10/02/04 Filters clogged with contaminants
2 11/09/05 509 17/09/05 Silencers mounting corroded
3 10/01/07 756 07/01/07 Filters blocked with contaminants
4 06/06/08 491 11/06/08 Filter blocked
5 23/01/10 555 03/02/10 Filter blocked
b) Air compressor
1 02/02/04 455 12/02/04 Rotor and Stator blades pitting (blades failure)
2 06/04/05 740 15/04/05 Rotor blades fouling (blades failure)
3 13/12/05 253 27/12/05 Warped rotor (rotor failure)
4 06/07/07 507 21/07/07 Blades failure due to fatigue (blades failure)
5 25/01/08 371 15/08/08 Journal bearing broken (bearing failure)
6 25/01/10 467 01/02/10 Stator blades crack (blade failure)
7 30/06/11 491 01/07/11 Rotor blades highly pitted (blade failure)
c) Scrubber
1 10/11/04 722 17/11/04 Condensate carry over to combustion chamber (NGC re-
heater failure)
2 02/05/05 500 09/05/05 Insufficient gas supply to C.C. (Metering system corroded)
3 07/03/07 610 13/03/07 Wet gas supply to combustion chamber (NGC re-heater
failure)
4 27/07/11 1139 03/08/11 Condensate introduction into combustion chamber (NGC re-
heater failure)
d) Combustion Chamber
1 04/06/04 857 15/06/04 Chamber wall tiles cracked
2 03/03/06 920 19/03/06 Ignition failure
3 09/12/07 562 16/12/07 Wall tiles cracked
4 03/11/11 1294 13/11/11 Chamber over-heated
e) Turbine
1 04/08/04 454 16/08/04 Rotor blades fouling (blades failure)
2 19/07/05 647 17/07/05 Blade cracked (thermal shock) (blade failure)
3 19/10/07 788 27/10/07 Rotor and stator blade damage (thermal failure)
4 03/04/08 353 13/04 /08 Warped Turbine Shafts (shaft failure)
5 12/09/11 685 23/06/11 Rotor blades fouling (blades failure)