This document discusses using data mining techniques to extract useful information from maintenance and compliance data stored in offshore asset databases. Simple reporting and visualization methods can reveal issues like clusters of maintenance tasks being completed together, suggesting past non-compliance. Keyword searches of maintenance data can also uncover issues. The goal is to help owners understand historical regulatory compliance and its impact on risk assessment, and optimize maintenance for safety, efficiency and cost-effectiveness as assets age beyond original expectations.
Data Mining and Offshore Maintenance Management
Data Mining in Offshore Maintenance – Maintenance, Compliance, Plant Performance and Assessing the Risk
Introduction
This document discusses issues facing the offshore industry as mature assets and fields move beyond their original life expectancy, and looks at simple techniques that assist LOPA-based maintenance (Layer of Protection Analysis) and identify the results of actions taken, or not taken, in the past and their resultant impact on safety and revenue streams. It is essential for any asset owner or Duty Holder to be able to extract such information from the data because:
1. It gives them a clear understanding of both current and historical regulatory compliance on an asset. With the implementation of Fee For Intervention and the subsequent strengthening of the HSE's role as an enforcer, it is even more important that the Duty Holder is aware of any breaches of compliance before a safety critical risk becomes the actuality of an incident, with possible health/environmental outcomes and subsequent legal action against company and management. A breach identified in this way (which may have been the result of a previous owner's decisions) may have degraded integrity and/or performance, affected the associated risk analysis, and may require an amended maintenance regime to correct. There is no indication that the HSE will move in the direction of prosecution for significant historical compliance infringements unless they result in a serious incident, but with modern techniques the potential for extracting such information exists, and owners and duty holders would be prudent to be aware of the possibility and take steps to reduce their exposure.
2. On all assets there is a direct correlation between plant performance and correct maintenance regimes. Without trended information on the success of achieving maintenance within the target schedule there can be no clarity on the reasons for failure or ways of predicting failure probability, calculating correct downtime for maintenance outages versus breakdown outages, or reassessing the trade-off between planned maintenance costs and unplanned breakdown costs.
3. As assets age and move outwith their original operating envelope due to equipment change, product change or procedural change, the task of assessing the risk to people, the environment or production becomes more problematic. Various procedural methods to do this exist, but they all rely upon the probability of an event happening. Without accurate information, the calculation of that probability could be significantly incorrect. The ability to extract that accurate information from all the data is therefore important for the safe, efficient and cost-effective operation of a plant.
Although the examples discussed primarily use a high-level reporting function to extract the information, the document will conclude by discussing the need for medium- and low-level reporting and how essential it is for the industry that all the various Computerised Maintenance Management Systems (CMMS) follow Best Practice. It will also discuss the problem of companies upgrading to newer and more powerful software without fully understanding the need for maintenance-driven rather than software-driven implementation, and the risk caused by a lack of regulatory standards in the way these systems are created and populated.
All the data used in this discussion is real, but with asset identification removed to ensure anonymity. As attaching the very large files of raw data would make this document very unwieldy, only the results of the data mining or extracts of the raw data have been included. Note that the nature of this issue means no documented information or research exists, as no company has dedicated the resources or money to such a study and then made public its own non-compliance.
Methodology
Where regulatory audit and maintenance analysis takes place it is still for the large part rooted in the original methodology and technology of the 80s, when first-generation Computerised Maintenance Management Systems were very basic and being introduced on a bespoke basis, and hard paper copies were the principal means of data storage. Over the years we have progressed to a point where all the different types of CMMS have gradually been supplanted by a few extremely powerful pieces of software, with common database relationships and entities such as tag numbers, planned maintenance routines, condition for work, etc., or stored information as spreadsheet registers and documents (which are themselves often treated as an extremely flat type of database). This is true globally in many different industries, businesses and societies, and new techniques and disciplines have been developed to deal with the huge amount of data created and to extract pertinent information from it. In the case of the offshore world, if Best Practice is followed the potential exists to extract information by applying crude data mining techniques to maintenance operations and regulatory records to ensure an asset is continuously operating both safely and efficiently, achieving maximum up-time at minimum cost and minimum resource overheads.
Within this document the term "data mining" is used in its most crude sense, in that while it is still the analysis step of the "Knowledge Discovery in Databases" process, it does not involve the machine learning, artificial intelligence techniques or complex statistical analysis normally associated with that term. Our case is like many similar situations, where "domain knowledge" is the key to successfully achieving the required result; by this we mean a first-hand, in-depth knowledge of the industry and how it conducts itself offshore, the dynamics of its operations, where the data is kept and the likely custodians, and how to cross-correlate that data and extract pertinent information. With this approach we may not necessarily be able to identify or find the answer to all the issues, but we will discover the right questions to ask and where to direct the query.
By far the largest percentage of data to be mined will be held within databases, whether maintenance management systems, materials and manifesting systems, ISSOW systems, or rudimentary MS Access databases. Inherent in all of these is the ability to extract the data in a structured way and generate reports by creating a dataset. This is applicable whether the reporting is done via "Oracle Discoverer", "BIRT Reports", or any other of the available reporting packages, and is simply a statement of all the variables required for given circumstances that returns all the associated values. Thus a dataset could be created for 'ANYTAG History', with variables = 'Tag Number', 'History Summary', 'Full Desc History', 'Due Date', 'Completion Date'. The dataset could then be queried at any time for any tag or system of interest and the results exported in Excel or CSV format so the raw data could be mined and analysed as required. As will be demonstrated, if the CMMS has been correctly set up and information is being correctly recorded, even these basic details can supply a wealth of information in tabular form. From this information textual anomalies may be detected, basic analytical functions performed, and graphs produced that will reveal information buried within the data, making visible issues which may be adversely affecting safety and profitability.
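As an illustration only, the short Python sketch below shows how such an exported dataset might be loaded and queried; the file name and column names are hypothetical, assumed to match the 'ANYTAG History' dataset described above.

import pandas as pd

# Load a raw CMMS dataset export (hypothetical file and column names,
# matching the 'ANYTAG History' dataset described above).
history = pd.read_csv(
    "anytag_history.csv",
    parse_dates=["Due Date", "Completion Date"],
    dayfirst=True,  # offshore records here use day-first dates, e.g. 11/8/87
)

# Query the dataset for any tag or system of interest...
of_interest = history[history["Tag Number"].str.startswith("PSV-", na=False)]

# ...and export the results for further mining and analysis.
of_interest.to_csv("psv_history_extract.csv", index=False)

The same pattern applies whatever the reporting package: define the variables once, then re-run the query for any tag or system as required.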
Completion Clustering
PM routines with a frequency higher than 6-monthly can reveal interesting characteristics when their completion dates are viewed in graphical form. Figure 1 shows a graph created by looking at the dates when a 3-monthly PM was signed off as completed over the last 15 years, then increasing the breadth of the bars to accentuate when several routines were signed off at the same time. The closer the completion dates are to the due dates, the smoother the curve will be as it follows the planned 3-monthly interval. However, in this case very marked steps appear, indicating when several PM routines have been signed off at the same time to create a "completion cluster", with a slight 'S' shape indicating a more prolonged period of non-compliance (in this case coincident with new owners taking over as duty holders).
Figure 1 - Completion Clustering: "3 Monthly PM on SCE Showing Completion Clustering" (completion count, 1 to 106, against completion dates, 11/8/87 to 27/12/14)
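A chart of this kind is straightforward to reproduce from an exported dataset. The sketch below is illustrative only, reusing the hypothetical extract format from the Methodology section; completion clusters show up as vertical jumps in the cumulative staircase.

import pandas as pd
import matplotlib.pyplot as plt

# Completion dates for one 3-monthly PM routine (hypothetical extract).
pm = pd.read_csv("pm_3monthly_history.csv",
                 parse_dates=["Completion Date"], dayfirst=True)

dates = pm["Completion Date"].sort_values()
count = range(1, len(dates) + 1)

# A compliant routine traces a smooth ~3-month staircase; several routines
# signed off on the same day appear as a vertical jump - a completion cluster.
plt.step(dates, count, where="post")
plt.xlabel("Completion date")
plt.ylabel("Cumulative completions")
plt.title("3 Monthly PM on SCE Showing Completion Clustering")
plt.show()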
This is a PM on an SCE called "Fire Ringmain Flushing and Integrity" that at one time was an Assurance Routine, so concerns are immediately raised about historical non-compliance. A closer inspection of the database reveals that until very recently poor and insufficient history had been entered, which prompts further concerns about whether work was actually being done or simply signed off ahead of an ICP/HSE visit. It may well be that the PM actually has no value to the platform, but there is no indication in the database to suggest an engineering review has been done to support this. In this particular case further investigation revealed that the principal function was biocide treatment of the fire ringmain, which had not been done since changes in environmental regulations made the flushing impossible to do without being in breach of those regulations. An integrity survey was done on the fire ringmain to ensure the metal was still in good condition, and when this was proved to be the case a LOPA and engineering review concluded that this PM routine was no longer required and it was accordingly inhibited.
Keyword Search
This is a very basic approach that simply uses the inbuilt "Find" capability of spreadsheet software. Downloading raw data on all PSVs on one particular asset produced 1080 rows of information with 15 columns – 16,200 separate items. Searching within this for keywords commonly used offshore when entering data can indicate areas of regulatory concern, provide focus for any investigation, and indicate to Duty Holders possible areas of higher fiscal risk.
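Once the raw data is out of the spreadsheet, the same search is easy to automate. A minimal sketch follows; the file name and keyword list are illustrative assumptions, not taken from the asset data.

import pandas as pd

# Raw PSV data as downloaded from the CMMS (hypothetical extract).
psv = pd.read_csv("psv_raw_data.csv", dtype=str)

# Keywords commonly used offshore when entering data; each match is a
# potential indicator of regulatory concern worth investigating.
keywords = ["removed", "cancelled", "out of service", "redundant"]

for word in keywords:
    # Case-insensitive search across every column, counting matching rows.
    mask = psv.apply(
        lambda col: col.str.contains(word, case=False, na=False)
    ).any(axis=1)
    print(f"{word!r}: {mask.sum()} rows")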
Within this raw data the word "Removed" appears 48 times in the tag description column of the PSVs, yet the completion dates showed they were still being recertified at the time the data was downloaded. A further 16 were due in 2006 and the work orders were eventually signed off in 2010 with a note in history that they had been removed and were no longer in the controlled copy of the PSV register. This clearly shows that historically the duty holder at that time had not correctly followed any management of change procedure, as the asset register should have been updated and the PM routines inhibited for any equipment which had been removed. It also raises concerns that associated P&IDs and C&Es had not been updated as modifications were made to the plant and its mode of operation. Having this knowledge gave the current owners clearer information on PSV status, enabled them to manage the planned PSV workscope for their next shutdown in a more targeted manner, and precluded any legal action against themselves resulting from the actions of the previous owner.
“Cancelled” appeared 9 times against a Sandfilter system with a comment that the package was out of service and no longer used. The asset register showed this had not been made redundant, so there was increased concern about that duty holder's management of change procedure – was it adequate and was it being followed, and how does this affect the current Duty Holder's operations? Any engineering changes, whether modifications or mothballing/redundancy, will be held in a separate register – possibly more than one if different technical authorities have responsibility for different core projects – so the key indicator of compliance was the correlation of dates between registers and databases:
• What date was the project completed (register)?
• What date were the asset register and pertinent maintenance routines reviewed and updated, and by whom (MMS database)?
• What date is the latest rev of the P&IDs and any relevant C&Es, and do the timelines for these and earlier revs agree with the project completion (duty holder's Document Control)?
It also raised concerns that it was possible to cancel an SCE Assurance routine work order on equipment which was still live in the asset register, and why the duty holder's management system allowed this to happen – was it inadequate management control, an inadequate management system, or lack of competency on the part of those involved? All required further investigation, but the key issue was that while HSE resources could be targeted in a dynamic response to known non-compliance indicators with a high probability of prosecution and application of FFI, the current Duty Holder had already identified the issue, proved that they were not the ones who had breached regulations, and taken steps to address the issue.
Sometimes a keyword search can return a result where the number of matches is a cause for concern even before a more in-depth analysis is performed. A download of the raw data for all PSVs on another asset for the same period returned 1126 rows, and a search for "cancelled" returned 268 rows. There may have been a valid reason why 24% of work orders had been cancelled, but it does imply there is either scope for confusion or poor management of pressurised systems – which may have been a contributory factor in this particular asset being responsible at a later date for a large uncontrolled gas release.
Discrepancy Trending
The discrepancy referred to is the difference between the due date of a PM routine and the date it was actually completed. This discrepancy can be very revealing, but great care must be exercised in how the data is interpreted, and further questions would need to be asked and more in-depth research done before any judgements were made on either compliance or maintenance performance.
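The trend itself is simple to derive from the same kind of extract. A minimal sketch (hypothetical file and column names) follows; negative values, where work was completed ahead of the due date, are deliberately kept, since they also carry compliance information.

import pandas as pd
import matplotlib.pyplot as plt

# Due and completion dates for a 12-monthly PM routine (hypothetical extract).
pm = pd.read_csv("esd_trip_test_history.csv",
                 parse_dates=["Due Date", "Completion Date"], dayfirst=True)

# Days between due date and actual completion; negative means completed early.
pm["Discrepancy (days)"] = (pm["Completion Date"] - pm["Due Date"]).dt.days

pm = pm.sort_values("Due Date")
plt.plot(pm["Due Date"], pm["Discrepancy (days)"], marker="o")
plt.axhline(0, linewidth=0.8)  # compliant completions oscillate about zero
plt.ylabel("Days")
plt.title("Discrepancy Trend - 12M ESD Trip Test")
plt.show()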
Figure 2 shows the discrepancy trend for the 12-monthly ESD trip testing on a surge tank – part of the produced water processing system on a mature asset. The period covered is from 1997 to 2011 and covers two different duty holders. The trend for the first 5 years shows oscillations about zero, which is what you would expect as completion is 'tweaked' to match planned shutdowns of the system and allow full testing (it also shows why we cannot do a statistical analysis by measuring variance or standard deviation – for compliance requirements, critical information is also contained in the negative values). However, in 4 of those years the PM was completed within the same week it became due (a highly unusual occurrence), and as the history entered is of such poor quality it raises concerns that the work was not actually done but simply signed off during these lean years of low maintenance resources.
Figure 2 - Discrepancy Trending: "Discrepancy Trend - 12M ESD Trip Test" (days discrepancy against due date, 01/01/97 to 01/01/11)
Following this period a new duty holder takes over and there are wider fluctuations but increased quality of history, up to and including 2009. The initial reaction to this is that the new Duty Holder is not managing this Assurance routine effectively. However, a closer examination shows that while there is a discrepancy between due date and completion date, the intervals between completions are not untoward if the risks have been correctly assessed, so it appears that offshore, at least, are complying with both the spirit and the actuality of the requirements. There is another interpretation which fits this scenario – there may be no Condition of Work for this routine, i.e. there is hardware redundancy within the system and production does not need to be shut down while the SCE equipment is exercised. If this is the case, then the testing and history up to 2009 are correct, and failure to maintain bypass systems and isolation valves by the current duty holder has resulted in poor maintenance and unneeded fluctuations in completion dates.

The one unambiguous piece of information evident in this dataset is the extremely significant delay in completing the PM routines due in 2010 and 2011. The work order history records that there was an upgrade to the surge tanks in 2010 which made the PMR incorrect, and offshore were still awaiting an updated routine over a year later – clearly a major non-compliance with Management of Change procedures by onshore, which significantly affected any LOPA assessments and increased the probability of an environmental incident occurring. It would also prompt further examination of relevant P&IDs, C&E drawings and Plant Operating Manuals, the content and dates of the latest revisions, and whether offshore personnel have been sufficiently supplied with information and documentation to operate this part of the plant effectively.
Deferral Grouping, Scheduling and Cross-referencing
Appendix 1 shows all of a platform’s deferrals for 2011 that were still open in
November 2011, and it immediately becomes obvious that something is amiss and requires
further investigation: there are 4 work orders for PSV recertification due Aug/Sep 2011, yet
their WO numbers indicate they were actually generated in 2010.
Furthermore, if we group and cross-reference against the asset register we can see that of
the 33 PM routines in deferral, 17 are against gas or gas-related systems, and of these all
but one failed its PM at least once in the last 3 routines. When such information is extracted
from the data, both the owner and the Inspectorate would be in a more informed position to
question why there is a seeming reluctance to recertify PSVs on the gas system, and what
the cumulative risk to the platform is when so much assurance work is overdue.
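A grouping and cross-referencing run of this kind is straightforward to mechanise. The sketch below is illustrative only: it assumes hypothetical extracts of the deferral list, asset register and PM history, and all file, column and value names are assumptions rather than any particular duty holder’s schema.

    # Join open deferrals to the asset register, isolate gas-related
    # systems, and count recent PM failures against each deferred tag.
    import pandas as pd

    deferrals  = pd.read_csv("open_deferrals.csv")   # one row per deferred PM
    register   = pd.read_csv("asset_register.csv")   # tag -> system, SCE flag
    pm_history = pd.read_csv("pm_history.csv",
                             parse_dates=["completion_date"])

    merged = deferrals.merge(register, on="tag", how="left")
    gas = merged[merged["system"].str.contains("gas", case=False,
                                               na=False)].copy()

    def failed_in_last_three(tag):
        # True if any of the last three recorded PM results was a failure.
        last3 = (pm_history[pm_history["tag"] == tag]
                 .sort_values("completion_date").tail(3))
        return bool((last3["result"] == "fail").any())

    gas["failed_recently"] = gas["tag"].apply(failed_in_last_three)
    print(len(gas), "gas-system deferrals,",
          int(gas["failed_recently"].sum()),
          "with at least one failure in the last 3 routines")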
Another point to take from Figure 3 is how important it is to have the dataset
requirements carefully defined before the software query is run. This dataset does not
contain previous deferrals which have been raised and closed out, so we cannot look at the
history for these systems and tags to see if there is a common theme of not performing the
routines; nor do we have the full descriptive comments, which would give us information
either supporting or rebutting the duty holder’s reasons for the deferral. As the whole point
of data mining is the ability to extract information, being presented with a mass of data is
not an issue if the correct techniques are used, so we should insist on being supplied with
the full, comprehensive raw data.
Free Text Validation and Cross-referencing
This is one of the most time-consuming and onerous of data mining tasks in the
offshore environment. Implicit in the design of any database is the requirement for field and
record validation: a set of rules which help give a relational database its power by controlling
table construction and data entry (for example, only allowing certain failure code options in
Synergi, or only the pass/fail/fail-fix options in a CMMS history). This is one of the
properties which makes database analysis possible, and enables a duty holder to interrogate
their records for reliability and failure rates on specific SCEs. Unfortunately, during the time
of low oil prices in the 1990s some companies reduced their operating costs and PM backlog
by cancelling preventative maintenance on safety critical components and instigating
functional testing only: they then inhibited some individual PMRs and listed all the safety
critical valves and initiators in the free text work instructions on one master PM. At the time
this was seen as acceptable because reliability measurement would be against a system,
and failure of any one safety critical component would be a failure for the entire PM,
therefore a deferral assessment would always come out as a higher risk and urge a faster
resolution. In practice it has had the opposite effect, as it has taken some safety critical
elements out of the database validation process and they no longer appear in any fields,
thereby preventing accurate forecasting of a component’s likelihood of failure, and making
management of change and inspection of compliance very much harder. Appendix 2 shows
the work instructions for a level 3 ESD trip assurance PM. The last 4 isolation valves on the
water injection wells are not even listed in the asset register, so it is impossible to record any
history whatsoever against these SCE valves. The most probable reason this came about is
that the function of those wells has changed at some time following a workover, and not all
records have been amended. Further inspection would be needed to determine when this
was done, why this PM has been signed off in the past when its content is clearly wrong,
which tags are now the relevant ones, and whether there is another routine somewhere to
ensure correct operation of the equipment in the event of an incident.
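Part of this task can nevertheless be mechanised: sweep the free-text work instructions for anything that looks like a valve tag and compare the result against the asset register. The sketch below assumes a tag pattern modelled on those shown in Appendix 2 (two digits, ‘XV’, four digits) and invented file names.

    # Extract valve tags from a master PM's free-text work instructions
    # and flag any that are absent from the asset register.
    import re
    import pandas as pd

    TAG_PATTERN = re.compile(r"\b\d{2}-XV-\d{4}\b")

    with open("master_pm_work_instructions.txt") as f:
        instructions = f.read()
    register_tags = set(pd.read_csv("asset_register.csv")["tag"])

    found   = set(TAG_PATTERN.findall(instructions))
    orphans = sorted(found - register_tags)

    print(f"{len(found)} tags referenced in free text, "
          f"{len(orphans)} missing from the asset register:")
    for tag in orphans:
        print(" ", tag)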
Schedule Compliance
As duty holders are currently placing great stress on schedule compliance, this
section is included to address common misunderstandings and to demonstrate some of the
pitfalls of taking data at face value without understanding that it is a virtual representation of
a real-world scenario and can generate distortions of reality.
In the onshore world there are many and contradictory opinions on what schedule
compliance should be used for, how it should be applied, and what timescale it should
incorporate. This is even more the case offshore, where different operators and duty
holders, both past and present, have evolved different ways of working, and attempts to
introduce industry-wide practices continue to defeat the best of intentions. Offshore is also
unique in that each asset can be viewed as another planet – a totally encapsulated and
complex world that can function autonomously provided all its inputs and outputs are met.
However, the functioning of these inputs and outputs is often outside platform control, and
their failure to operate according to plan cannot be predicted and may be crucial, both
directly and indirectly, to operations.
• An asset has 75% of resource hours scheduled for one week but only achieves 40%: is
this bad planning? No – in this case bad weather disrupted the flying programme, so
14 maintenance personnel spent 4 hours on 2 separate days dressed and ready to go
home before the flying was cancelled, and the resources were therefore not available
to liquidate the planned work.
• Work is scheduled to be coincident with a pipeline outage that does not take place
because of issues on another platform – again the planned work for that week will not
be done and compliance will appear to be poor, when in fact more important yet
non-scheduled tasks were done instead.
These are emergent properties of the offshore industry and show how external
timeframes can impact on schedule compliance. Clearly a weekly compliance KPI, as used
by many offshore operators, is actually of little value.
Suppose an asset regularly gets 80% compliance week upon week. Is this asset
doing well? The key point is not that it has achieved 80% compliance, but that it has 20%
non-compliance and we have no idea what work was scheduled but didn’t get done. There
may have been Assurance routines on SCE items which were already 3 weeks overdue, or
repair work associated with a RAR – without the detail to go with the figures and a focus on
non-compliance nobody has a clear idea of how well the asset is performing.
There will never be a way of measuring performance which meets everyone’s
requirements, but just looking at these two scenarios we can see that, from the HSE
perspective, any compliance figures would need to be broken down according to their
safety critical attributes to get a true indication of how work is being prioritised. A weekly
compliance figure is clearly of little value, as it is more important to see that the asset
rescheduled and reprioritised to deal with unexpected events in the correct way than
whether they did on the Thursday what they had said they would be doing several days
previously. Most PM routines have an interval of 3 months or longer, with higher-frequency
routines normally being operational checks; applying the above conclusion and Shannon’s
Sampling Theorem – the measurement interval must be less than half the period of the
shortest cycle we wish to resolve, here 3 months – we need a schedule compliance period
of less than 6 weeks and more than 1 week. This suggests aligning with the 28 days
overdue rule, which requires an Assurance routine to be either done, or risk assessed and
a deferral raised. Therefore, for schedule compliance to have any real use and validity, it
needs to be monthly and focussed on non-compliance and task criticality.
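Such a measure is simple to produce from the scheduling data already held. A hedged sketch follows: it assumes a flat extract of the schedule with a boolean 'completed' column and a 'safety_critical' flag, both of which are illustrative names rather than any real CMMS field.

    # Monthly, non-compliance-focussed breakdown: for each calendar month,
    # count scheduled tasks that were NOT completed, split by criticality.
    import pandas as pd

    sched = pd.read_csv("schedule.csv", parse_dates=["scheduled_date"])
    sched["month"] = sched["scheduled_date"].dt.to_period("M")

    missed = sched[~sched["completed"]]
    breakdown = (missed.groupby(["month", "safety_critical"])
                       .size()
                       .unstack(fill_value=0))
    print(breakdown)   # what didn't get done, and how critical it was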
Conclusion
In the years since the Cullen Report the offshore industry has changed far beyond
what it was, and what it was predicted to be. At the same time as material aspects have
been changing, new ideas, knowledge and expectations have had to be incorporated into
the everyday life of the industry and its people. The Macondo blowout and events on the
Elgin highlighted the potential for major incidents within the industry, while at the same time
installations both old and new have had to respond to a changing product and new regulations.
The principal tool for the management of maintenance is a relational database, which
may come in various proprietary forms – SAP, Maximo etc. The power of a relational
database lies in the structured way that the data is organised and linked, and the ability to
extract meaningful information from that data by means of reports. Many different user
groups will have the need to both input data and extract that information, and every user
group has the same importance in the successful operation of the CMMS database – and
group importance can never be based on company hierarchy in any database, as the lowest
rung of the ladder has to understand the system and input correct data for the higher rungs
to be able to extract valid information. There will also be different levels of reporting required
for each user group. At the lowest level of reporting complexity will be those performing
tactical searches which may well be possible with the simple inbuilt capabilities of the
database (assuming it has been set up with that in mind). At the highest level will be those
extracting information to analyse safety and revenue stream performance, ensure
compliance and search for non-optimum maintenance regimes. In the middle will be those
who require information on matters falling between tactical and strategic e.g. shutdown
preparation or project related research.
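By way of illustration, a report at the tactical end of that range – say, open and overdue work orders against safety critical elements – may amount to no more than a single query. The sketch below uses SQLite for portability; the work_orders and asset_register tables and their columns are invented for the purpose, not any real CMMS schema.

    # Tactical report: open, overdue work orders on safety critical tags.
    import sqlite3

    conn = sqlite3.connect("cmms.db")
    rows = conn.execute("""
        SELECT w.wo_number, w.tag, w.due_date
        FROM   work_orders w
        JOIN   asset_register a ON a.tag = w.tag
        WHERE  a.sce_flag = 1
          AND  w.status   = 'OPEN'
          AND  w.due_date < DATE('now')
        ORDER  BY w.due_date
    """).fetchall()
    for wo_number, tag, due_date in rows:
        print(wo_number, tag, due_date)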
Figure 4 demonstrates a mechanism for investigating regulatory non-compliance,
augmented by cyclical data mining of each asset’s maintenance databases and spreadsheet
records by the DH. This method will not only fill the gaps between the fixed-point in-depth
inspections by the HSE and ICP; historical non-compliance will also be detected, and duty
holders guided towards rectifying omissions of which they may be unaware (possibly as the
result of blind inheritance from a preceding duty holder).
[Figure 4: Schematic of Compliance Investigation – increasing complexity of assurance
plotted against time, showing the addition of continuous and dynamically targeted in-depth
compliance inspection by the HSE and ICP to the existing procedure]
The information gathered from this technique can also be fed into the LOPA appraisal
process to give a more accurate assessment of PFD for a system (and also possibly extract
lost knowledge from a previous DH’s records). It is expected that as such a programme rolls
forward, non-compliance in the past can be addressed as is deemed appropriate, and any
ticking time-bombs caused by historical non-compliance can be defused by preventative
inspection or action before they give rise to a major incident.
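To see why the mined test interval matters to a LOPA, recall the standard low-demand approximation PFD_avg ≈ λT/2, where λ is the dangerous undetected failure rate and T the proof-test interval; if the records reveal a real interval twice that assumed in the assessment, the average PFD roughly doubles. A back-of-envelope illustration with invented numbers:

    # Standard low-demand approximation: PFD_avg ~ lambda * T / 2.
    # All figures below are invented for illustration only.
    lam = 2e-6                   # dangerous undetected failure rate, per hour
    assumed_interval_h = 8760    # 12-month proof test assumed in the LOPA
    actual_interval_h  = 17520   # ~24 months, as revealed by the CMMS history

    pfd_assumed = lam * assumed_interval_h / 2
    pfd_actual  = lam * actual_interval_h / 2
    print(f"assumed PFD {pfd_assumed:.2e}, actual PFD {pfd_actual:.2e}")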
This document also agrees with the findings of KP3 that there is a poor
understanding of maintenance issues within the offshore industry, and holds that this is
evident at all levels and has resulted, in many cases, in maintenance being run by software
set up by IT professionals who do not fully understand the actuality of the offshore
maintenance procedural system. A CMMS database creation or enhancement team should
consist of IT professionals, maintenance personnel at all levels, management – in fact all
user groups – and would soon demonstrate that in its most basic role the CMMS is a tool
used by maintenance people for a particular task, just like a spanner or screwdriver, and as
such it must be fit for purpose: it would be folly to work in ways dictated by software, just as
it would be folly to use a spanner as a hammer because that is what a carpenter told you to
do. It is unfortunate that there is no legislation to stipulate how a basic CMMS should be
constructed and used, as this would ensure correct usage, make non-compliance evident
and increase safety, as well as providing a means to increase productivity and plant
up-time. Until such time as a Best Practice Guide is produced for a maintenance-driven
procedural regime, data mining of existing records provides the in-depth inspection
required to demonstrate any disconnect between what is being reported and what is
actually happening, thus helping the industry focus more clearly. The information is already
out there to help us achieve a step change that will benefit everyone, both onshore and
offshore; all we need to do is take the first stride.
Appendix 1: Live Deferrals November 2011
Appendix 2: Work Instruction For a Level 3 ESD Trip Test
ESSENTIAL ISOLATION VALVES
Wellhead Wing Valves
10-XV-#### Production Wellhead Wing Close Pass/Fail
C1001 First Stage Separator
Valve Tag No. Valve Description Failure Action
10-XV-1116 Oil Inlet from HP Header Close Pass/Fail
10-XV-1121 Oil Outlet Close Pass/Fail
10-XV-1120 Produced Water Outlet Close Pass/Fail
10-XV-1118 Drain To Surge Close Pass/Fail
10-XV-1117 Flare Open Pass/Fail
10-XV-1119 Jet Wash Inlet Close Pass/Fail
C1002 Second Stage Separator
Valve Tag No. Valve Description Failure Action
10-XV-2600 Oil Inlet from IP Header Close Pass/Fail
10-XV-1130 Oil Outlet Close Pass/Fail
10-XV-1004 Produced Water Outlet Close Pass/Fail
10-XV-1129 Drain To Surge Close Pass/Fail
10-XV-1126 Flare Open Pass/Fail
10-XV-1127 Jet Wash Inlet Close Pass/Fail
C1003 Third Stage Separator
Valve Tag No. Valve Description Failure Action
10-XV-1500 Produced Water Outlet Close Pass/Fail
10-XV-1189 Drain to Surge Close Pass/Fail
10-XV-1182 Flare Open Pass/Fail
10-XV-1139 Jet Wash Inlet Close Pass/Fail
C1004 Test Separator
Valve Tag No. Valve Description Failure Action
10-XV-1181 Oil Inlet from Test Header Close Pass/Fail
10-XV-1185 Oil Outlet Close Pass/Fail
10-XV-1183 Produced Water Outlet Close Pass/Fail
10-XV-1189 Drain To Surge Close Pass/Fail
10-XV-1182 Flare Open Pass/Fail
10-XV-1187 Jet Wash Inlet Close Pass/Fail
10-XV-1184 C1001/C1004 Crossover Line Close Pass/Fail
G1002B/C Export Pumps and Pipeline
Valve Tag No. Valve Description Failure Action
10-XV-1141 B Pipeline Pump Discharge Close Pass/Fail
10-XV-1142 C Pipeline Pump Discharge Close Pass/Fail
10-XV-1353 Oil Export Riser ESDV Close Pass/Fail
C1006A/B Surge Tanks
Valve Tag No. Valve Description Failure Action
10-XV-1149 Surge Outlet To C1002 Close Pass/Fail
K1301/2 and K1501/2 Gas Compression
Valve Tag No. Valve Description Failure Action
13-XV-1102 C1003 Outlet Close Pass/Fail
13-XV-1101 13-XV-1102 Bypass Close Pass/Fail
13-XV-1414 10 RVP Bypass Close Pass/Fail
13-XV-1106 C1303 Liquid Outlet Close Pass/Fail
13-XV-1103 K1302 Flare Open Pass/Fail
13-XV-1051 C1002 Outlet Close Pass/Fail
13-XV-1052 13-XV-1051 Bypass Close Pass/Fail
13-XV-1310 C1306 Liquid Outlet Close Pass/Fail
13-XV-1053 C1302 Liquid Outlet Close Pass/Fail
13-XV-1055 K1301 Flare Open Pass/Fail
13-XV-1002 C1001 Outlet Close Pass/Fail
13-XV-1001 13-XV-1002 Bypass Close Pass/Fail
13-XV-1005 K1301 Outlet Close Pass/Fail
15-XV-1001 K1501 Flare Open Pass/Fail
15-XV-1051 C1501 Liquid Outlet Close Pass/Fail
15-XV-1052 K1502 Flare Open Pass/Fail
15-XV-1003 K1502 Discharge Close Pass/Fail
15-XV-1500 HP Flare Open Pass/Fail
C1401 Gas Dewpoint
Valve Tag No. Valve Description Failure Action
14-XV-1010 C1401 Drain To Surge Close Pass/Fail
14-XV-1014 HP Flare Close Pass/Fail
14-XV-1007 C1401 Flare Open Pass/Fail
14-XV-1006 Liquid Return To C1003 Close Pass/Fail
14-XV-1008 Glycol Outlet Close Pass/Fail
V1501 Gas Import/Export System
Valve Tag No. Valve Description Failure Action
15-XV-5398 Gas Import Metering Sample Close
15-XV-1219 Gas Import/Export ESDV Bypass Close
15-XV-5206 Gas Import to Dewpoint Close
15-XV-5208 Gas Import to Gas Lift Close
15-XV-5250 Gas Import to Gas Lift Bypass Close
15-XV-5205 Gas Import to Fuel Gas Close
15-XV-5294 Gas Import/Export Pipeline Blowdown Open
15-XV-5212 Gas Import Scrubber Drain Close
15-XV-5235 Gas Import Heaters Inlet Close
15-XV-5199 Gas Import Heaters Bypass Close
15-XV-5202 Gas Import 1st PCHE Blowdown Open
15-XV-5203 Gas Import Heaters Interstage Close
15-XV-5234 Gas Import 2nd PCHE Blowdown Open
15-XV-1218 Gas Import/Export ESDV Close
15-XV-1220 Gas Import/Export Pipeline Purge Close
15-XV-1225 Gas Import/Export SSSV V3 Close
15-XV-5295 Gas Export Metering Inlet Valve Close
15-XV-5296 Gas Export Metering Inlet Bypass Close
15-XV-5397 Gas Export Metering Sample Valve Close
15-XV-5292 Gas Export Metering Blowdown Open
15-XV-5209 Gas Export Metering Outlet Valve Close
15-XV-5390 Gas Export Metering Outlet Bypass Close
System 50 Fuel Gas
Valve Tag No. Valve Description Failure Action
50-XV-1036 A' Rolls Royce Gas Supply Close Pass/Fail
50-XV-1035 B' Rolls Royce Gas Supply Close Pass/Fail
50-XV-1002 Fuel Gas System Supply Close Pass/Fail
50-XV-1005 C5002 Filter/Separator Flare Open Pass/Fail
50-XV-1008 C5001 Scrubber Flare Open Pass/Fail
50-XV-1011 C5001 Scrubber Liquid Outlet Close Pass/Fail
50-XV-1012 C5002 Filter/Separator Liquid Outlet Close Pass/Fail
50-XV-5279 Fuel Gas Scrubber Drain to C1003 Close
50-XV-5298 Fuel Gas Scrubber Drain to Surge Open
V1505 Gas Lift
Valve Tag No. Valve Description Failure Action
15-XV-1101 Gas Lift System Inlet Close
15-XV-1102 Gas Lift System 15-XV-1101 Bypass Close Pass/Fail
15-XV-1718 Gas Lift Flare Open Pass/Fail
15-XV-1919 Well M18 Inlet Close Pass/Fail
15-XV-2219 Well M42 Inlet Close Pass/Fail
15-XV-2119 Well M47 Inlet Close Pass/Fail
15-XV-1600 Well M48 Inlet Close Pass/Fail
15-XV-2319 Well M51 Inlet Close Pass/Fail
15-XV-5399 Well M54 Inlet Close Pass/Fail
Water Injection Wellheads
Valve Tag No. Valve Description Failure Action
31-XV-1337 Well M03 Wing Valve Close Pass/Fail
31-XV-1155 Well M06 Wing Valve Close Pass/Fail
31-XV-1125 Well M08 Wing Valve Close Pass/Fail
31-XV-1113 Well M11 Wing Valve Close Pass/Fail
31-XV-1107 Well M16 Wing Valve Close Pass/Fail
31-XV-1313 Well M21 Wing Valve Close Pass/Fail
31-XV-1143 Well M23 Wing Valve Close Pass/Fail
31-XV-2902 Well M35 Wing Valve Close Pass/Fail
31-XV-2914 Well M39 Wing Valve Close Pass/Fail
31-XV-1349 Well M56 Wing Valve Close Pass/Fail
Definitions
Asset Register A list of all equipment and vessels on an installation, including whether each
item is an SCE, what system it belongs to, whether it is operational or
mothballed etc. (also called a ‘Tag Register’)
Completion Date The date that a work order is completed and signed off as such within the
CMMS (may be called ‘Actual Finish Date’ in some systems). Not to be
confused with ‘Committal’ or similar descriptions, which indicate the work
order has passed supervisor audit and has been placed into History.
CMMS Computerised Maintenance Management System. A database which is used
to manage equipment maintenance and record work done. Offshore the
main systems are SAP and Maximo, but others are still in use.
Data Mining The analysis step of extracting information from a very large mass of data.
The information is then available for statistical analysis or techniques such
as dry laboratories (the process of cross-referencing information from
separate databases in computer-generated models)
Dataset A set of data returned from a database when a software query is run.
Due Date The date a Planned Maintenance routine is due to be completed. May also
be called ‘Target Finish Date’
Discrepancy The difference between the due date and the completion date (as used
within this document)
PSV Pressure Safety Valve. A mechanical safety device that is designed to
operate at a set pressure.
Bibliography
1. Maintenance System Assessment: Guidance Document HSE RR237
2. Analysis of Inspection Reports From KP3 HSE RR748
3. Johnson, J. & Picton, P. (1994) Concepts in Artificial Intelligence, The Open University Press
4. Page, S.E. (2009) Understanding Complexity, The Teaching Company & University of Michigan