A case study paper on equipment availability data analysis.
Tracking bottling equipment line uptime and downtime is a common metric for bottling production lines. The runtime and downtime, along with the reasons for being down, are routinely and semi-automatically recorded. The data is often summarized using the exponential distribution and reported as MTBF and MTTR.
During the design of a new bottling line, the design team used the recorded data from existing lines and equipment to estimate the proposed line availability. If the new line could shorten the run time to accommodate a high mix of products and improve the line availability and thus throughput, the new line would permit significant warehouse savings.
The experienced operator, maintenance, and engineering teams knew that the line availability improved as the run duration increased. After the initial setup, the line operator and maintenance crew continued to adjust and improve the operation of the bottling line, improving the line availability over time. Availability was not a constant value independent of the run duration, and the existing calculations based on MTBF and MTTR did not reflect this behavior.
This paper examines the use of expected values of the fitted distributions for uptime and downtime, rather than MTBF and MTTR. The expected values permit the analysis to study the changes in availability as the run duration changes. As a result, the design team's analysis could trade off the run duration and associated throughput against the expected warehouse requirements and cost savings for an optimal bottling line design. This paper primarily explores the equipment analysis and availability calculations.
Equipment Availability Analysis
Fred Schenkelberg, Ops A La Carte, LLC
Angela Lo, Kaiser Permanente
Key Words: Availability, Data Analysis, Repairable System
SUMMARY & CONCLUSIONS

Tracking bottling equipment line uptime and downtime is a common metric for bottling production lines. The runtime and downtime, along with the reasons for being down, are routinely and semi-automatically recorded. The data is often summarized using the exponential distribution and reported as MTBF and MTTR.

During the design of a new bottling line, the design team used the recorded data from existing lines and equipment to estimate the proposed line availability. If the new line could shorten the run time to accommodate a high mix of products and improve the line availability and thus throughput, the new line would permit significant warehouse savings.

The experienced operator, maintenance, and engineering teams knew that the line availability improved as the run duration increased. After the initial setup, the line operator and maintenance crew continued to adjust and improve the operation of the bottling line, improving the line availability over time. Availability was not a constant value independent of the run duration, and the existing calculations based on MTBF and MTTR did not reflect this behavior.

This paper examines the use of expected values of the fitted distributions for uptime and downtime, rather than MTBF and MTTR. The expected values permit the analysis to study the changes in availability as the run duration changes. As a result, the design team's analysis could trade off the run duration and associated throughput against the expected warehouse requirements and cost savings for an optimal bottling line design. This paper primarily explores the equipment analysis and availability calculations.

1 INTRODUCTION

The plethora of bottle sizes and flavors, even for a single brand of beverage, necessitates flexible bottling equipment capable of 'change overs' between flavors and bottle sizes. The equipment for bottling originally worked with only one bottle size and shape. As market demands increased, the equipment continued to evolve and now permits the same bottling line to fill, label and box a relatively large selection of bottle sizes. A flavor change requires only the cleaning of the filling equipment and changing the labels, creating the preference to fill many flavors for one bottle size whenever possible. In contrast, a bottle size change tended to require 2 to 4 hours to reset the bottle alignment guides, chutes and other equipment and supplies.

A scheduling team worked out the production schedule well in advance with the intent to maximize the line uptime by avoiding bottle size changes. Yet, the bottling line design team was asked to explore increasing throughput by increasing the availability of the overall line with both engineering and layout changes. For example, one consideration was whether purchasing dedicated equipment for each bottle size increased throughput sufficiently to offset the cost of the additional (and often idle) equipment. Another consideration was the use of redundant pieces of equipment, especially those prone to extended downtime due to a major repair.

While exploring the effectiveness of increasing throughput by improving the overall line availability, we also need to consider the tradeoff between throughput and inventory costs. For example, in order to increase the line availability and hence the throughput, we should prioritize minimizing bottlenecks during the process. Therefore the focus of this paper is on the 'filler' equipment, as it is the line bottleneck. Increasing the throughput of the filler will permit the line to produce the same quantity in less time. This frees up the line for other production and reduces the quantity of finished goods inventory required.

The design team had a line throughput modeling software package, which included buffer sizing and permitted dwell times for the contents at specific temperatures or between bottling and sterilization equipment. They also knew from experience and simple data analysis that the longer duration runs with a single bottle size tended to have better throughput (equipment availability) performance during the later stages of the run. Anecdotally, they knew that the first 12 hours of a run include a significant number of adjustments, which improved the ability of the line to run smoothly.

The existing method within the plant to determine equipment availability used MTBF and MTTR and the underlying assumption of the exponential distribution. The design team recognized the lack of time dependence and therefore asked us to perform the data analysis.
1.1 Project Question

The basic question explored in this paper is just one of many analyses performed in support of the design team. One question was how to properly model the equipment data such that the design team could explore the differences in equipment availability over time. For example, with no equipment design changes, was it possible to achieve suitable throughput with only 4-hour runs rather than 12-hour runs? Another was whether the demonstrated throughput after extended runs suggested what was possible if the equipment design made 'change overs' that did not then require adjustments to improve performance.

This paper will explore one piece of equipment, the filler, and fit appropriate distributions to the data. The fitted distributions for the uptime (operating) and the downtime (under repair) permit the calculation of the equipment availability at various run durations.

1.2 Data

The data has been disguised to shield the equipment manufacturer and bottling plant from identification. While the actual data has a linear transformation, the trends have remained the same. Furthermore, the codes for downtime, which included blockages, jams, alignment issues, fill sensor readings, and many more, have also been altered to represent generic reasons unrelated to the actual reasons. For the purpose of this discussion the downtime reasons are immaterial.

The actual raw data included downtime for shift change, meetings, scheduled maintenance, and lack of raw materials. We removed such data since the purpose of the analysis was to focus only on the individual piece of equipment.

Condition                               Start                  End
Supply Tank Low Level                   Sep/24/2007 04:50:18   Sep/24/2007 04:52:23
Capper Infeed Star Jam                  Sep/24/2007 05:04:19   Sep/24/2007 05:08:29
Capper Infeed Star Jam                  Sep/24/2007 05:08:42   Sep/24/2007 05:17:28
Blocked - Discharge Conveyor Stopped    Sep/24/2007 05:51:19   Sep/24/2007 05:51:51
Discharge Jam Alarm At S203             Sep/24/2007 05:52:28   Sep/24/2007 05:52:58
Discharge Jam Alarm At S203             Sep/24/2007 05:52:59   Sep/24/2007 05:54:30
Jog Mode Selected                       Sep/24/2007 05:55:34   Sep/24/2007 05:58:31
Discharge Jam Alarm At S204             Sep/24/2007 06:00:27   Sep/24/2007 06:00:32
Filler Run Switch Off                   Sep/24/2007 06:33:54   Sep/24/2007 07:17:03
Jog Mode Selected                       Sep/24/2007 07:47:39   Sep/24/2007 07:53:02
Jog Mode Selected                       Sep/24/2007 07:56:55   Sep/24/2007 07:58:56
Door 6 Open                             Sep/24/2007 08:34:11   Sep/24/2007 08:42:50

2 CURRENT MEASURES

The following analysis illustrates the plant's methods for calculating the equipment availability and throughput.

2.1 MTBF

The unbiased estimator for the exponential distribution's single fitting parameter, θ, is

θ̂ = (total operating time) / (number of downtime events)    (1)

where θ is called the MTBF by definition within the factory. The operating time is determined by summing all the time segments representing when the filler equipment was actually filling or ready to fill bottles. The number of downtime events is just the simple count of events that occurred; with the filtered data, only events associated with the filler equipment are counted, thus providing the filler equipment's average uptime.

As is practice within the factory, the MTBF value is determined by calculating MTBF over many similar bottle size runs. As an example, the data for the 'small bottles' provides an estimate of MTBF of 46.5 minutes.

2.2 MTTR

Using the same formula with the substitution of downtime for run time, and again assuming an exponential distribution, the factory personnel calculate (what they defined as) the MTTR, or average downtime:

MTTR = (total downtime) / (number of downtime events)    (2)

Using the same dataset as for MTBF and making the substitution of downtime for runtime, we find an MTTR of 2.45 minutes.

2.3 Availability

The well known formula for availability,

Availability = MTBF / (MTBF + MTTR)    (3)

was given as the reason for estimating the MTBF and MTTR values by factory personnel. Using the values provided and the availability formula (3), we find an average filler availability of 95% over the recent 6 months of operation.

2.4 Throughput

The filler equipment has the capability to fill bottles at a rate of approximately 425 bottles a minute, and the equipment has the capability to run much faster for short periods of time. Plus, for restarting (after clearing a bottle jam, for example) or when troubleshooting, the filler has a run mode that is much slower. On average the filler is considered to have an average fill rate of 400 bottles per minute. The throughput calculation is:

Throughput = Fill Rate × Availability    (4)

Thus, for small bottles and this particular filler, the average throughput is 380 bottles per minute.
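The plant's calculation in equations (1) through (4) is simple to reproduce from the filtered event log. The following is a minimal sketch and not the plant's actual reporting code: the record structure, the run window, and the list of non-equipment reasons are illustrative assumptions, and the 400 bottles/minute fill rate is the average rate noted above.

```python
from datetime import datetime, timedelta

# Illustrative reasons to exclude (shift change, meetings, etc.); actual codes differ.
NON_EQUIPMENT = {"Shift Change", "Meeting", "Scheduled Maintenance", "No Raw Material"}

def parse(ts):
    return datetime.strptime(ts, "%b/%d/%Y %H:%M:%S")

# Downtime records as (condition, start, end), in the style of the sample log above.
downtime_log = [
    ("Supply Tank Low Level",  parse("Sep/24/2007 04:50:18"), parse("Sep/24/2007 04:52:23")),
    ("Capper Infeed Star Jam", parse("Sep/24/2007 05:04:19"), parse("Sep/24/2007 05:08:29")),
    ("Shift Change",           parse("Sep/24/2007 06:00:00"), parse("Sep/24/2007 06:20:00")),
]

run_start = parse("Sep/24/2007 04:00:00")   # observed run window (illustrative)
run_end   = parse("Sep/24/2007 12:00:00")

equipment_events = [(c, s, e) for (c, s, e) in downtime_log if c not in NON_EQUIPMENT]

all_down   = sum((e - s for (_, s, e) in downtime_log), timedelta())
equip_down = sum((e - s for (_, s, e) in equipment_events), timedelta())
operating  = (run_end - run_start) - all_down   # time filling or ready to fill
n = len(equipment_events)

mtbf = operating.total_seconds() / 60 / n       # eq. (1), minutes
mttr = equip_down.total_seconds() / 60 / n      # eq. (2), minutes
availability = mtbf / (mtbf + mttr)             # eq. (3)
throughput = 400 * availability                 # eq. (4), 400 bottles/min average fill rate

print(f"MTBF {mtbf:.1f} min, MTTR {mttr:.2f} min, "
      f"availability {availability:.3f}, throughput {throughput:.0f} bottles/min")
```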
Finally, in order to schedule the line to produce a desired amount of filled bottles, the scheduling department would divide the amount desired by the average throughput. After applying 'historical knowledge' to adjust the run schedule to a slightly longer duration for short runs and a slightly shorter duration for long runs, relative to the average run duration, the scheduling department would publish the factory schedule.

3 THE DILEMMA

Anecdotally, the design team and factory personnel know the longer runs tend to produce more bottles per hour than short runs. Yet, the values used to calculate equipment and line availability do not reflect the changing nature of the equipment operation.

The use of exponentially based distributions and availability calculations does not permit the team to consider different run times and the associated availability values, which vary with run duration. Knowing the equipment's capability when operated over a long run may suggest to the design team that altering the equipment set-up methods may reduce downtime sufficiently to permit shorter runs. Or, they may find that even the better equipment availability in the latter parts of a long run may not be sufficient to provide the cost savings anticipated, thus suggesting the use of redundant sets of equipment to improve line availability.

Another troublesome unknown is the rate of change of equipment and line availability. A rapid or slow change would suggest different strategies to design the improved line. The same information on the time dependency of availability would also permit additional accuracy in line scheduling, even for the current line configuration.

The current data analysis methods do not provide sufficient information related to the changing equipment availability. Therefore, the design team decided to employ data analysis that included the time element and the associated changes in equipment availability.

4 GRAPHICAL ANALYSIS

The Mean Cumulative Function (MCF) is a non-parametric graph of the cumulative failures plotted versus time. The plot covers 6 months of operations for one piece of equipment on the production line. There are approximately 40 different runs (different bottle size/flavor configurations, or 'setups').

Overall, from this plot, which appears to be a fairly straight line, the conclusion is that the system is not improving or degrading over time as the repairs occur. It remains at approximately the same condition, or failure rate, over various length runs (Trindade and Nathan 2006). This is in conflict with the common knowledge within the factory, where over the time of the run the equipment tends to run with fewer failures. Furthermore, it supports the use of simple constant failure rate estimates for scheduling and the improved line design decisions.

Taking a closer look at the underlying data, we noticed that only a few of the runs lasted more than one or two shifts. Some flavors only required a small quantity of bottles filled to keep up with demand, while only a few commanded a large demand. It is the same equipment for short or long runs, and the design team desired information that quantified the changing nature of the failure rates for various lengths of planned runs.
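For reference, a simplified sketch of how a mean cumulative function can be computed from run data follows. It assumes a hypothetical data structure (each run as a censoring time plus a list of failure times measured from the start of the run) and a basic MCF estimator; it is not the exact procedure used to produce the plot discussed above (see Trindade and Nathan 2006 for the method).

```python
# Each run: (censoring_time_minutes, [failure_times_minutes from start of run]).
# Values here are placeholders, not the factory data.
runs = [
    (480.0, [12.0, 35.0, 36.5, 120.0, 410.0]),
    (960.0, [8.0, 9.5, 22.0, 300.0, 640.0, 910.0]),
    (240.0, [15.0, 18.0, 200.0]),
]

def mcf(runs):
    """Nonparametric MCF: at each failure time the increment is
    1 / (number of runs still under observation at that time)."""
    fail_times = sorted(t for _, fails in runs for t in fails)
    times, values, cum = [], [], 0.0
    for t in fail_times:
        at_risk = sum(1 for end, _ in runs if end >= t)
        cum += 1.0 / at_risk
        times.append(t)
        values.append(cum)
    return times, values

for t, v in zip(*mcf(runs)):
    print(f"{t:7.1f} min  MCF = {v:.2f}")
# A roughly straight MCF suggests neither improvement nor degradation with run time.
```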
5 GENERAL RENEWAL PROCESS

Advances in the treatment of repairable systems' data analysis permit the fitting of a parametric model to the factory data (Mettas and Wenbiao 2005). The data provided by the factory meet the two primary assumptions:

1. The time to first failure (TTFF) distribution is known and can be estimated from the data. There are over 2,000 failure events within the dataset. The Weibull probability plot shows a beta of approximately 0.6; the fit of the two-parameter Weibull was done with rank regression on X using median ranks. A beta of less than one indicates a system that has a decreasing failure rate over time. This suggests that the repairs made during the earlier part of the run assist in preventing future failures.
2. The repair time is negligible relative to the run time. Most repairs occur within 1 minute of failure occurrence, which is negligible compared to the average runtime of approximately 45 minutes. The fit of the repair times was done within Weibull++ using rank regression on X and median ranks to fit the lognormal distribution. The plot shows that approximately 50% of the repairs are accomplished within one minute and approximately 90% within 10 minutes. While a larger difference between runtime and repair time would be desirable, the single order of magnitude difference is sufficient for this analysis. (A bare-bones sketch of both fits follows below.)
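The following sketch outlines the two fits described in the assumptions: a two-parameter Weibull fit to the uptimes by rank regression on X with Benard's median ranks, and a lognormal fit to the repair times. The data arrays are placeholders, and the lognormal fit here uses a simple moments fit of ln(t) in place of the rank-regression fit performed in Weibull++.

```python
import math

def median_ranks(n):
    # Benard's approximation of median ranks for ordered samples 1..n
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

def weibull_rrx(times):
    """Two-parameter Weibull via rank regression on X. Returns (beta, eta)."""
    t = sorted(times)
    x = [math.log(ti) for ti in t]                               # ln(t)
    y = [math.log(-math.log(1 - F)) for F in median_ranks(len(t))]
    n = len(t)
    mx, my = sum(x) / n, sum(y) / n
    # regress x on y: x = a + b*y, with b = 1/beta and a = ln(eta)
    b = sum((yi - my) * (xi - mx) for xi, yi in zip(x, y)) / sum((yi - my) ** 2 for yi in y)
    a = mx - b * my
    return 1.0 / b, math.exp(a)

def lognormal_fit(times):
    """Lognormal fit from mean/std of ln(t); expected value is exp(mu + sigma^2/2)."""
    logs = [math.log(t) for t in times]
    mu = sum(logs) / len(logs)
    sigma = (sum((v - mu) ** 2 for v in logs) / (len(logs) - 1)) ** 0.5
    return mu, sigma, math.exp(mu + sigma ** 2 / 2)

# Placeholder data (minutes); the actual analysis used the full 6-month event history.
uptimes = [5, 12, 18, 30, 44, 60, 95, 140, 260, 400]
repairs = [0.4, 0.6, 0.8, 1.0, 1.2, 2.0, 3.5, 6.0, 9.0, 15.0]

beta, eta = weibull_rrx(uptimes)
mu, sigma, expected_repair = lognormal_fit(repairs)
print(f"Weibull beta = {beta:.2f}, eta = {eta:.1f} min; "
      f"lognormal expected repair = {expected_repair:.2f} min")
```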
The general renewal process model uses a concept of virtual age. Let t_1, t_2, …, t_n represent the successive failure times, and let x_1, x_2, …, x_n represent the times between failures, where

t_i = Σ_{j=1}^{i} x_j    (5)

For the Type II model of the General Renewal Process the virtual age is determined with equation (6),

v_i = q(v_{i-1} + x_i) = q^i x_1 + q^{i-1} x_2 + ⋯ + q x_i    (6)

where v_i is the virtual age of the system right after the ith repair. Depending on the value of q, the model permits partial improvement of the system by adjusting the apparent system age.

The power law function models the rate of recurrent failures within the system,

λ(t) = λ β t^(β-1)    (7)

and the conditional pdf is

f(t_i | t_{i-1}) = λ β (x_i + v_{i-1})^(β-1) exp( -λ [ (x_i + v_{i-1})^β - v_{i-1}^β ] )    (8)

For further details on the derivation and fitting algorithms for this model see (Mettas and Wenbiao 2005).
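To make the virtual-age bookkeeping concrete, here is a small sketch that evaluates the Type II recursion of equation (6) and the conditional pdf of equation (8). It only evaluates the model for assumed parameter values and illustrative inter-failure times; it does not perform the maximum likelihood fit that Weibull++ carries out (the actual fitted values appear in the next section).

```python
import math

def virtual_ages(x, q):
    """Kijima Type II virtual ages, eq. (6): v_i = q * (v_{i-1} + x_i)."""
    v, ages = 0.0, []
    for xi in x:
        v = q * (v + xi)
        ages.append(v)
    return ages

def conditional_pdf(x_i, v_prev, lam, beta):
    """Conditional pdf of the next failure time, eq. (8)."""
    a = x_i + v_prev
    return lam * beta * a ** (beta - 1) * math.exp(-lam * (a ** beta - v_prev ** beta))

# Placeholder parameter and data values for illustration only.
lam, beta, q = 2.09, 0.27, 0.38
x = [5.0, 12.0, 30.0, 60.0]        # times between failures, minutes
ages = virtual_ages(x, q)

for i, (xi, vi) in enumerate(zip(x, ages), start=1):
    v_prev = ages[i - 2] if i > 1 else 0.0
    print(f"failure {i}: x = {xi:5.1f}  virtual age = {vi:6.2f}  "
          f"conditional pdf = {conditional_pdf(xi, v_prev, lam, beta):.6f}")
```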
5.1 Analysis

Within the Weibull++ software algorithms for modeling recurrent event data, there are two models available. The Type I model assumes the repair only addresses the immediate failure, whereas the Type II model assumes the repair partially or completely repairs, or possibly improves, the system, not just fixing the immediate fault. Given the nature of fixes on the production line, which often include equipment adjustments (alignment, timing, etc.), we use the Type II model for this analysis.

Weibull++, using the General Renewal Process Type II three-parameter model, accomplishes the fit. The results are:

Beta = 0.27
Lambda = 2.09
q = 0.38

The third parameter, q, may be considered an index for repair effectiveness. A value of q = 0 represents a perfect repair, an 'as good as new' state. A value of q = 1 represents a minimal repair, permitting the use of non-homogeneous Poisson process analysis (MTBF), where the system is considered in an 'as bad as old' state. The model permits the repair to only partially improve the system, 0 < q < 1, an imperfect repair. The fitted q = 0.38 indicates that in general the repairs make a slight improvement.

5.2 Discussion

The plot of cumulative failure intensity vs. time shows the rapid improvement in equipment performance after the early failures receive attention. Note the jog upward in the data at approximately 500 minutes, where two plant behaviors contribute to this. First, a significant number of runs are scheduled to occur over one shift, which is 480 minutes long. Second, the shift change incurs a change of personnel, and during the shift briefing time the line is administratively shut down. The restart incurs additional failures and adjustments.

After approximately two shifts, or 1000 minutes of running, the equipment tends to run smoothly and repairs do not improve or degrade the equipment performance.

5.3 Model Use

The GRP model permits us to determine the cumulative, instantaneous and conditional failure intensities at a given time and duration of our choosing. This addresses the desire to determine the equipment availability and throughput for specific run durations.

Using the quick calculation pad within Weibull++ for the fitted data, we can calculate the cumulative and instantaneous failure intensities at select durations or times, respectively. The following table summarizes the failure intensity calculations (failures per minute):

Minutes                            120      240      480      960      1440
Cumulative Failure Intensity       0.1395   0.1077   0.0865   0.0754   0.0706
Instantaneous Failure Intensity    0.0482   0.0377   0.0307   0.0267   0.0288

Considering that the MTBF is the inverse of the failure intensity, we can calculate the MTBF values (minutes) for specific durations or instants.

Minutes                  120     240     480     960     1440
Cumulative MTBF          7.17    9.29    11.56   13.26   14.16
Instantaneous MTBF       20.75   26.53   32.57   37.45   34.72

Using the MTBF values above along with the MTTR value of 2.45 minutes, determined as the expected value of the fitted lognormal distribution, we can use the availability formula (3) to determine the expected availability values for select durations or instants.

Minutes                        120    240    480    960    1440
Cumulative Availability        0.75   0.79   0.83   0.84   0.85
Instantaneous Availability     0.89   0.92   0.93   0.94   0.93

Finally, using the equation for expected throughput, equation (4), we can determine the expected production (bottles per minute) for various durations of runs. The instantaneous throughput provides information on the improving nature of the system over time.

Minutes                       120   240   480   960   1440
Cumulative Throughput         283   301   314   321   324
Instantaneous Throughput      340   348   353   357   355
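The arithmetic behind these tables is straightforward once the failure intensities are read from the fitted GRP model (the Weibull++ quick calculation pad supplies them in practice). A minimal sketch follows; the MTTR of 2.45 minutes comes from section 2.2, and the throughput rows appear consistent with a base rate of 380 bottles per minute (the long-run average from section 2.4) rather than the 400 bottles per minute nominal average, so 380 is used here.

```python
# Failure intensities (failures per minute) for selected run durations,
# read from the fitted GRP model (values from the tables above).
durations = [120, 240, 480, 960, 1440]
cumulative_intensity    = [0.1395, 0.1077, 0.0865, 0.0754, 0.0706]
instantaneous_intensity = [0.0482, 0.0377, 0.0307, 0.0267, 0.0288]

MTTR = 2.45       # minutes, expected value of the fitted lognormal repair distribution
FILL_RATE = 380   # bottles/minute; the tabulated throughput appears to use this base rate

def summarize(label, intensities):
    print(label)
    for d, rho in zip(durations, intensities):
        mtbf = 1.0 / rho                 # MTBF is the inverse of the failure intensity
        avail = mtbf / (mtbf + MTTR)     # availability, eq. (3)
        thru = FILL_RATE * avail         # throughput, eq. (4)
        print(f"  {d:4d} min: MTBF {mtbf:6.2f}  availability {avail:.2f}  throughput {thru:4.0f}")

summarize("Cumulative:", cumulative_intensity)
summarize("Instantaneous:", instantaneous_intensity)
```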
6 ANALYSIS

With the improvement in calculating the changing nature of the filler's MTBF, we are now able to determine the potential impact on finished goods inventory reduction. The comparison of the current short run performance to the potential performance provides a basis for the potential inventory reduction.

6.1 Inventory vs. Throughput

When analyzing the opportunity of increasing throughput by improving the line availability, we are able to determine the potential inventory savings using an application of Little's Law:

Finished Goods Inventory = Throughput × Flow Time    (9)

Little's Law (Silver et al. 1998) can be applied to evaluate the tradeoff between the throughput and the inventory cost. It is clear that increasing throughput while holding flow time constant will take less runtime to build the same amount of finished goods.

Length of run (minutes)            120    240    480    960    1440
Time to build 1000 units (min)     3.53   3.33   3.19   3.12   3.09
% Improvement with 380/min         25.5   20.9   17.5   15.6   14.7

This suggests a 20% reduction in the time to produce the same amount of finished goods for a four-hour duration run. Of course, this is only possible if the equipment improvements permit the filler to have the same average throughput over a 4-hour run as the long-run average throughput of 380 bottles per minute. The reduced runtime values permit the reduction in finished goods, as the increased capacity of the factory permits the factory to replenish the inventory more often.

The cost savings in inventory provides a basis for the engineering improvement project. If the engineering team expects to make improvements to achieve four-hour runs with a 380 bottles/minute throughput, they may achieve at least a 20% reduction in inventory. Assume the cost to carry the inventory for a year is $20 million within this site. This suggests the engineering team can spend $5 million for improvements and achieve a one-year payback on the investment.
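The time-to-build comparison in the table above follows directly from the cumulative throughput values of section 5.3 and the 380 bottles per minute long-run average. A quick check, as a sketch only:

```python
# Time to build 1000 units at the cumulative throughput for each run duration,
# compared against the long-run average throughput of 380 bottles/minute.
durations = [120, 240, 480, 960, 1440]              # minutes
cumulative_throughput = [283, 301, 314, 321, 324]   # bottles/minute, from section 5.3
UNITS = 1000
REFERENCE = 380                                      # bottles/minute

for d, thru in zip(durations, cumulative_throughput):
    t_now = UNITS / thru                             # minutes per 1000 units today
    t_ref = UNITS / REFERENCE                        # minutes per 1000 units at 380/min
    improvement = 100 * (t_now - t_ref) / t_now      # % reduction in build time
    print(f"{d:4d} min run: {t_now:.2f} min per 1000 units, "
          f"{improvement:.1f}% improvement at 380/min")
```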
7 CONCLUSION

The results show the lack of accuracy of the existing method when evaluating equipment availability using traditionally calculated MTBF and MTTR. The traditional method has only one, non-time-dependent estimate for MTBF. In order to provide a better overall analysis of equipment availability and throughput, include within the analysis a time dependence variable such as run duration. The GRP model permits such an analysis.

As seen in the calculations using the GRP model, it takes approximately 4 hours (240 minutes) of runtime to stabilize the instantaneous availability and throughput. Engineering changes to the equipment that either accelerate or improve the initial performance, effectively eliminating the first four hours of adjustments, will permit the line to run more efficiently with short runs. Simply implementing shorter runs will not achieve the goal without fundamental changes to the production equipment.

Running more effectively permits the reduction of finished goods inventory by as much as 20% for a 4-hour run. Further inventory reduction is also possible due to the additional capacity of the factory and is not considered in this analysis. The cost savings associated with the inventory reduction provides a boundary for the improvement costs.

8 REFERENCES
1. Mettas, A. and Z. Wenbiao (2005). Modeling and analysis of repairable systems with general repair. Reliability and Maintainability Symposium, 2005. Proceedings, Annual.
2. Trindade, D. and S. Nathan (2006). Simple plots for monitoring the field reliability of repairable systems. Reliability and Maintainability Symposium, 2006. Proceedings, Annual.
3. Silver, E., D. Pyke, and R. Peterson (1998). Inventory Management and Production Planning and Scheduling, 3rd Ed. Wiley, New York, 1998.

9 BIOGRAPHIES

Fred Schenkelberg
Ops A La Carte, LLC
990 Richard Avenue, Suite 101
Santa Clara, CA 95050, USA
e-mail: fms@opsalacarte.com

Fred Schenkelberg is a reliability engineering and management consultant with Ops A La Carte, with areas of focus including reliability engineering management training and accelerated life testing. Previously, he co-founded and built the HP corporate reliability program, including consulting on a broad range of HP products. He is a lecturer with the University of Maryland teaching a graduate level course on reliability engineering management. He earned a Master of Science degree in statistics at Stanford University in 1996. He earned his bachelor's degree in Physics at the United States Military Academy in 1983. Fred is an active volunteer with the management committee of RAMS, is currently the Chair of the American Society for Quality Reliability Division, is active at the local level with the Society of Reliability Engineers, IEEE's Reliability Society, and IEEE reliability standards development teams, and recently joined the US delegation as a voting member of the IEC TAG 56 - Durability. He is a Senior Member of ASQ and IEEE. He is an ASQ Certified Quality and Reliability Engineer.

Angela Lo
7313 Shelter Creek Lane
San Bruno, CA 94066, USA
e-mail: angelalo928@gmail.com

Angela Lo is a Senior Financial Analyst at Kaiser Permanente – South San Francisco Medical Office. In her current role, she provides operational analysis and process improvement recommendations to front office operations. Prior to this position, she worked for several domestic and international companies with focus areas in supply chain management, operations improvement, and six sigma initiatives. Her knowledge in process improvement was utilized not only in manufacturing operations but also in service environments. She earned her bachelor's degree in Industrial Engineering and Operations Research at the University of California, Berkeley in 2005 and her master's degree in Industrial and Systems Engineering at San Jose State University in 2007. She also obtained her Six Sigma Black Belt Certification through the American Society for Quality in 2009. Angela is currently an active member of the American Society for Quality.