This document discusses the concept of supply chains as filters and proposes an explicit filter methodology for supply chain design. It begins by introducing the explicit filter approach and comparing it to previous implicit filter approaches. It then discusses key concepts like the ideal filter and how supply chains can be designed to balance capacity requirements and stock fluctuations. The document also discusses how the explicit filter approach can simplify decision support system selection and aggregate planning. It provides historical context on previous operations research approaches and revisits the classic HMMS algorithm from a frequency domain perspective.
This paper provides an introduction to data envelopment analysis (DEA) and important extensions that have improved its effectiveness as a productivity analysis tool. DEA is a multi-factor productivity model that measures the relative efficiencies of decision making units with multiple inputs and outputs. Extensions discussed include benchmarking, performance ranking, weight restrictions, and analyzing efficiency changes over time. The paper concludes that DEA is a useful tool for evaluating performance in manufacturing and services, though the models can be sensitive and require consideration of sample size and factor selection.
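The core DEA formulation that these extensions build on, the input-oriented CCR model, is a linear program solved once per decision making unit. A minimal sketch using `scipy.optimize.linprog`, with invented toy data (the paper itself gives no numerical example here):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) outputs.
    Decision variables are output weights u and input weights v."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[k], np.zeros(m)])             # maximize u . Y[k]
    A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]  # v . X[k] == 1
    A_ub = np.hstack([Y, -X])                            # u.Y[j] - v.X[j] <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Toy data: 3 DMUs, one input, one output
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [4.0], [1.5]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])
```

An efficiency of 1.0 marks a DMU on the frontier; here the third unit produces only half the output per unit input of its peers and scores 0.5.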
The document discusses how storytelling can be used in health apps to promote behavior change. It provides examples of apps that have used storytelling effectively, such as Talkspace which uses framing, Carrot Fit which uses memorable stories, and Zombies Run! which uses engaging stories. The document also discusses different ways stories can be integrated into apps, such as telling traditional stories, letting data tell stories, or enabling users to tell their own stories. Overall, it argues that storytelling is an effective technique because it is how humans naturally make sense of their experiences.
The summary describes a lesson for first-grade students on the use of ordinal numbers. The teacher created a didactic situation using coloured abacuses so that the students could compare the number of rings in two piles. She then divided the students into teams and gave them abacuses for counting rings, recording the results in a table. Finally, the teams compared results with one another and reinforced the lesson using a digital table.
BizWare Int'l is a software business solutions provider founded in 2006 that provides solutions in Egypt, the Middle East, and worldwide. It offers business analytics, decision support systems, budget and planning solutions, and text mining solutions for Arabic. Some of its clients include Ibn Sina Pharma, AstraZeneca Egypt, Misr Insurance, and Hero MEA. BizWare Int'l has locations in Cairo, Egypt, Ajman, UAE, and Toronto, Canada.
1) The document describes a study that uses a causal loop diagram (CLD) to model the impact of electronic data interchange (EDI) implementation on an operations system. The CLD identifies several feedback loops involving factors like error rate, work pressure, costs, and profit.
2) Implementing EDI reduces paperwork and the potential for errors, but increasing EDI use also raises IT costs. Higher error rates can increase costs and lower customer satisfaction and profit. This can increase work pressure and further raise error rates.
3) The CLD captures these complex relationships and feedback effects to provide insights into how changes in one part of the system, like implementing EDI, can reverberate through the entire operations system.
Stock Decomposition Heuristic for Scheduling: A Priority Dispatch Rule Approach (Alkis Vazacopoulos)
This article highlights a closed-shop scheduling heuristic that adapts the traditional priority dispatch rule approach found in open-shop scheduling such as job-shop scheduling. Instead of prioritizing and scheduling one job or project (or stock-order) at a time, we schedule one stock or stock-group at a time, where a stock-group is a collection of individual stocks and their stock-orders. These stocks can be feed-stocks, intermediate-stocks, or product-stocks; we focus on product-stocks, given that most production is demand-driven. A key feature of the heuristic is its ability to compress the production network or superstructure so that only those unit-operations necessary to produce the stocks in question are included in the model, reducing the size of the problem considerably at each iteration. The stock-specific network compression technique uses what we call a unit-capacity transshipment linear program to successively determine which unit-operations are redundant when making a particular stock. The heuristic is particularly useful for process industries that can potentially produce many product-stocks but only produce a fraction of them within the scheduling horizon: at solve time the model is reduced to include only the demanded stocks, and redundant unit-operations are removed. An illustrative example with recycle loops (i.e., stock flow-reversals) and shared units or equipment (i.e., unit flow-reversals) demonstrates the effectiveness and efficiency of the technique.
This document analyzes the functions approach to studying innovation systems, using the California wind energy innovation system (CAWEIS) as a case study. It presents a theoretical framework that maps the components, structure, and functions of an innovation system over time. The framework allows comparison of different systems and insight into how system structure relates to performance. The document applies this framework to analyze CAWEIS over 30 years, dividing it into 5 periods based on key events. It identifies the system's components, maps how the structure changed over time, and analyzes how 7 key functions of the system, like entrepreneurship and knowledge development, were fulfilled in each period.
This paper discusses Motorola's challenge of dynamically reassigning channels (frequencies) for cell phone towers without disrupting network operations. The problem can be modeled as a graph coloring problem, where towers are nodes that must be assigned different colors (channels) from neighboring nodes. The paper proposes an algorithm for dynamic reallocation based on direct search and random coloring to solve Motorola's problem of changing all towers to new channels one by one while maintaining interference constraints. It evaluates how dynamic channel assignment can address problems of frequent network changes in mobile phone providers.
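The paper's own direct-search and random-coloring algorithm is not reproduced here, but the underlying graph-coloring constraint, that neighbouring towers must receive different channels, can be illustrated with a simple greedy baseline:

```python
def greedy_channel_assignment(adjacency):
    """Assign each tower the smallest channel not already used by any
    coloured neighbour (classic greedy graph colouring)."""
    channels = {}
    for tower in sorted(adjacency):          # deterministic visiting order
        taken = {channels[n] for n in adjacency[tower] if n in channels}
        ch = 0
        while ch in taken:                   # smallest free channel
            ch += 1
        channels[tower] = ch
    return channels

# Four towers in a ring: opposite towers may safely reuse a channel
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assignment = greedy_channel_assignment(adj)
print(assignment)
```

The dynamic version studied in the paper is harder because towers must be moved to new channels one by one while the interference constraint holds at every intermediate step.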
This document presents an approach for improving maintenance policies for multi-state systems. It first formalizes the transition process of a multi-state system using dynamic Bayesian networks. It then exhibits a cost function for preventive maintenance and an optimization method using reinforcement learning to identify the best combination of transition rates and preventive maintenance policy. The dynamic Bayesian network approach models the probability distributions of the system's state over time and allows for more compact representation compared to Markov chains. The reinforcement learning optimization seeks to minimize cost and maximize availability by learning the optimal preventive maintenance levels over the system's lifetime.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Chan supply chain coordination literature review (Fred Kautz)
This document reviews over 100 research papers on coordination studies in the context of supply chain dynamics from the last decade. The papers are categorized into two broad approaches: analytical approaches and simulation approaches. Analytical approaches use modified deterministic models to incorporate uncertainties via scenarios. Simulation approaches use simulations to model supply chain dynamics since traditional analytical models cannot capture system behaviors. The review aims to understand coordination strategies under different methodologies and identify insights for future research.
USE OF ADAPTIVE COLOURED PETRI NETWORK IN SUPPORT OF DECISION MAKING (csandit)
This work presents the use of the Adaptive Coloured Petri Net (ACPN) in support of decision making. The ACPN is an extension of the Coloured Petri Net (CPN) that allows the network topology to change. Experts in a particular field can usually establish a set of rules for the proper functioning of a business or even a manufacturing process. The same specialists, however, may find it difficult to incorporate those rules into a CPN that describes and follows the operation of the enterprise while also adhering to the rules of good performance. To incorporate the expert's rules into a CPN, the rule set is transformed from the IF-THEN format into the extended adaptive decision table format and then dynamically incorporated into the ACPN. The contribution of this paper is the use of the ACPN to establish a method that carries proven procedures from one area of knowledge (decision tables) into another (Petri nets and workflows), making it possible to adapt techniques and paving the way for new kinds of analysis.
This document discusses flexibility options for electricity systems with increasing levels of variable renewable energy. It defines flexibility as the ability to maintain power system balance despite fluctuations in supply and demand. Variable renewables like wind and solar increase flexibility needs by adding variability and uncertainty to supply, while also reducing existing flexible capacity as they displace traditional generation. A wide range of flexibility options are assessed, including supply, demand, storage, network and system options. Different options are best suited to different timeframes, from short-term balancing to long-term planning. Barriers to flexibility deployment include market design challenges around incentivizing flexibility investments.
This document summarizes a study that developed a Holonic Workforce Allocation Model (HWM) to reduce the impact of absenteeism and turnover in job shop environments. HWM is based on the Holonic Manufacturing System (HMS) paradigm and uses a weighted random formulation to allocate workers to tasks. The formulation considers factors like worker skills, task urgency, and cross-training opportunities. Computer simulations tested HWM against other models and found it more effectively minimized late tasks, improved average skills, and provided balanced workloads and cross-training while maintaining productivity. The study was motivated by the importance of workforce issues to HMS and the need to address absenteeism and turnover that can derail production plans.
Path-loss prediction of GSM signals in Warri (onome okuma)
This document provides background information on wireless communication and propagation models. It discusses how wireless communication has evolved from early forms using drums and smoke signals to modern cellular networks. The cellular principle is described, where coverage areas are divided into cells served by low-power transmitters to improve capacity and frequency reuse. Frequency reuse allows the same frequencies to be used in cells spaced apart without interference. The document aims to compare the accuracy of free-space and HATA propagation models in urban, suburban, and rural areas of Warri, Nigeria.
The document summarizes the Technology Acceptance Model (TAM) and Technology-Organization-Environment (TOE) framework for understanding technology adoption. TAM models how users come to accept new technologies based on perceived usefulness and ease of use. It is built on expectancy-value theory and the theory of reasoned action. TOE framework examines how three contexts - technological, organizational, and environmental - influence organizations' decisions to adopt innovations. The document provides details on the key constructs of each model and examples of how they have been applied to study different technologies and industries.
1 MODULE 1 INTRODUCTION TO SIMULATION Module out.docx (jeremylockett77)
MODULE 1: INTRODUCTION TO SIMULATION
Module outline:
• What is Simulation?
• Simulation Terminology
• Components of a System
• Models in Simulation
• Typical applications
• References
WHAT IS SIMULATION?
Simulation may be defined as a technique that imitates the operation of a real-world system or process as it evolves over time. It involves generating an artificial history of the system and observing that artificial history to obtain information and draw inferences about the operating characteristics of the real system. Simulation educates us on how a system operates and how it might respond to changes. It enables us to test alternative courses of action to determine their impact on system performance. Before an alternative is implemented, it must be tested. Although performing tests with the "real thing" would be ideal, this is seldom practically feasible. The cost associated with changing or improving a system may be very high, both in terms of the capital required to implement the change and the losses due to interruptions in production operations. In most cases, experimentation with the proposed alternative is practically impossible; moreover, as the cost of proposed changes (alternative solutions) increases, so does the cost of physically experimenting.

As an example, suppose a heavy-duty conveyor is being considered as an alternative to the existing material-handling method (by truck) for improving productivity and speeding up production operations in a factory (see Figure 3). Installing the proposed conveyor on a test basis would probably not be cost effective, so experimentation with alternative physical configurations would be practically impossible. Instead, experimentation with a representative model of the system would probably make more sense.

Simulation is a means of experimenting with a detailed model of a real system to determine how the system will respond to changes in its environment, structure, and underlying assumptions [Harrel (1996)]. Management scientists use a wide variety of analytical tools to model, analyze, and solve complex decision problems. These tools include linear programming, decision analysis, forecasting, queuing theory, and simulation.
[Figure 3: Two material-handling alternatives between Point A (warehouse) and Point B (factory). Alternative 1: use a lift-truck. Alternative 2: use a conveyor.]
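The truck-versus-conveyor comparison above is exactly the kind of question a simple model can answer before any capital is committed. A minimal Monte Carlo sketch, where the cycle-time distributions are invented purely for illustration:

```python
import random

def simulate_transfers(n_loads, cycle_time, seed=42):
    """Total time to move n_loads from Point A (warehouse) to
    Point B (factory), sampling each load's cycle time."""
    rng = random.Random(seed)
    return sum(cycle_time(rng) for _ in range(n_loads))

# Hypothetical cycle-time distributions, in minutes per load
truck = lambda rng: rng.uniform(8.0, 14.0)    # variable round trips
conveyor = lambda rng: rng.uniform(4.5, 5.5)  # steady transfer rate

t_truck = simulate_transfers(500, truck)
t_conveyor = simulate_transfers(500, conveyor)
print(f"truck: {t_truck:.0f} min, conveyor: {t_conveyor:.0f} min")
```

Generating the artificial history costs a few lines of code rather than a test installation; richer discrete-event models add queues, breakdowns, and shift schedules in the same spirit.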
An integrated inventory optimisation model for facility location allocation p... (Ramkrishna Manatkar)
This document presents a mathematical model for an integrated inventory optimization problem for a multi-echelon supply chain network. The model considers inventory, transportation and location decisions with the objectives of minimizing total inventory holding and transportation costs while meeting customer service level requirements. The model is formulated as a multi-objective non-linear integer programming problem to determine optimal assignments of retailers to distribution centers, safety stock levels at each facility, regular stock levels, and maximum inventory levels at each echelon. The model is tested on real data from steel industry supply chains to provide practical guidelines for inventory management and distribution network design.
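The paper's full multi-objective formulation is not reproduced here, but the safety-stock decision it optimizes rests on a standard result from inventory theory, which can be sketched as follows (the numbers are illustrative, not from the paper):

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, demand_sd, lead_time):
    """Textbook safety stock under normally distributed demand:
    z * sigma_d * sqrt(L), with z the service-level quantile."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_sd * sqrt(lead_time)

# 95% service level, daily-demand std. dev. of 40 units, 4-day lead time
ss = safety_stock(0.95, 40.0, 4.0)
print(round(ss, 1))
```

The optimization in the paper trades this holding-cost buffer at each echelon against transportation costs and retailer-to-distribution-center assignments.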
An Activity Based Costing-Based A Case Study Of A Taiwanese Gudeng Precisio... (Kelly Lipiec)
This document discusses implementing an activity-based costing (ABC) system at a Taiwanese semiconductor process equipment factory (SPEF) to more accurately estimate product costs. Currently, the SPEF uses a traditional costing system that does not precisely reflect production costs. The ABC system would assign costs to products based on the activities and resources required to produce them. An analysis of the SPEF's current cost structure is provided. The document proposes developing an ABC model for the SPEF and comparing the results to the traditional system to analyze benefits and obstacles of the ABC approach.
IJPR (2015) A Distance-based Methodology for Increased Extraction Of Informat... (Nicky Campbell-Allen)
This document describes a new methodology for incorporating information from the roof matrices in Quality Function Deployment (QFD) studies. The roof matrices contain correlations between customer requirements (voice of customers) and technical characteristics, but existing methods for including this information in QFD analyses have limitations. The proposed new methodology uses the Manhattan Distance Measure to integrate roof matrix correlation data into the final weightings of technical characteristics. This provides a more consistent way to select technical characteristics by identifying those that are negatively or positively correlated. The methodology is demonstrated using a published QFD case study.
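The paper's full weighting methodology is not reproduced here, but the Manhattan Distance Measure at its core is simply the sum of absolute coordinate differences. A minimal illustration on two hypothetical correlation profiles:

```python
def manhattan(a, b):
    """Manhattan (L1) distance: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Hypothetical roof-matrix correlation profiles of two technical characteristics
tc1 = [0.9, -0.3, 0.0, 0.5]
tc2 = [0.1, 0.4, -0.2, 0.5]
print(manhattan(tc1, tc2))
```

A small distance indicates two technical characteristics with similar correlation patterns, which is the kind of signal the methodology folds into the final weightings.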
The document summarizes key concepts related to failure and repair rates in manufacturing industries. It defines reliability as the probability a system will perform as intended without failure for a given period of time. Availability accounts for both reliability and how quickly a system can be repaired. It also defines failure rate, repair rate, and different types of availability like point availability and mean availability. Maintainability is defined as how easily and quickly a system can be restored after failure.
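These definitions translate directly into the textbook formulas; a small sketch under the usual constant-failure-rate (exponential) assumption, not tied to the document's own notation:

```python
from math import exp

def availability(mtbf, mttr):
    """Steady-state availability: uptime fraction MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def reliability(t, mtbf):
    """Probability of surviving to time t under a constant failure
    rate lambda = 1 / MTBF (exponential failure model)."""
    return exp(-t / mtbf)

# 200 h mean time between failures, 8 h mean time to repair
print(round(availability(200.0, 8.0), 4))
print(round(reliability(100.0, 200.0), 4))
```

Availability improves either by failing less often (raising MTBF) or by repairing faster (lowering MTTR), which is why it captures maintainability as well as reliability.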
Investigating and Classifying the Applications of Flexible Manufacturing Syst... (IOSR Journals)
The recent manufacturing environment is characterized by diverse products due to mass customization, short production lead times, and unstable customer demand. Today, the need for flexibility, quick responsiveness, and robustness to system uncertainties in production scheduling decisions has increased significantly. In traditional job shops, tooling is usually assumed to be a fixed resource. However, when the tooling resource is shared among different machines, greater product variety and routing flexibility can be realized with a smaller tool inventory. Such a strategy is usually enabled by an automatic tool-changing mechanism and a tool delivery system that reduce the time for tooling setup, allowing parts to be processed in small batches. In this research, a dynamic scheduling problem under flexible tooling resource constraints is studied. An integrated approach is proposed to allow two levels of hierarchical, dynamic decision making for job scheduling and tool flow control in automated manufacturing systems. It decomposes the overall problem into a series of static subproblems for each scheduling window, handles random disruptions by updating job ready times, completion times, and machine status on a rolling-horizon basis, and considers machine availability explicitly when generating schedules. Two types of manufacturing system models are used in simulation studies to test the effectiveness of the proposed dynamic scheduling approach. First, hypothetical models are generated using generic shop flow structures (e.g., flexible flow shops, job shops, and single-stage systems) and configurations (Insup, Um, et al., 2009); they are tested to provide empirical evidence about how well the proposed approach performs for general automated manufacturing systems where parts have alternative routings. Second, a model based on a real industrial flexible manufacturing system is used to test the effectiveness of the proposed approach when machine types, part routing, tooling, and other production parameters closely mimic real flexible manufacturing operations.
This document summarizes a research paper about optimally managing lead times in supply chains. The paper studies a serial inventory system where units can be shipped from any stage to any downstream stage, providing flexibility. This is compared to traditional systems where units must progress sequentially through stages. The paper shows that optimal policies have a threshold-based structure called "extended echelon base stock policies." These involve a threshold for shipping from each origin stage to each destination stage. The thresholds can be computed by decomposing the problem into single-unit subproblems. Overall, the paper characterizes optimal policies for dynamically adjusting lead times in multi-echelon supply chains.
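The paper's extended threshold policies are not reproduced here, but the ordinary echelon base-stock (order-up-to) rule they generalize can be sketched in one line:

```python
def base_stock_order(echelon_position, base_stock_level):
    """Order-up-to rule: order enough to raise the echelon inventory
    position (on hand + in transit - backorders) up to level S."""
    return max(0, base_stock_level - echelon_position)

print(base_stock_order(35, 50))  # position is 15 units below S
print(base_stock_order(60, 50))  # position already above S: order nothing
```

The extended policies add one such threshold per origin-destination pair of stages, so a unit can skip intermediate stages when downstream inventory runs low.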
An Updated Analysis of Optimal Carbon Pricing (Jeff Mollins)
This document summarizes an analysis that updates an earlier model of optimal carbon pricing using more recent data. The analysis uses an intertemporal model that considers both flow and stock externalities of carbon emissions. It extends the previous model by including the effects of carbon sinks and more realistic assumptions about emission flow externalities. The results indicate that optimal carbon taxes have increased significantly since the earlier analysis and that higher levels of emissions abatement are needed to decrease net emissions, likely due to a lower long-term price elasticity of demand for fossil fuels. Sensitivity analysis shows variation in optimal policy and delay times based on the cost of abatement.
enables us to test alternative courses of action to determine their impact on system
performance. Before an alternative is implemented, it must be tested. Although
performing tests with the “real thing” would be ideal. This is seldom practically feasible.
The cost associated with changing/improving a system may be very high both in the
term of capital required to implement the change and losses due to interruption in
production operations and other losses. In most cases experimentation with the
proposed alternative is practically impossible. In addition, as the cost of proposed
changes (alternative solutions) increase, so does the cost of physically experimenting.
As an example, suppose a heavy-duty conveyor is being considered as an alternative to
the existing material handling method (by trucks) for improving productivity and
speeding up the production operations in a factory (seeFigre3). It is obvious that
installing the proposed conveyor on a test basis would probably not be cost effective.
Therefore, experimentation with alternative configurations would be practically
impossible. In stead, experimentation with a representative model of the system would
probably make more sense.
Simulation is a means of experimenting with a detailed model of a real system to
Determine how the system will respond to changes in its environment, structure, and its
underlying assumption [Harrel (1996)]. Management Scientist uses a wide variety of
analytical tools to model, analyze, and solve complex decision problems. These tool
include linear programming, decision analysis, forecasting, Queuing theory and
Alternative 1: Use lift-truck
2
Point A Point B
(Warehouse) (Factory)
Alternative 2: use a conveyor
Point A
(warehouse ) . . . . . . . . Point B
...
An integrated inventory optimisation model for facility location allocation p...Ramkrishna Manatkar
This document presents a mathematical model for an integrated inventory optimization problem for a multi-echelon supply chain network. The model considers inventory, transportation and location decisions with the objectives of minimizing total inventory holding and transportation costs while meeting customer service level requirements. The model is formulated as a multi-objective non-linear integer programming problem to determine optimal assignments of retailers to distribution centers, safety stock levels at each facility, regular stock levels, and maximum inventory levels at each echelon. The model is tested on real data from steel industry supply chains to provide practical guidelines for inventory management and distribution network design.
An Activity Based Costing-Based A Case Study Of A Taiwanese Gudeng Precisio...Kelly Lipiec
This document discusses implementing an activity-based costing (ABC) system at a Taiwanese semiconductor process equipment factory (SPEF) to more accurately estimate product costs. Currently, the SPEF uses a traditional costing system that does not precisely reflect production costs. The ABC system would assign costs to products based on the activities and resources required to produce them. An analysis of the SPEF's current cost structure is provided. The document proposes developing an ABC model for the SPEF and comparing the results to the traditional system to analyze benefits and obstacles of the ABC approach.
IJPR (2015) A Distance-based Methodology for Increased Extraction Of Informat...Nicky Campbell-Allen
This document describes a new methodology for incorporating information from the roof matrices in Quality Function Deployment (QFD) studies. The roof matrices contain correlations between customer requirements (voice of customers) and technical characteristics, but existing methods for including this information in QFD analyses have limitations. The proposed new methodology uses the Manhattan Distance Measure to integrate roof matrix correlation data into the final weightings of technical characteristics. This provides a more consistent way to select technical characteristics by identifying those that are negatively or positively correlated. The methodology is demonstrated using a published QFD case study.
The document summarizes key concepts related to failure and repair rates in manufacturing industries. It defines reliability as the probability a system will perform as intended without failure for a given period of time. Availability accounts for both reliability and how quickly a system can be repaired. It also defines failure rate, repair rate, and different types of availability like point availability and mean availability. Maintainability is defined as how easily and quickly a system can be restored after failure.
Investigating and Classifying the Applications of Flexible Manufacturing Syst...IOSR Journals
The recent manufacturing environment is characterized as having diverse products due to mass
customization, short production lead-time, and unstable customer demand. Today, the need for flexibility, quick
responsiveness, and robustness to system uncertainties in production scheduling decisions has increased
significantly. In traditional job shops, tooling is usually assumed as a fixed resource. However, when tooling
resource is shared among different machines, a greater product variety, routing flexibility with a smaller tool
inventory can be realized. Such a strategy is usually enabled by an automatic tool changing mechanism and tool
delivery system to reduce the time for tooling setup, hence allows parts to be processed in small batches. In this
research, a dynamic scheduling problem under flexible tooling resource constraints is studied. An integrated
approach is proposed to allow two levels of hierarchical, dynamic decision making for job scheduling and tool
flow control in Automated Manufacturing Systems. It decomposes the overall problem into a series of static subproblems
for each scheduling window, handles random disruptions by updating job ready time, completion
time, and machine status on a rolling horizon basis, and considers the machine availability explicitly in
generating schedules. Two types of manufacturing system models are used in simulation studies to test the
effectiveness of the proposed dynamic scheduling approach. First, hypothetical models are generated using
some generic shop flow structures (e.g. flexible flow shops, job shops, and single-stage systems) and
configurations(Insup,Um.,et al.,2009).They are tested to provide the empirical evidence about how well the
proposed approach performs for the general automated manufacturing systems where parts have alternative
routings. Second, a model based on a real industrial flexible manufacturing system was used to test the
effectiveness of the proposed approach when machine types, part routing, tooling, and other production
parameters closely mimic to the real flexible manufacturing operations.
This document summarizes a research paper about optimally managing lead times in supply chains. The paper studies a serial inventory system where units can be shipped from any stage to any downstream stage, providing flexibility. This is compared to traditional systems where units must progress sequentially through stages. The paper shows that optimal policies have a threshold-based structure called "extended echelon base stock policies." These involve a threshold for shipping from each origin stage to each destination stage. The thresholds can be computed by decomposing the problem into single-unit subproblems. Overall, the paper characterizes optimal policies for dynamically adjusting lead times in multi-echelon supply chains.
An Updated Analysis of Optimal Carbon PricingJeff Mollins
This document summarizes an analysis that updates an earlier model of optimal carbon pricing using more recent data. The analysis uses an intertemporal model that considers both flow and stock externalities of carbon emissions. It extends the previous model by including the effects of carbon sinks and more realistic assumptions about emission flow externalities. The results indicate that optimal carbon taxes have increased significantly since the earlier analysis and that higher levels of emissions abatement are needed to decrease net emissions, likely due to a lower long-term price elasticity of demand for fossil fuels. Sensitivity analysis shows variation in optimal policy and delay times based on the cost of abatement.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Towill, D.R., Lambrecht, M.R., Disney, S.M. and Dejonckheere, J., (2001), "Every supply chain is a filter", in “What really matters in Operations Research”, Proceedings of the 8th EUROMA Conference, Vol. 1, June 3-4, Bath, UK, pp. 401-411, ISBN 1 85790 088X.
Explicit Filters and Supply Chain Design
D.R. Towill (1), M.R. Lambrecht (2), S.M. Disney (1) and J. Dejonckheere (2)
(1) Logistics Systems Dynamics Group,
Cardiff Business School,
Cardiff University, Aberconway Building,
Colum Drive,
Cardiff, CF10 3EU, UK.
DisneySM@Cardiff.ac.uk
D.R. Towill Fax: +44(0)29 2084 2292
(2) Department of Applied Economics,
Katholieke Universiteit Leuven,
Naamsestraat 69,
B – 3000 Leuven,
Belgium.
Jeroen.Dejonckheere@econ.kuleuven.ac.be
Marc.Lambrecht@econ.kuleuven.ac.be
Abstract
Due to the complexity of present day supply chains, it is important to select the simplest
supply chain scheduling Decision Support System (DSS) which will determine and place
orders satisfactorily. We propose to use a generic design framework, termed the explicit filter
methodology, to achieve this objective. In doing so we compare the explicit filter approach to
the implicit filter approach utilised in previous OR research which focused on minimising a
cost function. Although the results may well be similar with both approaches, it is much
clearer to the designer why, and how, a scheduling system will reduce the Bullwhip Effect
when designed via the explicit filter approach.
Key Words: HMMS algorithm, explicit filter, aggregate planning, decision support systems
Introduction
Decision-making in manufacturing systems is a dynamic process based on accessing system
states including inventory levels and production rates. The basic principles involved in this
dynamic process are equally applicable to both the control of individual businesses, and to
complete supply chains. At the heart of the decision-making process is the desire to ensure
that the system correctly identifies and tracks genuine variations in demand (in order to
ensure high customer service levels). At the same time the decision-making process is
expected to detect and reject rogue variations in demand (in order to avoid excess costs due
to unnecessary ramping of production up/down and the associated swings between stock-outs
and excess stocks).
Supply chains are composed of a sequence of manufacturing (value added) systems extending
from the identification of customer need right through to that need being satisfied. Simulation
studies by Jay Forrester (1961) long ago established that the more extended the chain, the
worse its dynamic response. Hence global operations can be particularly at risk from
poor systems design. It has since been rigorously proved that supply chains can be
mathematically described by a set of individual manufacturing systems coupled together in a
precise and particular way. As a consequence of this proof, it has been shown possible to
predict the dynamic response for an entire traditional supply chain from a knowledge of the
filter characteristics of each component system without resort to simulation (Towill & Del
Vecchio, 1994). Furthermore, this supply chain configuration has also enabled the Law of
Industrial Dynamics originally established by Burbidge (1984) on an empirical basis, to be
put on a rigorous mathematical foundation.
The relevance of the explicit filter approach to supply chain design is emphasised by the
current move towards expanding global operations, the pressure to enable agile supply, and
the realisation that minimising a cost function at an individual echelon within the chain is not
the way to optimise response (or profitability) of the entire system. These pressures combine
to force the supply chain champion to adopt, market, and implement a holistic solution
(McHugh et al., 1995). A consequence is the realisation that it is the total supply chain that
competes for business (Christopher, 1997). So the focus is changing from local cost
minimisation to enabling the complete chain to operate smoothly and thereby effectively. DSS
have a major role to play in achieving this objective (Towill & McCullen, 1999). Therefore
selecting the appropriate DSS to suit the needs of the supply chain becomes a critical part of
the overall design process (Dejonckheere et al., 2000). It is the purpose of this paper to show
how choice of such a DSS is greatly simplified via the “explicit filter” approach.
OR and DSS Selection
The study of DSS has a long history particularly amongst the Operations Research
community. This dates back to the classic HMMS algorithm (Holt et al, 1960) and its many
variants and updates typified by Jones (1967), Bertrand (1986) and Lambrecht et al, (1982).
Corresponding mathematical analysis based on what has become known as the
servomechanism approach started with Tustin (1952). Early approaches concentrated on
attempting to control the dynamic response via placement of the system poles (i.e. roots of
the characteristic equation) using Laplace transforms (Simon, 1953) and, a short while later,
z-transforms (Vassian, 1954). The problem was subsequently re-cast in the form of
minimising a cost function composed of storage and production adaptation costs (Adelson,
1966 and Deziel & Eilon, 1967). Despite the considerable difficulty in establishing a realistic
cost model, this HMMS-style approach became extremely popular. We term it the implicit
filter design method because it concentrates on minimising the cost function in the hope that
the consequential filter performance is acceptable. Table I compares the salient features of
the OR and Filter approaches. Note that both could result in similar optimal designs for a
manufacturing system, but the routes to accomplishment are very different.
Our methodology is based on the same mathematical principles as the OR approach. However
we reverse the procedure since we are concerned upfront with explicit filter design in which
good dynamic performance is a prerequisite. The argument here is that cost control follows
from good dynamic design, ensuring a high customer service level concurrently with small
swings in capacity requirements. At the heart of the explicit filter design approach is the
complete understanding of how the system transfer function governs dynamic response. For
example, the dominant roots of the characteristic equation are an important factor in
determining system bullwhip, but they are only one factor. Thus a conservative placement of
the dominant poles of the feedback loop can fail to adequately dampen the bullwhip in the
presence of exponential smoothing of feed-forward demand (Towill, 1982). In system
engineering terms, pole-placement as a controller of dynamic response breaks down in the
presence of predictive elements within the system. In contrast, transfer function techniques
are all embracing. Hence given the transfer function of a manufacturing system the filter
characteristics are uniquely defined for all inputs. So the core design problem becomes
“given the expected inputs and desired outputs, what transfer function will deliver the
necessary performance?”
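To make this concrete, the filter characteristic implied by a discrete transfer function can be read off by evaluating it on the unit circle z = e^{jω}. The sketch below is our own illustration (not code from the paper), using exponential smoothing with an assumed constant alpha = 0.25 as the candidate system:

```python
import cmath

def amplitude_ratio(num: list[float], den: list[float], omega: float) -> float:
    """|H(e^{j*omega})| for H(z) = (num[0] + num[1]*z^-1 + ...) / (den[0] + den[1]*z^-1 + ...)."""
    zinv = cmath.exp(-1j * omega)  # z^-1 evaluated on the unit circle
    b = sum(c * zinv**k for k, c in enumerate(num))
    a = sum(c * zinv**k for k, c in enumerate(den))
    return abs(b / a)

# Exponential smoothing, H(z) = alpha / (1 - (1 - alpha)*z^-1): a simple low-pass filter.
alpha = 0.25
for omega in (0.0, 0.5, 1.5, 3.0):
    ratio = amplitude_ratio([alpha], [1.0, -(1.0 - alpha)], omega)
    print(f"omega = {omega:.1f}  amplitude ratio = {ratio:.3f}")
```

Unit gain at ω = 0 (steady demand is tracked) and heavy attenuation near ω = π (rapid fluctuations are rejected) identify the low-pass behaviour directly from the transfer function, for any input.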
The Concept of the ‘Ideal’ Filter
Dynamic systems must be designed to follow command signals (the ‘signal’), and yet at the
same time reject the unwanted disturbances (the ‘noise’). To achieve this objective, systems
may be designed using time domain concepts, or via frequency domain concepts. In either
case, the required performance assessment can be obtained via mathematical analysis (at least
in theory), or by simulation. An advantage of the filter concept is that it forces the “client”
and the system designer to discuss and think carefully about their definitions of “command”
and “noise” signals as appropriate to this specific application. For example, do we or do we
not consciously change workforce levels for weekly changes in demand? Should the answer
be “no” then the designer needs to ensure that weekly variations are adequately filtered out by
the ordering algorithms. If the answer is “yes” then some volatility is permitted.
The concept of the ideal filter may thus be represented as an envelope of amplitude ratio
values that are either one or zero. This is depicted in Fig. 1, which is for the particular case of
the Low Pass Filter. For instance, in supply chains this might correspond to the desired
relationship between marketplace consumption and consequential orders placed on the
factory. So slow stable changes in demand pattern would be considered important, and hence
“followed”. In contrast rapid fluctuations would be seen as random “noise” and be filtered
out.
System Trade-Off Between Capacity Requirements and Stock Fluctuations
Of course, the frequency bands of signal and noise are rarely known with certainty. Indeed,
their ranges may overlap. Nevertheless the ‘ideal filter’ (and especially the departure of a
‘sub-optimal filter’ from this target) is a useful concept in system design and has already been
successfully included in a manufacturing system CAD design suite (Edghill and Towill,
1989). Here the departure of the candidate filter from the ideal filter is used, for a suitable
range of frequencies, as input data to a dynamic performance index to be minimised in the
search for the ‘best’ compromise.
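A numerical sketch of such a performance index, with assumed parameter values of our own (not the actual CAD suite), scores a candidate exponential-smoothing filter by its mean departure from the ideal envelope over a grid of frequencies:

```python
import cmath
import math

def ideal_low_pass(omega: float, omega_c: float) -> float:
    # The ideal envelope of Fig. 1: amplitude ratio 1 in the pass band, 0 above cut-off.
    return 1.0 if omega <= omega_c else 0.0

def candidate_ratio(alpha: float, omega: float) -> float:
    # Amplitude ratio of exponential smoothing: |alpha / (1 - (1 - alpha)*e^{-j*omega})|.
    return abs(alpha / (1.0 - (1.0 - alpha) * cmath.exp(-1j * omega)))

def performance_index(alpha: float, omega_c: float, n: int = 200) -> float:
    # Mean absolute departure from the ideal filter over frequencies 0..pi:
    # the quantity to be minimised in the search for the 'best' compromise.
    omegas = [math.pi * (k + 0.5) / n for k in range(n)]
    return sum(abs(candidate_ratio(alpha, w) - ideal_low_pass(w, omega_c))
               for w in omegas) / n

# Grid search over smoothing constants for an assumed cut-off of 0.5 rad/sample.
scores = {a / 10: performance_index(a / 10, omega_c=0.5) for a in range(1, 10)}
best_alpha = min(scores, key=scores.get)
print(f"best alpha = {best_alpha}, index = {scores[best_alpha]:.3f}")
```

The 'best' smoothing constant here is only as good as the assumed cut-off frequency, which is precisely why the client and designer must agree on what counts as signal and what counts as noise.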
Fig. 2 shows how the filter concept may be applied. The graphs are equally applicable to
individual manufacturing systems, individual non-manufacturing but value-added activities,
and to complete supply chains. It is also possible to deliberately engineer the supply chain to
enable substantially different dynamic responses at different echelons within the chain.
Typical of such an approach is the leagile supply chain which has a carefully selected material
flow decoupling point, usually based on product configuration considerations (Naylor et al.,
1999). Upstream of the decoupling point, orders conform to the Level Scheduling mode.
Downstream of the decoupling point (i.e. nearer the marketplace), orders conform directly to
end customer requirements.
In Fig. 2 we compare the waveforms to be expected at discrete frequencies selected either side
of the cut-off frequency where the “ideal” filter would switch from “transmit” to “reject”.
Here the “order-up-to” model (Lee et al., 1997) amplifies the order pattern across the
frequency range, and also exhibits volatile stock behaviour. In contrast Level Scheduling (Suzaki,
1987) just takes a scythe to all order variations by holding the production order rate constant,
and absorbs all marketplace fluctuations from stock. Conversely, passing-on-orders (used as
an MIT Beer Game benchmark by Sterman, 1989) tries to maintain stock levels constant by
ordering the factory to respond directly to the marketplace. But the “ideal” filter switches so
that it behaves as level scheduling at high frequencies but as pass-on-orders at low
frequencies.
In theory the “ideal” filter enjoys the best of both worlds, i.e. it responds only to real
changes in demand, but buffers out, and takes from stock, the random changes that do not
influence actual needs. In practice the “ideal” filter cannot be physically realised: some
rounding of the corners is inevitable, although a good approximation is achievable, and we
thereby design an ordering algorithm that gives a good compromise. Such a practical filter
response is also shown in Fig. 2. Hence although there are production order rate swings both
above and below the idealised cut-off frequency, they approach the target behaviour in both
cases. Thus low frequency marketplace signals are reasonably tracked, whilst high frequency
“noise” is reasonably rejected. However, such a practical filter cannot be arbitrarily selected;
it has to be matched to customer and system needs, otherwise the “Forrester” effects of
demand amplification (now known as “bullwhip”) and rogue seasonality result (Berry and
Towill, 1995).
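The trade-off of Fig. 2 can also be seen in the time domain. The simulation below is an illustrative sketch of our own (parameter values and policy details assumed, not taken from the paper): level scheduling eliminates order-rate variance but lets stock wander, pass-on-orders holds stock constant but transmits all demand variance to the factory, and a smoothed rule sits between the two:

```python
import random
import statistics

def simulate(policy: str, demand: list[float]) -> tuple[float, float]:
    """Return (bullwhip ratio, stock variance) for one ordering policy."""
    mean_d = statistics.mean(demand)
    alpha, forecast = 0.2, mean_d
    orders, stock = [], [0.0]
    for d in demand:
        if policy == "level":          # Level Scheduling: constant production order rate
            o = mean_d
        elif policy == "pass-on":      # Pass-on-orders: order exactly what was demanded
            o = d
        else:                          # Smoothed: exponentially smoothed demand
            forecast = alpha * d + (1.0 - alpha) * forecast
            o = forecast
        orders.append(o)
        stock.append(stock[-1] + o - d)  # stock absorbs the order/demand mismatch
    return (statistics.variance(orders) / statistics.variance(demand),
            statistics.variance(stock))

random.seed(1)
demand = [100.0 + random.gauss(0.0, 10.0) for _ in range(500)]
for policy in ("level", "pass-on", "smoothed"):
    bullwhip, stock_var = simulate(policy, demand)
    print(f"{policy:8s} bullwhip = {bullwhip:.2f}  stock variance = {stock_var:.1f}")
```

By construction the level scheduler gives a bullwhip ratio of 0 and pass-on-orders a ratio of 1; the smoothed rule trades a little of each, which is the compromise a practical filter embodies.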
Synergies with Aggregate Planning Decision Making
Aggregate planning increases the range of alternatives for capacity usage that must be
considered by management and our review follows the description by Buffa (1969). Despite
the passage of time, his statement of the problems faced by management remains largely
unchanged. The term “aggregate planning” includes scheduling in the sense of a programme;
the terms “aggregate planning” and “aggregate planning and scheduling” are used almost
interchangeably. The economic significance of these ideas is by no means minor. The
concepts raise such broad basic questions as: To what extent should inventory be used to
absorb the fluctuation in demand? Why not absorb all these fluctuations by simply varying
the size of the work force? Why not maintain a fairly stable work force size and absorb
fluctuations by changing production rates through varying work hours? Why not maintain a
fairly stable work force and production rate and let subcontractors wrestle with the problem of
fluctuating order rates? Should the business purposely not meet all demands, or seek to
smooth them via price changes (Hay, 1970)? In most instances it is probably true that any one
of these extreme strategies would not be as effective as a balance among them. Each strategy
has associated costs and, therefore, we seek an astute combination of the alternatives.
If inventories are used to absorb seasonal changes in demand, capital and obsolescence costs
as well as the costs associated with storage, insurance, and handling tend to increase. Beyond
the question of seasonal factors, the use of inventories to absorb short-term fluctuations incurs
increases in these same costs compared to some ideal or minimum inventory level necessary
to maintain the production process. When inventories fall below this ideal or minimum level,
stock-out costs and all costs associated with short runs increase. Changes in the size of the
work force affect the total costs of labour turnover. When new workers are hired, costs arise
from selection, training, and lower production effectiveness. Learning curve factors may then
become dominant. The firing of workers may involve unemployment compensation or other
separation costs, as well as an intangible effect on public relations and public image. Large
changes in the size of the work force may mean adding or eliminating an entire shift; the
incremental costs involved are shift premiums as well as incremental supervision costs and
other overheads.
If fluctuations are absorbed through changes in the production rate, overtime premium costs
will be incurred for increases, and probably idle labour costs (a higher average labour cost per unit)
for decreases. Usually, however, managers try to maintain the same average labour
costs by reducing hours worked to somewhat below normal levels. When under-time
schedules persist, labour turnover and the attendant costs are likely to increase. Many costs
affected by aggregate planning and scheduling decisions are difficult to measure and are not
segregated in accounting records. Some, such as interest costs on inventory investment, are
opportunity costs. Other costs, such as those associated with public relations
and public image (the role of the good employer), are not directly measurable. However, all
of the costs are real and bear on aggregate planning decisions.
In order to achieve a good balance between the conflicting demands of production and
inventory, the OR community has developed a raft of solutions to this problem. These
include those by Holt et al (1960), subsequently known as the HMMS algorithm, and others
by Jones (1967), Orr (1962), Elmaleh and Eilon (1974), Silver (1974) and Lambrecht et al
(1982). Whilst all of these propositions undoubtedly helped to smooth production (sometimes
under very restricted conditions of demand volatility), it was in all cases achieved indirectly.
That is, a cost function (again sometimes dubious in validity) was optimised and it was later
found that some smoothing actually resulted from using the resulting algorithm. We shall
now take a fresh look at one of these classic algorithms from a frequency domain perspective.
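The indirect route can be illustrated with a toy quadratic-cost programme in the HMMS spirit. The cost weights, horizon, and least-squares formulation below are our own simplification, not the HMMS decision rules themselves: we penalise production-rate changes and inventory deviations, minimise, and find that smoothing emerges as a by-product.

```python
import numpy as np

def smooth_plan(demand, w_change=10.0, w_inv=1.0):
    """Choose a production schedule P minimising the quadratic cost
       w_change * sum_t (P_t - P_{t-1})**2  +  w_inv * sum_t I_t**2,
    where I_t = I_{t-1} + P_t - D_t with I_0 = 0, posed as linear least squares."""
    d = np.asarray(demand, dtype=float)
    T = len(d)
    L = np.tril(np.ones((T, T)))            # running-sum operator: I = L @ (P - d)
    D = np.diff(np.eye(T), axis=0)          # first-difference operator: D @ P = P_{t+1} - P_t
    A = np.vstack([np.sqrt(w_change) * D, np.sqrt(w_inv) * L])
    b = np.concatenate([np.zeros(T - 1), np.sqrt(w_inv) * (L @ d)])
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P
```

Note that nothing in this formulation mentions frequencies; the smoothing appears only after the cost function is optimised — which is precisely the “implicit filter” character of the OR approaches listed above.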
HMMS Revisited
In Fig. 3 we have reconstructed the famous HMMS application using their Paint Factory Data.
Time series representing the consumption, order rate, inventory level and workforce level are
also shown. The Coefficient of Variation (Fransoo and Wouters, 2000) has been calculated as
a metric summarising the smoothing capability of the algorithm. The HMMS is clearly
effective. Yet Holt et al (1960) content themselves with statements such as: “With a perfect
forecast (sic) the decision rule avoids, almost completely, sharp month-to-month variations in
production, but responds to fluctuations in orders that have a duration of several months”.
Their statement implies that there is a break point frequency corresponding to a period of
(say) 3 months. As we shall now see, obtaining the frequency response characteristics is
much more insightful.
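The Coefficient of Variation metric referred to above is simple to compute. The helper below is a plain reading of the Fransoo and Wouters (2000) idea — the ratio of the output series' CoV to the input's — with the variable names and example figures being our own:

```python
import numpy as np

def cov(series):
    """Coefficient of Variation: standard deviation divided by the mean."""
    s = np.asarray(series, dtype=float)
    return s.std() / s.mean()

def smoothing_ratio(orders, consumption):
    """Ratio of CoVs in the style of Fransoo and Wouters (2000):
    below 1 indicates smoothing, above 1 indicates amplification (bullwhip)."""
    return cov(orders) / cov(consumption)
```

A smoothing algorithm such as HMMS should yield a ratio well below one when applied to volatile consumption data.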
Figure 4 summarises the frequency response characteristics implicit in the HMMS algorithm
applied to the Paint Factory scenario. We have obtained this estimate via Fourier analysis of
the time series. Hence for a range of integer frequencies we can calculate the amplitude
associated with a particular series. Then for a particular spot frequency the amplitude ratio is
simply the relationship between the CONSUMPTION (the system driver) and (say) ORDER
RATE (the resultant dependent variable. To simplify presentation, Fig 4(a) relates the Fourier
Coefficients for ORATE and CONS, whilst Fig. 4(b) similarly relates LABOUR and CONS.
We have also superimposed, for comparison, the guidelines for perfect rejection and perfect
tracking. Also shown are the somewhat arbitrary “best fit” ideal filters for both cases.
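The empirical frequency response estimate described here can be sketched as follows. The procedure — a Fourier transform of each series, then a bin-by-bin amplitude ratio — follows the text; the function name, the mean removal, and the zero-amplitude guard are our own assumptions:

```python
import numpy as np

def amplitude_ratio(cons, orate):
    """Per-bin ratio of Fourier amplitudes of the output (ORATE) to the
    driving input (CONS); estimates the system's frequency response."""
    c = np.asarray(cons, dtype=float) - np.mean(cons)     # remove the DC level first
    o = np.asarray(orate, dtype=float) - np.mean(orate)
    C = np.fft.rfft(c)
    O = np.fft.rfft(o)
    return np.abs(O) / (np.abs(C) + 1e-12)                # guard near-empty bins
```

For an input dominated by a single sinusoid, the ratio at the corresponding bin recovers the system gain at that spot frequency, which is exactly the quantity plotted in Fig. 4.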
From Fig. 4 and related frequency response plots we are able to “explicitly” describe the
filtering responses of the HMMS algorithm. For example, for ORATE, to a first
approximation we may regard the transition as occurring at a period of 8 months, i.e. shorter
periods are “rejected” and longer periods “tracked”; hence HMMS is “better” than implied by
Holt et al (1960). Of course the switch is not instantaneous, but the concept gives us a real
feel for the filter characteristics of the HMMS algorithm. It is the up-front consideration of
the dynamic properties of the scheduling algorithm which distinguishes the filter and OR
approaches. In contrast, the “switch” occurs at a period of 16 months for LABOUR. Shorter
periods are rejected. This is as expected, and means that the Paint Company is basically
planning the workforce over an annual period. Shorter fluctuations are then met by a
combination of inventory usage and overtime/under time working.
Relevance of the “Old” Aggregate Planning Approach to the “New” Supply Chain
Dynamics
We saw how Aggregate Planning Decision making was posed by the OR community as a
balancing problem between the competing needs of production control and inventory
management. Later the filter characteristics of one esteemed (HMMS) algorithm were
determined a posteriori from time series generated from the classic paint factory case study.
Why is this relevant to modern supply chain dynamics? The answer is contained in Fig. 2,
since there is no basic difference (except time scale) between aggregate planning decision
making and ordering algorithms relevant to modern supply chains, such as those discussed
in Dejonckheere, Disney, Lambrecht and Towill (2000, 2002), which exploit the
explicit filter approach. Thus Fig. 2, in one simple sketch, has covered the whole spectrum of
possibilities from which the supply chain designer can select the appropriate decision rules.
For example we showed that a Fixed Production Rate corresponds to Lean Production; Pass
on Orders corresponds to Agile Production; and Leagile Production is the appropriate mix of
Lean and Agile modes. Note that here we have used the description “Production” to describe
a value-added process within a supply chain. Our arguments clearly apply to such other
value-added processes as Acquisition and Delivery and in new configurations such as VMI.
These value-added processes are then combined (and ideally integrated, Stevens, 1989) to
deliver in accordance with end-customer expectations. Hence in seeking modern supply chain
solutions we need not entirely abandon lessons learned from the aggregate planning research
era. In particular, our practical explicit filter dynamically relates to the HMMS algorithm.
So far we have been discussing what is common between aggregate planning and modern
supply chain operations. But since the early breakthrough in supply chain dynamics was
made by Jay Forrester in 1958, it is reasonable to also highlight the differences. Christopher
and Towill (2000) have argued that the last two decades have seen major changes driven by
marketing and customer pressures. The result is a move away from “traditional” supply
chains (we know what the customer wants) through “product driven” supply chains (we
supply what we think the customer wants) to market driven supply chains (we supply what is
selling well) to “customised” supply chains (we supply what the individual customer wants).
Quite apart from the general driving out of waste using Lean Thinking Principles (Womack
and Jones, 1996) there has been tremendous pressure to increase the speed of response of the
delivery process (Lowson et al, 1999). It also needs to be matched to what the customer
actually wants, and hence avoid obsolescence via excess stock holding in the chain. So it is
back to the same balancing problem already met in aggregate planning.
Conclusion
This paper has compared a modern supply chain design procedure (termed the explicit filter
approach) to a traditional cost minimisation approach that has been applied to production
scheduling (termed the implicit filter approach). Although the results of both design
strategies were broadly similar, the route to the solution was different. The
implicit filter approach, developed to minimise a pseudo cost function, has the helpful side
effect of reducing Bullwhip. The explicit filter approach was developed with the specific aim
of investigating and avoiding the Forrester Effect or Bullwhip Effect; as a helpful side
effect it also reduces a pseudo cost function. The real advantage, however, of the
explicit filter approach is that it is possible to determine where (in the frequency range) and
how much (the amplitude) Bullwhip is generated by the scheduling algorithm, whereas, with
the implicit filter approach based on cost minimisation this is not so transparent. Thus
shaping the response to meet supply chain requirements is easier since it becomes central to
our design methodology.
References
Adelson, R.M., 1966. “The dynamic behaviour of linear forecasting and scheduling rules.”
OR Quarterly 17(4), 447-462.
Berry, D., and Towill, D.R., 1995. “Reduce costs - use a more intelligent production and
inventory policy.” BPICS Control Journal, 21(7), 26-30.
Bertrand, J.W.M., 1986. “Reducing production level variations and inventory variations in
complex production systems.” Int. Jnl. Prod. Res. 24(5), 1059-1074.
Buffa, E.S., 1969. Modern Production Management. John Wiley and Sons, Inc. New York.
Burbidge, J.L., 1984. “Automated production control with a simulation capability.” Proc.
IPSP Conf. WG 5-7, Copenhagen.
Christopher, M. and Towill, D.R., 2000. “Supply chain migration – from lean and functional
to agile and customised.” Int. Jnl. SCM. 5(4), 206-213.
Christopher, M., 1997. Marketing Logistics. Butterworth Heinemann, Oxford.
Dejonckheere, J., Disney, S.M., Lambrecht, M.R. and Towill, D.R., 2000. “Matching your
orders to the needs and economics of your supply chain.” Proc. EUROMA 2000 Conf,
174-181.
Dejonckheere, J., Disney, S.M., Lambrecht, M.R. and Towill, D.R., 2002. “Transfer
function analysis of forecasting induced bullwhip in supply chains.” To be published,
Int. Jnl. Prod. Econ.
Edgehill, J.S., and Towill, D.R., 1989. “Dynamic behaviour of fundamental manufacturing
system design strategies.” Annals of CIRP, 38, 465-469.
Elmaleh, J., and Eilon, S., 1974. “A new approach to production smoothing.” Int. Jnl.
Prod. Res., 12(6), 673-681.
Forrester, J., 1958. “Industrial dynamics, a major breakthrough for decision makers”,
Harvard Business Review, July-August, 67-96.
Forrester, J., 1961. Industrial dynamics, Cambridge MA, MIT Press.
Fransoo, J.C., and Wouters, M.J.F., 2000. “Measuring the Bullwhip Effect in the Supply
Chain.” Int. Jnl. Sup. Ch. Man. 5(2), 78-89.
Hay, G.A., 1970. “Production, price, and inventory theory.” American Economic Review,
60(4), September, 531-45.
Holt, C.C., Modigliani, F., Muth, J.F and Simon, H.A., 1960. Production Planning,
Inventories, and Workforce. Prentice-Hall, Englewood Cliffs, NJ.
Jones, C.H., 1967. “Parametric production planning.” Man. Sci. 13(11), 843-866.
Lambrecht, M.R., Luyten, R. and Eecken, J.V., 1982. “The production switching heuristic
under non-stationary demand.” ENCOST, 7, 55-61.
Lee, H.L., Padmanabhan, V. and Whang, S., 1997. “The bullwhip effect in supply chains.”
Sloan Management Review, Spring, 93-102.
Lowson, R., King, R., and Hunter, A., 1999. Quick Response: Managing the Supply Chain to
Meet Consumer Demand. John Wiley and Sons Ltd., Chichester.
McHugh, P., Merli, G. and Wheeler, W.A., III, 1995. Beyond Business Process Re-
Engineering. John Wiley Inc. Chichester.
Naylor, J.B., Naim, M.M. and Berry, D., 1999. “Leagility: Integrating the lean and agile
manufacturing paradigms in the total supply chain.” Int. Jnl. Prod. Econ., 62, 107-118.
Orr, D., 1962. “A random walk production-inventory policy.” Man. Sci., 9, 109-??
Silver, E.A., 1974. “A control system for co-ordinated inventory replenishment.” Int. Jnl.
Prod. Res, 12(6), 647-671.
Simon, H.A., 1952. “On the Application of Servomechanism Theory to the Study of
Production Control”, Econometrica, Vol. 20, pp 247-268.
Sterman, J., 1989. “Modelling managerial behaviour: Misperceptions of feedback in a
dynamic decision-making experiment”, Management Science, 35, (3), pp 321-339.
Stevens, G., 1989. “Integrating the supply chain”, International Journal of Physical
Distribution and Logistics Management, Vol. 19, No. 8, pp3-8.
Suzaki, K., 1987. The New Manufacturing Challenge. The Free Press, New York.
Towill, D.R. and Del Vecchio, A., 1994. “The application of filter theory to the study of
supply chain dynamics.” Prod. Plan. and Cont. 5(1), 82-96.
Towill, D.R. and McCullen, P., 1999. “The impact of an agile manufacturing programme on
supply chain dynamics.” Int. Jnl. Log. Man. 10(1), 83-96.
Towill, D.R., 1982. “Dynamic analysis of an inventory and order based production control
system.” Int. Jnl. Prod. Res. 20, 671-687.
Tustin, A., 1952. The Mechanism of Economic Systems. Heinemann Ltd., London.
Vassian, H.F., 1955. “Application of discrete variable servo theory to inventory control.”
Jnl. ORSA, 3(3), 272-282.
Womack, J.P., and Jones, D.T., 1996. Lean Thinking. Simon and Schuster, NY.
CHARACTERISTICS         | OR APPROACH                                   | FILTER APPROACH
------------------------|-----------------------------------------------|-----------------------------------------------
System Model            | Integral/difference equations                 | Transfer functions
Typical Assumed Stimuli | Random excitation                             | Sinusoidal excitation
Methods of Analysis     | s/z transforms, probability theory            | s/z transforms, Fourier transforms
Performance Criteria    | Production/inventory variances                | Production/inventory power spectra
Optimisation Procedure  | Minimise quadratic cost function              | Minimise deviation from “ideal” filter
Design Emphasis         | Implicitly smooth production/inventory swings | Explicitly smooth production/inventory swings
Bullwhip Consequences   | Somewhat arbitrary                            | Reduce by design
Financial Implications  | Precise according to cost function            | Somewhat arbitrary

Table I Comparison of the OR and Filter Approaches to DSS Selection in Supply Chain Design
[Figure: amplitude ratio versus frequency, with unity gain over the “message” frequency range below the cut-off frequency wc and zero gain over the “noise” frequency range above it]
Fig 1 The “Ideal” Low Pass Filter
[Figure: schematic comparing five ordering strategies — “order-up-to”, fixed production rate (Level Scheduling), fixed inventory level (Pass on orders), ideal filter, and practical filter — showing the production order rate and inventory swings each produces for low frequency and high frequency demand patterns]
Fig. 2. Application of Filter Concept to Supply Chains