This document provides a framework for modeling and controlling manufacturing systems using concepts from systems and control theory. It introduces key manufacturing metrics like throughput, flow time, work-in-process (WIP), and utilization. Effective process times can be used to model manufacturing systems as queuing networks. For mass production, systems can be modeled as linear systems subject to nonlinear constraints called clearing functions, which capture the tradeoff between throughput and flow time. Model predictive control techniques can then be used to design controllers for these manufacturing system models based on the clearing function representations.
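A minimal sketch of the kind of model being described, assuming a standard fluid/clearing-function formulation (the symbols and the saturating form below are illustrative, not taken from the document): WIP at a workstation evolves as a linear balance equation while output in each period is capped by a concave clearing function,

    x_{k+1} = x_k + u_k - y_k, \qquad y_k \le f(x_k), \qquad \text{e.g. } f(x) = \mu x / (k_0 + x),

where x_k is the WIP, u_k the material released, and y_k the output in period k; Little's law, WIP = throughput \times flow time, then expresses the throughput/flow-time tradeoff that the clearing function encodes.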
- Oral cytology has historically been used to study oral cancer and precancerous lesions, beginning with comparative studies of oral and cervical cytology in the 1940s and 1950s.
- Techniques evolved from simple smears to include spatulas, brushes, and scrapings to improve cell collection from deeper layers of the oral mucosa. However, oral cytology was limited in its ability to screen for early cancers compared to cervical cytology due to a lack of a defined transformation zone in the oral cavity.
- The introduction of the oral brush in the 1980s significantly improved oral cytology by allowing collection of cells from deeper mucosal layers with minimal invasion and improved accuracy in detecting dysplastic lesions. While
Plants contain thousands of compounds and are a valuable source of new drugs. High-performance thin-layer chromatography (HPTLC) is a simple and economical analytical method useful for characterizing herbal compounds. HPTLC provides better separation and repeatability compared to traditional thin-layer chromatography. It can be used to develop standardized herbal extracts, isolate pure compounds, and determine compound structures. HPTLC is also used to create chemical "fingerprints" of herbal products for quality evaluation and authentication. The HPTLC process involves selecting a stationary phase, applying samples, developing the plate, detecting and quantifying separated compounds, and documenting results.
This document provides a detailed overview of the MRI anatomy of the anorectal region:
The document describes the MRI appearance and anatomy of structures in the anorectal region such as the internal and external anal sphincters, levator ani muscles, and intersphincteric spaces. Key imaging planes such as mid-coronal and mid-sagittal are discussed for visualizing the layers and relationships between anatomical structures in the region. Standardized terminology is introduced for communicating imaging findings, including use of the "anal clock" for describing sites of fistulas and other lesions.
Handbook of healthcare operations management (Springer)
The document discusses queueing models for healthcare operations. It begins by introducing common queueing terminology like customers, service facilities, servers, and performance metrics like wait time. It then summarizes key results for several basic single-server queueing models including M/M/1, M/G/1, and GI/G/1. These models provide useful insights for healthcare managers despite making simplifying assumptions compared to real-world systems.
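For reference, the standard steady-state results for the M/M/1 queue that such chapters summarize are (with arrival rate \lambda, service rate \mu, and utilization \rho = \lambda/\mu < 1; these are textbook formulas, not quoted from the document):

    L = \rho / (1 - \rho), \qquad L_q = \rho^2 / (1 - \rho), \qquad W = 1 / (\mu - \lambda), \qquad W_q = \rho / (\mu - \lambda),

linked by Little's law L = \lambda W, where L and L_q are the mean numbers in system and in queue and W and W_q the corresponding mean times.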
This document discusses the advantages of blastocyst stage embryo transfer compared to cleavage stage embryo transfer, particularly in the context of oocyte donation. Key points include:
- Blastocyst stage transfer has been shown to improve outcomes like implantation and pregnancy rates compared to cleavage stage transfer based on evidence from IVF studies.
- In oocyte donation specifically, studies have found significantly higher implantation and pregnancy rates per embryo transfer with blastocyst stage transfer compared to cleavage stage.
- The development of improved embryo culture systems has made successful blastocyst development and transfer more routine. Blastocyst transfer also allows more information about embryo developmental potential.
Handbook of growth and growth monitoring in health and disease (Springer)
This chapter discusses the concepts of phenotypic induction and programming, which refer to the long-term effects of environmental stimuli during early life periods of development. The chapter presents a model showing how metabolic capacity develops during fetal life and infancy through critical windows, while metabolic load emerges from childhood onwards from traits like tissue mass and diet. A high ratio of metabolic load to capacity increases risk of diseases by altering blood pressure, insulin, and lipid metabolism. Studies show individuals born small who become large as adults have highest risk. Overall growth patterns, not any single period, determine later disease risk as load interacts with capacity throughout life.
The document discusses representing relational spatiotemporal data using information granules. It proposes:
1) Describing the relational data using a vocabulary of granular descriptors formed from Cartesian products of spatial, temporal, and signal information granules. This granular representation provides an interpretable perspective on the data.
2) Analyzing the capabilities of different vocabularies to capture the essence of the data through the processes of granulation and degranulation, where the original data is reconstructed from its granular representation. The quality of reconstruction is used to optimize the vocabulary.
3) Extending the approach to analyze evolvability of the granular description as the relational data changes across consecutive
This document discusses the development of organized markets for trading patents. It argues that patents have traditionally been analyzed as providing monopolies for integrated firms, but they can also be viewed as creating a competitive technology market. Patents have both an "investment value" for developing new technologies and a "blocking value" for preventing competitors' access. Most analysis has been static, but a dynamic system is proposed to study competitive bidding, gains from specialization, and price signaling. The system uses different auction mechanisms and assumes patents have legal validity. It aims to better allocate inventions by managing risks of new product development versus maintaining existing products. Historically, patent exchange has shifted from personal negotiations to potentially impersonal competitive bidding, as organized markets develop.
This document discusses the use of graphical user interfaces and pulmonary graphics in neonatal respiratory care. It covers:
1) How graphics monitors can optimize mechanical ventilation parameters, evaluate infant effort and response to treatment, and assess respiratory waveforms, loops, and mechanics.
2) The components and features of graphical user interfaces, including how they provide real-time feedback and are useful teaching tools.
3) The types of graphic waveforms including pressure, flow, and volume and what each can show about ventilation and patient status.
4) Graphic loops, including pressure-volume and flow-volume loops and what they reveal about lung mechanics, compliance, resistance, and adequacy of ventilation.
The new public health and STD/HIV prevention (Springer)
This document discusses social determinants of sexually transmitted infections. It explores how social factors like education, occupation, neighborhoods, and media can influence sexual behaviors and networks, thereby affecting STI spread. Key determinants of STI transmission include likelihood of transmission during sex, number of sexual partners, and partnership patterns. Factors like consistent condom use, access to healthcare, sex education, sexual network patterns, and timing of partnerships all influence STI rates at a population level.
Prominin: new insights on stem & cancer stem cell biology (Springer)
Prominin-2 is a pentaspan membrane glycoprotein that is structurally related to but encoded by a distinct gene from prominin-1. Both prominins are selectively concentrated in plasma membrane protrusions through direct interaction with membrane cholesterol. However, prominin-2 differs from prominin-1 in having a nonpolarized distribution across the apical, basal, and lateral membranes of polarized epithelial cells. Prominin-2 is also released in small extracellular vesicles from both the apical and basolateral surfaces, unlike prominin-1 which is only released apically. Insights into the distinctive roles of prominin-1 and prominin-2 may be gained by analyzing their evolutionary history and characteristics in other species.
This document discusses nanomaterials for biosensors and implantable biodevices. It describes how nanostructured thin films have enabled the development of more sensitive electrochemical biosensors by increasing detection of specific molecules. Two common techniques for creating nanostructured thin films are discussed - Langmuir-Blodgett films and layer-by-layer films using polyelectrolytes. These techniques allow control over film thickness at the nanoscale and have been used with various biomolecules to create biosensors for applications like glucose detection. Recent advances include using these materials and techniques to create miniaturized and implantable biosensors for real-time monitoring.
This document discusses laser drilling of Inconel alloys and assessing the quality of drilled holes. It presents the results of a study that used a factorial design experiment to investigate the effects of laser output energy, pulse length, focus setting, drilling ambient pressure, and workpiece thickness on hole geometry. Both qualitative and quantitative analyses were performed on the drilled holes. Key findings from the qualitative analysis included that ambient pressure had a significant effect on the amount of resolidified material around holes. The interactions of pressure-thickness and pressure-focus setting were also found to significantly impact hole features.
Anesthesia and perioperative care for aortic surgery (Springer)
Acute aortic syndrome describes life-threatening conditions of the thoracic aorta including aortic dissection, intramural hematoma, and penetrating atherosclerotic ulcer. These conditions involve acute lesions of the aortic tunica media and can be distinguished by imaging techniques. Recent advances in imaging and treatment have increased awareness of these conditions and emphasized the importance of early diagnosis and management. The most common cause is aortic dissection, which occurs when blood tears the intimal layer, separating the media into an inner and outer layer. Other causes include intramural hematoma where blood enters the media without an intimal tear, and penetrating ulcers where atherosclerotic lesions erode the intima and media. Proper
This chapter introduces discrete and continuous dynamical systems through examples. Discrete examples include rotations and expanding maps of the circle, as well as endomorphisms and automorphisms of the torus. Continuous examples include flows generated by autonomous differential equations. Periodic points are also defined and analyzed for specific examples. Basic constructions for building new dynamical systems from existing ones are described.
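Two of the discrete examples mentioned can be stated explicitly (standard definitions given here for orientation, not quoted from the chapter): the circle rotation and the expanding map of the circle,

    R_\alpha(x) = x + \alpha \pmod{1}, \qquad E_m(x) = m x \pmod{1}, \quad |m| \ge 2,

while a hyperbolic toral automorphism is induced by an integer matrix of determinant \pm 1, such as \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} acting on \mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2.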
This document discusses the use of cardiac magnetic resonance imaging (cMRI) in catheter-based radiofrequency ablation procedures for cardiac arrhythmias. It begins by providing background on the evolution of ablation procedures from initial fluoroscopy-based navigation to current electroanatomical mapping systems combined with imaging modalities like cMRI. It then focuses on specific applications of cMRI for atrial fibrillation and ventricular tachycardia ablation, including assessing anatomy, characterizing tissue, integrating with mapping systems, and evaluating safety. Details are provided on cMRI protocols, quantification of fibrosis, and using cMRI to stage atrial fibrillation and predict ablation outcomes.
This preface discusses the author's journey into physics and mathematics. As an undergraduate physics student, the author struggled to understand quantum mechanics but found that formalizing concepts mathematically helped. The author later became interested in quantum field theory and continued refining this textbook over many years by teaching students and incorporating their feedback. The preface thanks many colleagues for their role in improving the work and translating it to English.
This document summarizes Chapter 2 of a textbook on functional analysis in mechanics. It introduces Sobolev spaces, which are function spaces used to model mechanical problems. Sobolev spaces allow for generalized notions of derivatives of functions. The chapter discusses imbedding theorems for Sobolev spaces, which describe how functions in one Sobolev space can be mapped continuously or compactly to other function spaces. It provides examples of imbedding properties for specific Sobolev spaces over different domains.
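A representative imbedding statement of the type discussed (the classical Sobolev/Rellich-Kondrachov result, stated here from general knowledge rather than from the chapter): for a bounded Lipschitz domain \Omega \subset \mathbb{R}^n and 1 \le p < n,

    W^{1,p}(\Omega) \hookrightarrow L^q(\Omega) \quad \text{continuously for } q \le p^* = np/(n-p), \quad \text{and compactly for } q < p^*,

so sequences bounded in W^{1,p} have subsequences converging strongly in the lower-exponent L^q spaces.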
Time domain finite element methods for Maxwell's equations in metamaterials (Springer)
This section introduces discontinuous Galerkin (DG) methods for solving time-dependent Maxwell's equations in dispersive media and metamaterials. It first briefly reviews DG methods and their application to Maxwell's equations in free space. It then discusses several DG methods that have been developed for Maxwell's equations in dispersive media, including analyses of error estimates. Finally, it presents the modeling equations for electromagnetic propagation in cold plasma and outlines a semi-discrete DG scheme for solving these equations.
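The cold plasma model referred to is commonly written as Maxwell's equations coupled to a Drude-type polarization current (a standard form given here for orientation; the chapter's notation may differ):

    \epsilon_0 \partial_t \mathbf{E} = \nabla \times \mathbf{H} - \mathbf{J}, \qquad \mu_0 \partial_t \mathbf{H} = -\nabla \times \mathbf{E}, \qquad \partial_t \mathbf{J} + \nu \mathbf{J} = \epsilon_0 \omega_p^2 \mathbf{E},

where \omega_p is the plasma frequency and \nu the collision frequency; a semi-discrete DG scheme approximates the curl terms element by element with numerical fluxes, while the current equation is local to each element.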
Information quality and management accounting (Springer)
This document discusses management accounting and provides definitions of key terms:
1) It defines management accounting as collecting, recording, and reporting accounting and statistical data to decision makers, and outlines its roles in providing information.
2) It distinguishes between management accounting data, which are raw unstructured facts, and management accounting information, which is processed data presented in a useful way for decision makers.
3) Management accounting information can facilitate decisions by informing them, and can also influence decisions by shaping agendas and issues.
1) The document discusses generating random numbers with specified distributions for use in simulations and finance modeling.
2) It describes how linear congruential generators are commonly used to generate uniformly distributed random numbers by calculating values modulo a large integer.
3) Quality requirements for random number generators include having a long period before repeating, passing statistical tests for the desired distribution, and being uniformly distributed in multi-dimensional spaces without clustering along hyperplanes.
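A minimal sketch of the linear congruential method described above, using the classic Park-Miller "minimal standard" parameters (the choice of constants and the helper names are illustrative, not prescribed by the document):

#include <stdio.h>
#include <stdint.h>

/* Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
   Park-Miller "minimal standard": a = 16807, c = 0, m = 2^31 - 1. */
static uint32_t lcg_state = 1;      /* seed must be in [1, m-1] */

static double lcg_uniform(void) {
    const uint64_t a = 16807, m = 2147483647;
    lcg_state = (uint32_t)((a * lcg_state) % m);   /* 64-bit product avoids overflow */
    return (double)lcg_state / (double)m;          /* uniform in (0, 1) */
}

int main(void) {
    for (int i = 0; i < 5; i++)
        printf("%f\n", lcg_uniform());   /* e.g. feed into inverse-CDF sampling */
    return 0;
}

The period and multi-dimensional uniformity requirements mentioned above are exactly what rule out careless choices of a, c, and m in practice.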
This document summarizes various congenital abnormalities and malformations that can occur in the spleen, including accessory spleens, intrapancreatic spleens, polysplenia, splenosis, and splenogonadal fusion. Accessory spleens are small nodules of splenic tissue that are present in around 10% of the population. Intrapancreatic spleens are rare and involve small fragments of splenic tissue entrapped within the pancreas. Polysplenia involves the spleen developing as separate lobes that do not fully fuse. Splenosis occurs after splenic trauma when fragments of splenic tissue implant elsewhere in the abdomen. Splenogonadal fusion is a rare condition where
This chapter discusses the essential requirements for designing a friction material composite system for brakes. It provides calculations for torque, braking ratio, and inertia that are needed as inputs for designing the friction material. Designing the material involves considering its coefficient of friction, stability over operating conditions, and other factors. The original equipment manufacturer would provide a design drawing with critical dimensions and specifications to use for testing and approval of the friction material design.
This document provides information about the Simulation Modelling & Analysis laboratory course for the 6th semester Industrial Engineering program. It includes 12 exercises that involve using simulation software like Arena and Microsoft Excel to model and analyze systems like queues, production lines, inventory systems, and networks. The exercises involve generating random data, fitting distributions, building simulation models, running simulations, and analyzing output statistics over multiple replications. The goal of the laboratory course is for students to develop simulation models and apply simulation methodology to solve industrial engineering problems.
Balancing WIP and Throughput with Machine Utilization in a Manufacturing Faci... (IDES Editor)
Lean manufacturing is a philosophy with a series of techniques for any manufacturing company to reach the top of production performance and stay there. For many companies, however, the teething pains of change and the steep climb are too much to bear and sustain. A number of weak points in a company can make it difficult, if not impossible, to achieve the promised gains for the effort put in. Implementing lean requires a careful understanding and application of the concepts of throughput, cycle time, work-in-process, and machine utilization. This paper discusses some of the basic lean concepts related to throughput, cycle time, and WIP that companies should follow when transitioning to a lean production system.
This document summarizes the results of simulating a manufacturing system model with three parts: arrival of components, processing of components, and the main production process. The simulation found a bottleneck at machine 2, with the longest queue times. Several proposals to improve system performance were explored using a process analyzer tool, with scenario 2 found to spread queue times more evenly and reduce bottlenecks at a reasonable cost, and scenario 5 found to give the best results by further reducing wait times and balancing utilization across resources.
This document presents a mathematical model for analyzing a generic single channel, multi-phase production line. The model aims to minimize system costs by reducing idle machine times and work-in-process inventory levels between machines. The model accounts for machine cycle times and calculates the times at which products enter and exit each machine in the production line. It assumes deterministic arrival rates and develops equations to determine the optimal level of service to minimize the total expected costs of providing service and of waiting for service.
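The optimization described is usually the classical service-versus-waiting cost tradeoff (a generic statement with illustrative symbols, not the paper's exact formulation):

    \min_s \; E[TC(s)] = c_s s + c_w L(s),

where s is the level of service capacity provided, c_s its cost per unit, c_w the cost of waiting per job per unit time, and L(s) the resulting mean number of jobs waiting in the system.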
Manufacturing models and metrics are used to quantify performance characteristics. Key metrics include production rate, cycle time, capacity, utilization, availability, lead time, and work-in-process. Mathematical models describe how these metrics are calculated for different production environments like batch, job shop, and flow production. Cost models classify costs as fixed or variable to determine total cost. Overhead rates are also calculated to determine hourly equipment and labor costs. Averaging procedures are described for calculating grand averages when multiple products are produced.
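Typical textbook definitions of these metrics (common forms stated here for orientation; the document's exact notation may differ):

    R_p = 1 / T_p, \qquad U = Q / PC, \qquad A = (MTBF - MTTR) / MTBF, \qquad WIP = R \times MLT,

where T_p is the average production time per unit, Q the quantity actually produced, PC the production capacity, MTBF and MTTR the mean times between failures and to repair, and the last relation is Little's law with R the throughput rate and MLT the manufacturing lead time.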
This document discusses worker-machine systems and different types of automation. It defines worker-machine systems as those involving a worker operating powered equipment. The key strengths of humans and machines in such systems are described. Different types of powered equipment, classifications, numbers of workers/machines, and levels of operator attention are outlined. The document provides examples of calculating cycle times, work requirements, and the effects of factors like learning, efficiency, defects, and availability. Overall it provides an overview of analyzing worker-machine systems and determining resource needs.
Lab 3 of 7 Process Management Simulation LAB OVERVIEW.docx (festockton)
Lab 3 of 7: Process Management Simulation
LAB OVERVIEW
Scenario/Summary
Process Management Simulation (Part 3 of 3)
The objective of this three-section lab is to simulate four process management functions: process creation, replacing the current process image with a new process image, process state transition, and process scheduling.
This lab will be due over the first three weeks of this course. The commander process program is due in Week 1. This program will introduce the student to system calls and other basic operating system functions. The process manager functions "process creation, replacing the current process image with a new process image, and process state transition" are due in Week 2. The scheduling section of the process manager is due in Week 3.
You will use Linux system calls such as fork(), exec(), wait(), pipe(), and sleep(). Read the man pages of these system calls for details.
This simulation exercise consists of three processes running on a Linux environment: commander, process manager, and reporter. There is one commander process (this is the process that starts your simulation), one process manager process that is created by the commander process, and a number of reporter processes that get created by the process manager, as needed.
1. Commander Process:
The commander process first creates a pipe and then the process manager process. It then repeatedly reads commands from the standard input and passes them to the process manager process via the pipe. The commander process accepts four commands:
1. Q: End of one unit of time.
2. U: Unblock the first simulated process in the blocked queue.
3. P: Print the current state of the system.
4. T: Print the average turnaround time, and terminate the system.
Command T can only be executed once.
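The command-forwarding loop described above maps directly onto a handful of POSIX calls. The following minimal sketch is illustrative only: the variable and helper names are assumptions, and the child's body is a placeholder for the real process manager required by the assignment.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();             /* create the process manager */
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                 /* child: process manager (placeholder) */
        close(fd[1]);               /* it only reads commands */
        char cmd;
        while (read(fd[0], &cmd, 1) == 1) {
            /* Q, U, P, T would be handled here by the real process manager */
            if (cmd == 'T') break;
        }
        close(fd[0]);
        exit(0);
    }

    /* parent: commander reads commands from stdin and forwards them */
    close(fd[0]);
    char line[16];
    while (fgets(line, sizeof line, stdin) != NULL) {
        char cmd = line[0];
        write(fd[1], &cmd, 1);
        if (cmd == 'T') break;      /* T terminates the whole simulation */
        sleep(1);                   /* pace the simulation: one command per time unit */
    }
    close(fd[1]);
    wait(NULL);                     /* wait for the process manager to exit */
    return 0;
}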
1.1 Simulated Process:
Process management simulation manages the execution of simulated processes. Each simulated process is comprised of a program that manipulates the value of a single integer variable. Thus the state of a simulated process at any instant is comprised of the value of its integer variable and the value of its program counter.
A simulated process's program consists of a sequence of instructions. There are seven types of instructions, as follows:
1. S n: Set the value of the integer variable to n, where n is an integer.
2. A n: Add n to the value of the integer variable, where n is an integer.
3. D n: Subtract n from the value of the integer variable, where n is an integer.
4. B: Block this simulated process.
5. E: Terminate this simulated process.
6. F n: Create a new (simulated) process. The new (simulated) process is an exact copy of the parent (simulated) process. The new (simulated) process executes from the instruction immediately after this (F) instruction, while the parent (simulated) process continues its execution n instructions after the next instruction.
7. R filename: Replace the program of .
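As a rough illustration of how a process manager might interpret such a program, the sketch below executes one instruction of a simulated process per call. The struct fields and helper names are assumptions, not part of the assignment text, cloning on F is left to the caller, and instruction R is omitted because its description is truncated above.

#include <stdio.h>

#define MAX_PROG 64

typedef struct {
    char op;        /* 'S', 'A', 'D', 'B', 'E', 'F' */
    int  arg;       /* n for S, A, D, F; unused otherwise */
} Instr;

typedef struct {
    int   value;            /* the single integer variable */
    int   pc;               /* program counter */
    int   ninstr;
    Instr prog[MAX_PROG];
} SimProc;

/* Execute one instruction of p; return 0 normally, or the opcode for
   instructions the process manager must act on (B, E, F). */
int step(SimProc *p) {
    Instr in = p->prog[p->pc++];
    switch (in.op) {
        case 'S': p->value  = in.arg; return 0;   /* set */
        case 'A': p->value += in.arg; return 0;   /* add */
        case 'D': p->value -= in.arg; return 0;   /* subtract */
        case 'B': return 'B';                     /* block this process */
        case 'E': return 'E';                     /* terminate this process */
        case 'F':
            /* A clone would start at the current pc (the instruction after F);
               the parent then continues n instructions further on. */
            p->pc += in.arg;
            return 'F';
        default:  fprintf(stderr, "bad op %c\n", in.op); return 'E';
    }
}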
This document provides an introduction to simulation. It begins with an example of simulating a one-teller bank manually to illustrate modeling concepts. Key concepts discussed include the definition of simulation, modeling concepts such as systems, events, entities and attributes. The document also discusses the advantages of simulation in allowing users to ask "what if" questions and analyze system behavior without making costly changes to real systems. It notes both computer simulation and manual simulation have limitations in complexity. Overall the document provides a high-level overview of simulation methodology and concepts.
1) The document describes a case study of improving the productivity of an automotive engine assembly line through line balancing techniques.
2) Initially, the assembly line had 39 workstations and produced 225 engines per shift. Through time studies and rearranging activities across workstations, non-value added activities were reduced and processing times were equalized to the takt time of 90 seconds.
3) After implementing line balancing improvements like reducing motions and rearranging activities, productivity increased 16% to 262 engines per shift while maintaining the same number of workers. The line efficiency also improved from 64.19% initially to 71.43% after balancing.
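The quantities used in the study follow the standard line-balancing definitions (stated generically here rather than recomputed from the paper's data):

    \text{takt time} = \frac{\text{available working time per shift}}{\text{required output per shift}}, \qquad \text{line efficiency} = \frac{\sum_i t_i}{n \cdot c} \times 100\%,

where t_i are the station task times, n the number of workstations, and c the cycle time of the slowest (bottleneck) station, which the balancing effort tries to bring down toward the takt time.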
This document discusses simulation examples of queueing systems. It describes three key steps in simulations: 1) determining input characteristics, 2) constructing a simulation table, and 3) generating input values and evaluating responses over repetitions. It then provides details on simulating a single-channel queue, including modeling arrivals and services as probability distributions and tracking events in a simulation table. The document concludes with an example simulation of customers arriving at and being served by a single checkout counter.
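A compact sketch of the simulation-table bookkeeping for a single-channel queue follows. The samplers and parameters are placeholders standing in for the discrete probability tables used in the textbook example, and the column names are assumptions.

#include <stdio.h>
#include <stdlib.h>

#define N 20   /* number of customers to simulate */

/* Placeholder samplers: uniform integer minutes in lieu of the example's tables. */
static int interarrival(void) { return 1 + rand() % 8; }
static int service(void)      { return 1 + rand() % 6; }

int main(void) {
    srand(42);
    int arrival = 0, prev_departure = 0;
    double total_wait = 0.0;

    printf("cust  arrive  start  service  depart  wait\n");
    for (int i = 1; i <= N; i++) {
        arrival += interarrival();                         /* clock time of arrival */
        int start  = arrival > prev_departure ? arrival : prev_departure;
        int svc    = service();
        int depart = start + svc;
        int wait   = start - arrival;                      /* time spent in queue */

        printf("%4d  %6d  %5d  %7d  %6d  %4d\n",
               i, arrival, start, svc, depart, wait);

        total_wait += wait;
        prev_departure = depart;
    }
    printf("average wait in queue: %.2f\n", total_wait / N);
    return 0;
}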
Write a program in C or C++ which simulates CPU scheduling in an opera.pdf (sravi07)
Write a program in C or C++ which simulates CPU scheduling in an operating system. There is only one CPU. The scheduling algorithm you will implement is FCFS. You can implement Round Robin for extra credit. You are suggested but not required to use standard template library data structures such as vector and deque.
This program is lengthier than previous assignments. Please allocate sufficient time.
Assumptions:
(a) We will assume the processes engage in CPU bursts, Input bursts, and Output bursts, ignoring other interrupts.
(b) We will assume that all processes are doing Input through the same device, which can process one Input burst at a time.
(c) We will assume that all processes are doing Output through the same device, which can process one Output burst at a time.
(d) We will assume the system starts out with no processes active. There may be processes ready to start at once.
Data structures:
You will need a struct or class to represent one process.
The program requires 4 queues: Entry, Ready, Input and Output. You can use deque for the queues (unless you prefer to write your own implementation of queue). The items stored on the queues are pointers to processes. You can think of the Entry queue as the queue where the processes reside (e.g. on disk swap space) before they are loaded into memory.
You can also have variables Active, IActive and OActive (pointers to processes), which point to the active processes on the CPU, the input device and the output device, respectively.
Constants:
You can declare the following constants:
MAX_TIME is an integer = length of the whole run. Use the value 500.
IN_USE is an integer = maximum number of processes that can be in play at once (that is, Active/IOActive processes if any, plus those that are in Ready/IO queues). Use the value 5.
HOW_OFTEN is an integer indicating how often to reprint the state of the system. Use the value 25.
You may declare some other optional constants. They are not necessary if you choose to use STL:
QUEUE_SIZE is an integer guaranteed larger than the maximum number of items any queue will ever hold. Use the value 20.
ARRAY_SIZE is an integer = size of the array to define in a process. It is the maximum number of bursts for a process. Use the value 10.
Feel free to add more constants as you see fit.
The process data structure:
A process needs to contain (at least) the following data:
ProcessName is the name of the process, a string.
ProcessID is an integer, the ID number for the process. This is assigned by the system (i.e., your program). Use consecutive values such as 101, 102, 103, etc.
History is an array or vector of pairs of the form (letter, value). They are from the supplied input file, described below.
Sub is a subscript into the array/vector History.
CPUTimer counts clock ticks for the process until it reaches the end of the CPU burst for FCFS (or end of the quantum for RR).
IOTimer counts clock ticks for the process until it reaches the end of the I/O burst. You need t.
This document discusses automated production lines, also called transfer lines or transfer machines. It provides three key points:
1. Automated production lines consist of multiple linked workstations that perform processing operations like machining on parts. Parts are transferred between stations by a mechanized material handling system.
2. Transfer lines are appropriate for high production demand of parts requiring multiple operations, with stable designs and long product lives. They provide benefits like low labor costs and high production rates.
3. Storage buffers between workstations can reduce the impact of breakdowns and allow production to continue. The effectiveness of buffers depends on their capacity, providing some protection even with small buffers but maximum benefits with unlimited capacity buffers.
The document describes a manufacturing process for a new product line consisting of 50 models produced annually. Each model requires 400 components produced through a 6-step process taking 1 minute per step. The factory operates one shift and requires 1000 workers and 250,000 square feet of floorspace based on the production details provided.
The document discusses different types of manufacturing systems. It defines a manufacturing system as a collection of integrated equipment and human resources that performs processing and/or assembly operations on raw materials or parts. There are several types of manufacturing systems classified based on factors like the type of operations, number of workstations, level of automation, and ability to handle product variety. Common types include single workstation systems, multiple workstation systems with variable or fixed routing between stations, and systems that are manually operated, semi-automated, or fully automated.
This document provides specifications for a programming assignment to simulate operating system processes. Students will write a program comparing at least two scheduling algorithms. The simulation will include queues, a CPU, and I/O device. It will process a stream of jobs with various CPU and I/O bursts. Output will include statistics on wait times, throughput, and more to compare algorithm performance. Extra credit options include additional algorithms, preemption, multiple CPUs/devices, and enhanced reporting.
This document discusses a wireless link model simulation code. It examines the code and answers questions about what various parts of the code represent, such as IATM representing the inter-arrival time distribution and SVTM representing the service time distribution. The document also discusses running the simulation with different parameters, examining the results, and determining how increasing the number of arrivals affects convergence of the simulation results to the analytic results.
This document provides information about line balancing for a textile production process. It begins with an introduction to line balancing and definitions. It then discusses specific methods for balancing a production line, including determining the number of operators needed, work-in-process inventory levels, and standard minute values. The document provides examples of time studies, production data collection, and calculating key metrics like pitch time and bottleneck processes. The goal is to design an optimized production flow to improve throughput and reduce costs.
Advanced C Programming
You are required to study and understand the underlying concepts of advanced C used in the examples below. You are also required to compile and execute the programs and capture the output generated by each program.
Similar to Decision policies for production networks (20)
The chemistry of the actinide and transactinide elements (set vols. 1-6) (Springer)
Actinium is the first member of the actinide series of elements according to its electronic configuration. Actinium closely resembles lanthanum chemically. The three most important isotopes of actinium are 227Ac, 228Ac, and 225Ac. 227Ac is a naturally occurring isotope in the uranium-actinium decay series with a half-life of 21.772 years. 228Ac is in the thorium decay series with a half-life of 6.15 hours. 225Ac is produced from 233U with applications in medicine.
Transition metal catalyzed enantioselective allylic substitution in organic s... (Springer)
This document provides an overview of computational studies of palladium-mediated allylic substitution reactions. It discusses the history and development of quantum mechanical and molecular mechanical methods used to study the structures and reactivity of allyl palladium complexes. In particular, density functional theory methods like B3LYP have been widely used to study reaction mechanisms and factors controlling selectivity. Continuum solvation models have also been important for properly accounting for reactions in solvent.
1) Ranchers in Idaho observed lambs born with cyclopia (one eye) due to ewes grazing on corn lily plants. Cyclopamine was identified as the compound responsible and was later found to inhibit the Hedgehog signaling pathway.
2) Nakiterpiosin and nakiterpiosinone were isolated from cyanobacterial sponges and shown to inhibit cancer cell growth. Their unique C-nor-D-homosteroid skeleton presented synthetic challenges.
3) The authors developed a convergent synthesis of nakiterpiosin involving a carbonylative Stille coupling and a photo-Nazarov cyclization. Model studies led them to propose a revised structure for n
This document reviews solid-state NMR techniques that have been used to determine the molecular structures of amyloid fibrils. It discusses five categories of NMR techniques: 1) homonuclear dipolar recoupling and polarization transfer via J-coupling, 2) heteronuclear dipolar recoupling, 3) correlation spectroscopy, 4) recoupling of chemical shift anisotropy, and 5) tensor correlation methods. Specific techniques described include rotational resonance, dipolar dephasing, constant-time dipolar dephasing, REDOR, and fpRFDR-CT. These techniques have provided insights into the hydrogen-bond registry, spatial organization, and backbone torsion angles of amyloid fibrils.
This document discusses principles of ionization and ion dissociation in mass spectrometry. It covers topics like ionization energy, processes that occur during electron ionization like formation of molecular ions and fragment ions, and ionization by energetic electrons. It also discusses concepts like vertical transitions, where electronic transitions occur much faster than nuclear motions. The document provides background information on fundamental gas phase ion chemistry concepts in mass spectrometry.
Higher oxidation state organopalladium and platinum (Springer)
This document discusses the role of higher oxidation state platinum species in platinum-mediated C-H bond activation and functionalization. It summarizes that the original Shilov system, which converts alkanes to alcohols and chloroalkanes under mild conditions, involves oxidation of an alkyl-platinum(II) intermediate to an alkyl-platinum(IV) species by platinum(IV). This "umpolung" of the C-Pt bond facilitates nucleophilic attack and product formation rather than simple protonolysis back to alkane. Subsequent work has validated this mechanism and also demonstrated that platinum(IV) can be replaced by other oxidants, as long as they rapidly oxidize the
Principles and applications of esr spectroscopySpringer
- Electron spin resonance (ESR) spectroscopy is used to study paramagnetic substances, particularly transition metal complexes and free radicals, by applying a magnetic field and measuring absorption of microwave radiation.
- ESR spectra provide information about electronic structure such as g-factors and hyperfine couplings by measuring resonance fields. Pulse techniques also allow measurement of dynamic properties like relaxation.
- Paramagnetic species have unpaired electrons that create a magnetic moment. ESR detects transition between spin energy levels induced by microwave absorption under an applied magnetic field.
This document discusses crystal structures of inorganic oxoacid salts from the perspective of periodic graph theory and cation arrays. It analyzes 569 crystal structures of simple salts with the formulas My(LO3)z and My(XO4)z, where M are metal cations, L are nonmetal triangular anions, and X are nonmetal tetrahedral anions. The document finds that in about three-fourths of the structures, the cation arrays are topologically equivalent to binary compounds like NaCl, NiAs, and FeB. It proposes representing these oxoacid salts as a quasi-binary model My[L/X]z, where the cation arrays determine the crystal structure topology while the oxygens play a
Field flow fractionation in biopolymer analysisSpringer
This document summarizes a study that uses flow field-flow fractionation (FlFFF) to measure initial protein fouling on ultrafiltration membranes. FlFFF is used to determine the amount of sample recovered from membranes and insights into how retention times relate to the distance of the sample layer from the membrane wall. It was observed that compositionally similar membranes from different companies exhibited different sample recoveries. Increasing amounts of bovine serum albumin were adsorbed when the average distance of the sample layer was less than 11 mm. This information can help establish guidelines for flow rates to minimize fouling during ultrafiltration processes.
1) The document discusses phonons, which are quantized lattice vibrations in crystals that carry thermal energy. It describes modeling crystal vibrations using a harmonic lattice approach.
2) Normal modes of the lattice vibrations can be described as a set of independent harmonic oscillators. Quantum mechanically, these normal modes are quantized as phonons with discrete energy levels.
3) Phonons can be thought of as quasiparticles that carry momentum and energy in the crystal lattice. Their propagation is described using a phonon field approach rather than independent normal modes.
This chapter discusses 3D electroelastic problems and applied electroelastic problems. For 3D problems, it presents the potential function method for solving problems involving a penny-shaped crack and elliptic inclusions. It derives the governing equations and introduces potential functions to obtain the general static and dynamic solutions. For applied problems, it discusses simple electroelastic problems, laminated piezoelectric plates using classical and higher-order theories, and piezoelectric composite shells. It also presents a unified first-order approximate theory for electro-magneto-elastic thin plates.
Tensor algebra and tensor analysis for engineersSpringer
This document discusses vector and tensor analysis in Euclidean space. It defines vector- and tensor-valued functions and their derivatives. It also discusses coordinate systems, tangent vectors, and coordinate transformations. The key points are:
1. Vector- and tensor-valued functions can be differentiated using limits, with the derivatives being the vector or tensor equivalent of the rate of change.
2. Coordinate systems map vectors to real numbers and define tangent vectors along coordinate lines.
3. Under a change of coordinates, components of vectors and tensors transform according to the Jacobian of the coordinate transformation to maintain geometric meaning.
This document provides a summary of carbon nanofibers:
1) Carbon nanofibers are sp2-based linear filaments with diameters of around 100 nm that differ from continuous carbon fibers which have diameters of several micrometers.
2) Carbon nanofibers can be produced via catalytic chemical vapor deposition or via electrospinning and thermal treatment of organic polymers.
3) Carbon nanofibers exhibit properties like high specific area, flexibility, and strength due to their nanoscale diameters, making them suitable for applications like energy storage electrodes, composite fillers, and bone scaffolds.
Shock wave compression of condensed matterSpringer
This document provides an introduction and overview of shock wave physics in condensed matter. It discusses the assumptions made in treating one-dimensional plane shock waves in fluids and solids. It briefly outlines the history of the field in the United States, noting that accurate measurements of phase transitions from shock experiments established shock physics as a discipline and allowed development of a pressure calibration scale for static high pressure work. It describes some of the practical applications of shock wave experiments for providing high-pressure thermodynamic data, understanding explosive detonations, calibrating pressure scales, and enabling studies of materials under extreme conditions.
Polarization bremsstrahlung on atoms, plasmas, nanostructures and solidsSpringer
This document discusses the quantum electrodynamics approach to describing bremsstrahlung, or braking radiation, of a fast charged particle colliding with an atom. It derives expressions for the amplitude of bremsstrahlung on a one-electron atom within the first Born approximation. The amplitude has static and polarization terms. The static term corresponds to radiation from the incident particle in the nuclear field, reproducing previous results. The polarization term accounts for radiation from the atomic electron and contains resonant denominators corresponding to intermediate atomic states. The full treatment allows various limits to be taken, such as removing the nucleus or atomic electron, reproducing known results from quantum electrodynamics.
Nanostructured materials for magnetoelectronicsSpringer
This document discusses experimental approaches to studying magnetization and spin dynamics in magnetic systems with high spatial and temporal resolution.
It describes using time-resolved X-ray photoemission electron microscopy (TR-XPEEM) to image the temporal evolution of magnetization in magnetic thin films with picosecond time resolution. Results are presented showing the changing domain structure in a Permalloy thin film following excitation with a magnetic field pulse. Different rotation mechanisms are observed depending on the initial orientation of the magnetization with respect to the applied field.
A novel pump-probe magneto-optical Kerr effect technique using higher harmonic generation is also discussed for addressing spin dynamics in magnetic systems with femtosecond time resolution and element selectivity.
This document discusses nanomaterials for biosensors and implantable biodevices. It describes how nanostructured thin films have enabled the development of more sensitive electrochemical biosensors by improving the detection of specific molecules. Two common techniques for creating nanostructured thin films are described - Langmuir-Blodgett films and layer-by-layer films. These techniques allow for the precise control of film thickness at the nanoscale and have been used to immobilize biomolecules like enzymes to create biosensors. Recent research is also exploring how these nanostructured films and biomolecules can be used to create implantable biosensors for real-time monitoring inside the body.
Modern theory of magnetism in metals and alloysSpringer
This document provides an introduction to magnetism in solids. It discusses how magnetic moments originate from electron spin and orbital angular momentum at the atomic level. In solids, electron localization determines whether magnetic properties are described by localized atomic moments or collective behavior of delocalized electrons. The key concepts of metals and insulators are introduced. The document then presents the basic Hamiltonian used to describe magnetism in solids, including terms for kinetic energy, electron-electron interactions, spin-orbit coupling, and the Zeeman effect. It also discusses how atomic orbitals can be used as a basis set to represent the Hamiltonian and describes the symmetry properties of s, p, and d orbitals in cubic crystals.
This chapter introduces and classifies various types of damage that can occur in structures. Damage can be caused by forces, deformations, aggressive environments, or temperatures. It can occur suddenly or over time. The chapter discusses different damage mechanisms including corrosion, excessive deformation, plastic instability, wear, and fracture. It also introduces concepts that will be covered in more detail later such as damage mechanics, fracture mechanics, and the influence of microstructure on damage and fracture. The chapter aims to provide an overview of damage types before exploring specific mechanisms and analyses in later chapters.
This document summarizes research on identifying spin-wave eigen-modes in a circular spin-valve nano-pillar using Magnetic Resonance Force Microscopy (MRFM). Key findings include:
1) Distinct spin-wave spectra are observed depending on whether the nano-pillar is excited by a uniform in-plane radio-frequency magnetic field or by a radio-frequency current perpendicular to the layers, indicating different excitation mechanisms.
2) Micromagnetic simulations show the azimuthal index φ is the discriminating parameter, with only φ=0 modes excited by the uniform field and only φ=+1 modes excited by the orthogonal current-induced Oersted field.
3) Three indices are used to label resonance
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Fig.1 Basic quantities for manufacturing systems: wip w [lots], throughput δ [lots/hour], flow time ϕ [hours], and raw process time t0 [hours], shown at the factory level and at the machine level (with a buffer in front of the machine)
Raw process time t0 of a lot denotes the net time a machine needs to process
the lot. This process time excludes additions such as setup
time, breakdown, or other sources that may increase the
time a lot spends in the machine. The raw process time
is typically measured in hours or minutes.
Throughput δ denotes the number of lots per unit time that leaves the
manufacturing system. At a machine level, this denotes
the number of lots that leave a machine per unit time. At
a factory level it denotes the number of lots that leave the
factory per unit time. The unit of throughput is typically
lots/hour.
Flow time ϕ denotes the time a lot is in the manufacturing system. At
a factory level this is the time from the release of the lot
into the factory until the finished lot leaves the factory. At
a machine level this is the time from entering the machine
(or the buffer in front of the machine) until leaving the
machine. Flow time is typically measured in days, hours,
or minutes. Instead of flow time the words cycle time and
throughput time are also commonly used.
Work in process (wip) w denotes the total number of lots in the manufacturing sys-
tem, i.e., in the factory or in the machine. Wip is measured
in lots.
Utilization u denotes the fraction of time that a machine is not idle.
A machine is considered idle if it could start processing
a new lot. Thus process time as well as downtime, setup-
time and preventive maintenance time all contribute to the
utilization. Utilization has no dimension and can never
exceed 1.0.
Fig.2 Basic relations between basic quantities for manufacturing systems: wip w and flow time ϕ as functions of throughput δ
Ideally, a manufacturing system should have both a high throughput and a low
flow time or low wip. Unfortunately, these goals are conflicting (cf. Fig.2) and cannot
both be met simultaneously. If a high throughput is required, machines should
always be busy. Since disturbances like machine failures happen from time to time,
buffers between two consecutive machines are required to make sure that the second
machine can still continue if the first machine fails (or vice versa). Therefore, for a
high throughput many lots are needed in the manufacturing system, i.e., wip needs
to be high. As a result, a lot that newly starts in the system has a large flow time, since
all lots that are currently in the system need to be completed first.
Conversely, the least possible flow time can be achieved if a lot arrives at a
completely empty system and never has to wait before processing takes place. As
a result, the wip level is small. However, for most of the time machines are not
processing, yielding a small throughput.
When trying to control manufacturing systems, a trade-off needs to be made
between throughput and flow time, so the nonlinear (steady state) relations depicted
in Fig.2 need to be incorporated in any reasonable model of manufacturing systems.
We return to this in Sect.4.1 when discussing clearing functions.
1.2 Analytical Models for Steady-State Analysis
In order to get some insight into the steady-state performance of a given manufacturing
system, simple relations can be used. We first deal with mass conservation
for determining the mean utilization of workstations and the number of machines
required for meeting a required throughput. Furthermore, relations from queueing
theory are used to obtain estimates for the mean wip and the mean flow time.
Fig.3 Manufacturing system with rework and bypassing: buffers B1, B2, B3 and machines M1, M2, M3, M4, with release rate λ [lots/hour]; process times t0 [hrs]: M1: 2.0, M2: 6.0, M3: 1.8, M4: 1.6
1.2.1 Mass Conservation (Throughput)
Using mass conservation, the mean utilization of workstations can easily be determined.
Example 1 Consider the manufacturing system with rework and bypassing in Fig.3.
The manufacturing system consists of three buffers and four machines. Lots are
released at a rate of λ lots/hour. The numbers near the arrows indicate the fraction of
the lots that follow that route. For instance, of the lots leaving buffer B1, 90% go to
machine M1 and 10% go to buffer B3. The process time of each machine is listed
in the table in Fig.3.
Let δMi and δBi denote the throughput of machine Mi (i = 1, 2, 3, 4) and buffer
Bi (i = 1, 2, 3), respectively. Using mass conservation we obtain
δM1 = 0.9δB1 δB1 = λ
δM2 = 0.2δB2 δB2 = δM1 + δM2
δM3 = 0.8δB2 δB3 = δM3 + 0.1δB1
δM4 = δB3 δ = δM4 .
Solving these linear relations results in:
δM1 = 0.9λ δB1 = λ
δM2 = 0.225λ δB2 = 1.125λ
δM3 = 0.9λ δB3 = λ
δM4 = λ δ = λ.
Using the process times of the table in Fig.3, we obtain for the utilizations:
uM1 = 0.9λ · 2.0/1 = 1.8λ uM3 = 0.9λ · 1.8/1 = 1.62λ
uM2 = 0.225λ · 6.0/1 = 1.35λ uM4 = λ · 1.6/1 = 1.6λ.
Machine M1 has the highest utilization, therefore it is the bottleneck and the maximal
throughput for this line is λ = 1/1.8 = 0.56 lots per hour.
Using mass conservation, utilizations of workstations can be determined straightforwardly.
This also provides a way for determining the number of machines required
for meeting a given throughput. By modifying the given percentages, the effect of
rework or a change in product mix can also be studied.
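To make this bookkeeping concrete, the following sketch (not part of the chapter; plain Python with NumPy) solves the mass-conservation relations of Example 1 numerically, using the routing fractions and process times of Fig.3, and identifies the bottleneck.

```python
import numpy as np

# Unknown throughputs per released lot (i.e., lambda = 1):
# [dM1, dM2, dM3, dM4, dB1, dB2, dB3]
A = np.array([
    [0,  0,  0, 0,  1,    0,   0],   # dB1 = lambda
    [1,  0,  0, 0, -0.9,  0,   0],   # dM1 = 0.9 dB1
    [0,  1,  0, 0,  0,   -0.2, 0],   # dM2 = 0.2 dB2
    [-1, -1, 0, 0,  0,    1,   0],   # dB2 = dM1 + dM2
    [0,  0,  1, 0,  0,   -0.8, 0],   # dM3 = 0.8 dB2
    [0,  0, -1, 0, -0.1,  0,   1],   # dB3 = dM3 + 0.1 dB1
    [0,  0,  0, 1,  0,    0,  -1],   # dM4 = dB3
])
b = np.array([1, 0, 0, 0, 0, 0, 0], dtype=float)
dM1, dM2, dM3, dM4, dB1, dB2, dB3 = np.linalg.solve(A, b)

t0 = {"M1": 2.0, "M2": 6.0, "M3": 1.8, "M4": 1.6}          # process times [hrs]
# Utilization per unit release rate (one machine per workstation).
util = {"M1": dM1 * t0["M1"], "M2": dM2 * t0["M2"],
        "M3": dM3 * t0["M3"], "M4": dM4 * t0["M4"]}

bottleneck = max(util, key=util.get)
print(util)                                # {'M1': 1.8, 'M2': 1.35, 'M3': 1.62, 'M4': 1.6}
print(bottleneck, 1 / util[bottleneck])    # M1, maximal throughput about 0.56 lots/hour
```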
Fig.4 Single machine workstation: infinite buffer B∞ with arrivals characterized by (ta, ca²), machine M with process times characterized by (t0, c0²), and interdeparture times characterized by td
1.2.2 Queueing Relations (Wip, Flow time)
For determining a rough estimate of the corresponding mean flow time and mean
wip, basic relations from queueing theory can be used.
Consider a single machine workstation that consists of infinite buffer B∞ and
machine M (see Fig.4). Lots arrive at the buffer with a stochastic interarrival time.
The interarrival time distribution has mean ta and standard deviation σa, which we
characterize by the coefficient of variation ca = σa/ta. The machine has stochastic
process times, with mean process time t0 and coefficient of variation c0. Finished lots
leave the machine with a stochastic interdeparture time, with mean td and coefficient
of variation cd. Assuming independent interarrival times and independent process
times, the mean waiting time ϕB in buffer B can be approximated for a stable system
by means of Kingman’s equation [10]:
ϕB = ((ca² + c0²)/2) · (u/(1 − u)) · t0    (1)
with the utilization u defined by u = t0/ta. Equation (1) is exact for an M/G/1 system,
i.e., a single machine workstation with exponentially distributed interarrival times
and any distribution for the process time. For other single machine workstations it is
an approximation.
For a stable system, we have td = ta. We can approximate the coefficient of
variation cd by Kuehn’s linking equation [11]:
cd² = (1 − u²) · ca² + u² · c0².    (2)
This result is exact for an M/M/1 system. For other single machine workstations it
is an approximation. Having characterized the departure process of a workstation,
the arrival process at the next workstation has been characterized as well. As a result,
a line of workstations can also be described.
Example 2 (Three workstations in series) Consider the three workstation flow line
in Fig.5. For the interarrival time at workstation 0 we have ta = 4.0 h and ca² = 1.
The three workstations are identical with respect to the process times: t0,i = 3.0 h
and c0,i² = 0.5 for i = 0, 1, 2. We want to determine the mean total flow time per lot.
Since ta > t0,i for i = 0, 1, 2, we have a stable system and ta,i = td,i = 4.0 h
for i = 0, 1, 2. Consequently, the utilization of each workstation is ui = 3.0/4.0 =
0.75 for i = 0, 1, 2.
Fig.5 Three workstation flow line: buffers B∞ and machines M0, M1, M2 in series, where the departure process of each workstation forms the arrival process of the next (td,i = ta,i+1, cd,i = ca,i+1)
Using (1) we calculate the mean flow time for workstation 0:
ϕ0 = ϕB + t0 = ((ca² + c0²)/2) · (u/(1 − u)) · t0 + t0 = ((1 + 0.5)/2) · (0.75/(1 − 0.75)) · 3.0 + 3.0 = 9.75 h.
Using (2), we determine the coefficient of variation of the interarrival times ca,1 for workstation 1:
ca,1² = cd,0² = (1 − u²) · ca² + u² · c0² = (1 − 0.75²) · 1 + 0.75² · 0.5 = 0.719
and the mean flow time for workstation 1
ϕ1 = ((0.719 + 0.5)/2) · (0.75/(1 − 0.75)) · 3.0 + 3.0 = 8.49 h.
In a similar way, we determine that ca,2² = 0.596 and ϕ2 = 7.93 h. We then calculate
the mean total flow time to be
ϕtot = ϕ0 + ϕ1 + ϕ2 = 26.2 h.
Note that the minimal flow time without variability (ca² = c0,i² = 0) equals 9.0 h.
Equations (1) and (2) apply to the particular case of a workstation consisting of a
single machine. For workstations consisting of m identical machines in parallel, the
following approximations can be used [8, 16]:
ϕB = ((ca² + c0²)/2) · (u^(√(2(m+1)) − 1) / (m(1 − u))) · t0    (3)
cd² = (1 − u²) · ca² + u² · (c0² + √m − 1)/√m,    (4)
where the utilization u = t0/(m · ta). Notice that in case m = 1 these equations
reduce to (1) and (2).
Once the mean flow time has been determined, a third basic relation from queueing
theory, Little’s law [14], can be used for determining the mean wip level. Little’s law
states that the mean wip level (number of lots in a manufacturing system) w is equal
to the product of the mean throughput δ and the mean flow time ϕ, provided the
system is in steady state:
w = δ · ϕ. (5)
An example illustrates how Kingman’s equation and Little’s law can be used.
Example 3 Consider the system of Example 2 as depicted in Fig.5. From
Example 2 we know that the flow times for the three workstations are respectively
ϕ0 = 9.75 h, ϕ1 = 8.49 h, ϕ2 = 7.93 h.
Since the steady-state throughput was assumed to be δ = 1/ta = 1/4.0 = 0.25
lots/hour, we obtain via Little’s law
w0 = 0.25 · 9.75 = 2.44 lots,
w1 = 0.25 · 8.49 = 2.12 lots,
w2 = 0.25 · 7.93 = 1.98 lots.
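The following sketch (not from the chapter) packages Eqs. (1)–(5), in their multi-machine form (3) and (4), into a small Python function and reproduces the numbers of Examples 2 and 3.

```python
from math import sqrt

def workstation(ta, ca2, t0, c02, m=1):
    """Approximate one workstation via (3)-(5): returns (flow time, wip, cd^2)."""
    u = t0 / (m * ta)                                   # utilization, must be < 1
    phi_B = (ca2 + c02) / 2 * u ** (sqrt(2 * (m + 1)) - 1) / (m * (1 - u)) * t0
    phi = phi_B + t0                                    # mean flow time [hours]
    cd2 = (1 - u ** 2) * ca2 + u ** 2 * (c02 + sqrt(m) - 1) / sqrt(m)   # linking (4)
    wip = phi / ta                                      # Little's law: w = delta * phi
    return phi, wip, cd2

# Examples 2 and 3: three identical single-machine workstations in series.
ta, ca2 = 4.0, 1.0
total_phi = 0.0
for i in range(3):
    # The departure coefficient of variation cd^2 becomes ca^2 of the next station.
    phi, wip, ca2 = workstation(ta, ca2, t0=3.0, c02=0.5)
    total_phi += phi
    print(f"workstation {i}: phi = {phi:.2f} h, wip = {wip:.2f} lots")
print(f"total flow time = {total_phi:.1f} h")           # about 26.2 h
```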
The above mentioned relations are simple approximations that can be used for
getting a rough idea about the possible performance of a manufacturing system.
These approximations are fairly accurate for high utilizations but less accurate for
lower degrees of utilization. A basic assumption when using these approximations
is the independence of the interarrival times, which in general is not the case, e.g.,
for merging streams of lots. Furthermore, with these equations only steady-state
behavior can be analyzed. For studying things like ramp-up behavior, or for incorporating
more details like operator behavior, more sophisticated models are needed,
as described next.
1.3 Discrete Event Models
A final observation of relevance for modeling manufacturing systems is the nature
of the system signals. In Fig.6 a characteristic graph of the wip at a workstation as
a function of time is shown. Wip always takes integer values with arbitrary (non-
negative real) duration. One could consider a manufacturing system to be a system
that takes values from a finite set of states and jumps from one state to the other as
time evolves. This jump from one state to the other is called an event. As we have a
countable (discrete) number of states, the name of this class of models is explained.
Manufacturing systems can be modeled as a network of concurrent processes. For
example, a buffer is modeled as a process that is willing to receive new products as
long as it has space to store them, and is willing to send products as long as it has
something stored. A basic machine is modeled as a process that waits to receive a
product; upon receipt it holds the product for a specified amount of time (delay).
Upon completion, the machine tries to send the product to the next buffer in the
manufacturing line. The machine keeps repeating these three consecutive steps. The
delay used is often a sample from some distribution.
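These buffer and machine processes map almost one-to-one onto a process-oriented discrete event simulator. Below is a minimal sketch (not from the chapter), assuming the third-party SimPy package is available; the exponential distributions and all parameter values are purely illustrative.

```python
import random
import simpy  # process-oriented discrete event simulation package (assumed installed)

def generator(env, buffer, mean_ta=4.0):
    """Release a new lot into the first buffer with exponential interarrival times."""
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_ta))
        i += 1
        yield buffer.put(f"lot-{i}")

def machine(env, buffer_in, buffer_out, mean_t0=3.0):
    """Receive a lot, hold it for a sampled delay, then send it to the next buffer."""
    while True:
        lot = yield buffer_in.get()                            # wait for an upstream lot
        yield env.timeout(random.expovariate(1.0 / mean_t0))   # process time sample
        yield buffer_out.put(lot)                              # blocks if downstream is full

env = simpy.Environment()
b1 = simpy.Store(env)              # infinite buffer in front of the machine
b2 = simpy.Store(env, capacity=5)  # finite buffer behind the machine
env.process(generator(env, b1))
env.process(machine(env, b1, b2))
env.run(until=200.0)
print(len(b2.items), "lots in the downstream buffer at t = 200")
```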
In particular in the design phase, discrete event models are used. These discrete
event models usually contain a detailed description of everything that happens in
the manufacturing system under consideration, resulting in large models. Since in
practice manufacturing systems are changing continuously, it is very hard to keep
these discrete event models up-to-date [4].
Fig.6 A characteristic time-behavior of wip at a workstation (wip versus time)
Fortunately, for a manufacturing system in operation it is possible to arrive at
simpler, less detailed discrete event models by using the concept of Effective
Process Times (EPTs), as discussed in the next section.
2 Effective Process Times (EPTs)
For the processing of a lot at a machine, many steps may be required. For example,
an operator may need to get the lot from a storage device, set up a specific
tool that is required for processing the lot, put the lot on an available machine, start
a specific program for processing the lot, wait until this processing has finished
(meanwhile doing something else), inspect the lot to determine whether all went well,
possibly perform some additional processing (e.g., rework), remove the lot from the
machine, put it on another storage device, and transport it to the next machine.
At all of these steps something might go wrong: the operator might not be available,
after setting up the machine the operator finds out that the required recipe cannot be
run on this machine, the machine might fail during processing, no storage device is
available anymore so the machine cannot be unloaded and is blocked, etc.
Even though one might build a discrete event model including all these
details, it is impossible to measure all sources of variability that might occur in
a manufacturing system. One might measure some of them and incorporate these in
a discrete event model. The number of operators and tools can be modeled explicitly
and it is common practice to collect data on mean times to failure and mean times to
repair of machines. Also schedules for (preventive) maintenance can be incorporated
explicitly in a discrete event model. Nevertheless, still not all sources of variability
are included. This is clearly illustrated in Fig.7, obtained from [9]. The left graph
contains actual realizations of flow times of lots leaving a real manufacturing system,
whereas the right graph contains the results of a detailed discrete event simulation
model including stochasticity. It turns out that in reality flow times are much higher
and much more irregular than simulation predicts. So, even if one endeavors to capture
all variability present in a manufacturing system, the outcome predicted by the
model is still far from reality.
Fig.7 A comparison of flow times measured in a real manufacturing system (left) and flow times predicted by a detailed discrete event simulation model (right), obtained from [9]
Hopp and Spearman [8] use the term Effective Process Time (EPT) as the time seen
by lots from a logistical point of view. In order to determine this Effective Process
Time, Hopp and Spearman assume that the contribution of the individual sources of
variability is known.
Instead of taking the bottom-up view of Hopp and Spearman, a top-down approach
can also be taken, as shown by Jacobs et al. [9], where algorithms have been introduced
that enable determination of Effective Process Time realizations from a list of
events. For these algorithms, the basic idea that the Effective Process Time includes
all time losses was used as a starting point.
To illustrate this approach, we first deal with a workstation consisting of a single
machine, serving one lot type, using a First In First Out (FIFO) policy. Then we deal
with the more general case.
2.1 Single Machine, One Lot Type, FIFO Policy
Consider a workstation consisting of a single machine, serving one lot type, using a
First In First Out (FIFO) policy. Let the Gantt chart of Fig.8 depict what happened
at this workstation during a certain time interval. At t = 0 the first lot arrives at the
workstation. After a setup, the processing of the lot starts at t = 2 and is completed at
t = 6. At t = 4 the second lot arrives at the workstation. At t = 6 this lot could have
been started, but apparently no operator was available, so only at t = 7 the setup for
this lot starts. Eventually, at t = 8 the processing of the lot starts and is completed at
t = 12. The fifth lot arrives at the workstation at t = 22, processing starts at t = 24,
but at t = 26 the machine breaks down. It takes until t = 28 before the machine
has been repaired and the processing of the fifth lot continues. The processing of the
fifth lot is completed at t = 30.
Fig.8 Gantt chart of 5 lots at a single machine workstation (legend: setup, processing, queueing, waiting for operator, machine breakdown)
Fig.9 EPT realizations of 5 lots at a workstation (EPT 1 through EPT 5 on the same time axis)
If we take the point of view of a lot, what does a lot see from a logistical point of
view? The first lot arrives at an empty system at t = 0 and departs from this system
at t = 6. From the point of view of this lot, its processing took 6 time-units. The
second lot arrives at a non-empty system at t = 4. Clearly, this lot needs to wait.
However, at t = 6, if we forget about the second lot, the system becomes empty
again. So from t = 6 on the second lot does not need to wait anymore. At t = 12 the
second lot leaves the system, so from the point of view of this lot, its processing took
from t = 6 till t = 12; the lot does not know whether waiting for an operator and a
setup is part of its processing. Similarly, the third lot sees no need for waiting after
t = 12 and leaves the system at t = 17, so it assumes to have been processed from
t = 12 till t = 17. Following this reasoning, the resulting Effective Process Times for
the lots are as depicted in Fig.9. Notice that only arrival and departure events of lots at
a workstation are needed for determining the Effective Process Times. Furthermore,
none of the contributing disturbances needs to be measured.
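This reasoning translates directly into a small algorithm: a lot's EPT starts at the later of its own arrival and the previous lot's departure, and ends at its own departure. The sketch below is not taken from [9]; the times for lots 1, 2 and 5 are those given in the text, while the times for lots 3 and 4 are assumed here purely for illustration.

```python
def ept_realizations(arrivals, departures):
    """Effective Process Times for a single-machine FIFO workstation.

    Each lot's EPT runs from the moment the lot could have started (its own
    arrival, or the departure of the previous lot, whichever is later) until its
    own departure. Only arrival and departure events are needed.
    """
    epts = []
    prev_departure = float("-inf")
    for a, d in zip(arrivals, departures):
        start = max(a, prev_departure)
        epts.append(d - start)
        prev_departure = d
    return epts

# Lots 1, 2 and 5: arrivals 0, 4, 22 and departures 6, 12, 30 (from the text);
# the times of lots 3 and 4 are assumed for illustration.
arrivals   = [0, 4, 10, 16, 22]
departures = [6, 12, 17, 22, 30]
print(ept_realizations(arrivals, departures))   # [6, 6, 5, 5, 8]
```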
In highly automated manufacturing systems, arrival and departure events of lots
are being registered, so for these manufacturing systems, Effective Process Time
realizations can be determined rather easily. Next, these EPT realizations can be used
in a relatively simple discrete event model of the manufacturing system. This discrete
event model only contains the architecture of the manufacturing system, buffers
and machines. The process times of these machines are samples from their EPT-
distribution as measured from real manufacturing data. Machine failures, operators,
etc., do not need to be included as this is all included in the EPT-distributions.
Furthermore, the algorithms provided in [9] are utilization independent. That
is, data collected at a certain throughput rate is also valid for different throughput
rates. In addition, since EPT-realizations characterize operational time variability,
they can be used for performance measurement. For more on this issue, the interested
reader is referred to [9].
Recently, the above mentioned EPT-model has been generalized. This generalization is presented next.
Fig.10 a An example of a workstation. b The proposed aggregate model
2.2 Integrated Processing Workstations
Consider an integrated processing workstation consisting of m identical parallel
machines, each of which has l sequential integrated processes, cf. Fig.10. We
replace the model of this workstation with a much simpler model, which is not
a true physical server anymore, i.e., the structure of the aggregate model differs
significantly from the real workstation. Nevertheless, the input/output behavior of
the aggregate model closely resembles the input/output behavior of the worksta-
tion it models. Lots arrive according to some arrival process to the queue of the
aggregate model. Lot i is defined as the ith arriving lot in this queue. The queue
consists of all lots that are currently in the system, including lots that are (supposed
to be) in process. Therefore, the queue is not a queue as in common queue-server
models. Lots are not physically processed, i.e., during “processing” lots stay in the
queue. Processing is modeled as a timer that determines when the next lot leaves the
queue. When the timer expires, i.e., the “process time” has elapsed, the lot that is
currently first in the queue leaves the system. Upon arrival of a new lot i, it is deter-
mined how many of the lots already present in the queue w are overtaken by lot i.
The number of lots to overtake K ∈ {0, 1, . . . , w} is sampled from a probability
distribution which depends on the number of lots w in the queue just before lot i
arrives. The arriving lot is placed on position w−K in the queue, where position 0
corresponds with the head of the queue. The timer starts when either a lot arrives to
an empty system, or a lot departs while leaving one or more lots behind. The duration
of the “process time” is sampled from a distribution which depends on the number of
lots w in the queue just after the timer starts, i.e., including a possibly newly arrived
lot. We model the server as a timer to allow newly arriving lots to overtake all lots
in the system while the timer is running. We need this to model the possibility that
a lot which arrives second to a multi-machine workstation leaves first.
Fig.11 Gantt chart of 4 lots at a three machine workstation, and the corresponding realization for the aggregate model
Example 4 Consider the Gantt chart in Fig.11 which depicts what happened at a
three machine workstation. At t = 1, the first lot arrives at the workstation, service
at machine 1 is started, and service is completed at t = 25. At t = 2, the second lot
arrives at the workstation, service at machine 2 is started, and service is completed at
t = 14. At t = 4, the third lot arrives at the workstation. For some reason it is
not served at machine 3, but it waits to be served at machine 2. Its service at
machine 2 (effectively) starts at t = 14 and is completed at t = 20. Finally, the
fourth lot arrives at the workstation at t = 5, is served at machine 3, and leaves the
system at t = 9.
In the aggregate model we model the resulting input-output behavior of this system
differently. At t = 1, the first lot arrives and a timer is set, which expires at t = 9.
Meanwhile, the second lot arrives at t = 2 and is inserted at the head of the queue.
Next, the third lot arrives at t = 4, and is inserted in the middle of the queue, i.e.,
behind lot 2, but in front of lot 1. At t = 5, the fourth lot arrives which is inserted at
the head of the queue, i.e., it overtakes the three lots already in the queue. When the
timer expires at t = 9, the lot that is at the head of the queue leaves the system, i.e.,
lot 4 leaves the system. Then the timer is set again to expire at t = 14. Again, the
head of the queue leaves the system, which is lot 2. The timer is set again to expire
at t = 20, and lot 3 leaves the system. Next, the timer is set to ring at t = 25 and
finally lot 1 leaves the system.
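A minimal sketch (not from [19]) of the queue-plus-timer mechanics just illustrated is given below. The overtaking counts k and the timer expiration times are taken from Example 4 rather than sampled from distributions, which is what the real aggregate model would do.

```python
from collections import deque

class AggregateWorkstation:
    """Sketch of the aggregate model of Sect. 2.2: a single queue plus a timer.

    In the full model the number of lots to overtake K and the timer duration are
    sampled from distributions that depend on the queue length w; here they are
    supplied explicitly so that the event sequence of Example 4 can be replayed.
    """
    def __init__(self):
        self.queue = deque()          # position 0 is the head of the queue
        self.timer_running = False

    def arrive(self, lot, k):
        w = len(self.queue)
        self.queue.insert(w - k, lot)     # overtake k of the w lots already present
        if not self.timer_running:        # lot arrived to an empty system: start timer
            self.timer_running = True

    def timer_expires(self):
        lot = self.queue.popleft()        # the head of the queue leaves the system
        self.timer_running = len(self.queue) > 0   # restart only if lots remain
        return lot

ws = AggregateWorkstation()
ws.arrive("lot 1", k=0)    # t = 1, empty system, timer set (expires at t = 9)
ws.arrive("lot 2", k=1)    # t = 2, overtakes lot 1
ws.arrive("lot 3", k=1)    # t = 4, placed between lot 2 and lot 1
ws.arrive("lot 4", k=3)    # t = 5, overtakes all three lots in the queue
for t in (9, 14, 20, 25):  # timer expirations
    print(t, ws.timer_expires())   # lot 4, lot 2, lot 3, lot 1
```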
For more details about this aggregate model for integrated processing worksta-
tions, including implementation issues and algorithms for deriving distributions from
real manufacturing data, the interested reader is referred to [19]. In that paper an
extensive simulation study and an industry case study demonstrate that the aggre-
gate model can accurately predict the cycle time distribution of integrated processing
workstations in semiconductor manufacturing.
Most importantly, EPTs can be determined from real manufacturing data and
yield relatively simple discrete event models of the manufacturing system under
consideration. These relatively simple discrete event models serve as a starting point
for controlling manufacturing systems.
3 Control Framework
Fig.12 a Control framework (I): manufacturing system, discrete event model, approximation model, and controller. b Control framework (II): the same hierarchy extended with conversion blocks between the controller and the discrete event model
In the previous section, the concept of Effective Process Times has been introduced
as a means to arrive at relatively simple discrete event models for manufacturing
systems, using measurements from the real manufacturing system under
consideration. This is the first step in a control framework. The resulting discrete
event models are large queueing networks which capture the dynamics reasonably
well. These relatively simple discrete event models are not only a starting point for
analyzing the dynamics of a manufacturing system, but can also be used as a starting
point for controller design. If one is able to control the dynamics of the discrete
event model of the manufacturing system, the resulting controller can also be used
for controlling the real manufacturing system.
Even though control theory exists for controlling discrete event systems, unfor-
tunately none of it is appropriate for controlling discrete event models of real-life
manufacturing systems. This is mainly due to the large number of states of a manu-
facturing system. Therefore, a different approach is needed.
If we concentrate on mass production, the distinction between lots is not really
necessary and lots can be viewed in a more continuous way. Instead of the discrete
event model we might consider an approximation model. This is the second step in the
control framework. Next, we can use standard control theory for deriving a controller
for the approximation model. These first three steps in the control framework are
illustrated in Fig.12a. We elaborate on this second and third step in the next two
sections. For now it is sufficient to know that time is discretized into periods (e.g.,
shifts) and that the resulting controller provides production targets per shift for each
machine. So for now we assume that the derived controller behaves as desired on
the approximation model. As a fourth step this controller could be connected to the
discrete event model. This cannot be done directly, since the derived controller is
not a discrete event controller. The control actions still need to be transformed into
events. It might very well be that the optimal control action is to produce 2.75 lots
during the next shift. One still needs to decide how many lots to really start (2 or 3),
and also when to start them. This is the left conversion block in Fig.12b. From this
figure, it can also be seen that a conversion is needed from discrete event model to
controller. In the remainder of this chapter we assume that the discrete event
model is sampled once every shift. Other strategies might be followed. For example, if at the
beginning of a shift a machine breaks down it might not be such a good idea to wait
until the end of the shift before setting new production targets. Designing proper
conversion blocks is the fourth step in the control framework.
Fig.13 A simple manufacturing system: buffers B1, B2, B3 and machines M1, M2 in series, with release rate u0(t), start rates u1(t), u2(t), effective process times te,1, te,2, buffer contents x1(t), x2(t), and cumulative output x3(t)
After the fourth step, i.e., properly designing the two conversion blocks, a suitable
discrete event controller for the discrete event model is obtained, as illustrated in
Fig.12b (dashed).
Eventually, as a fifth and final step, the designed controller can be disconnected
from the discrete event model, and attached to the manufacturing system.
4 An Approximation Model
The analytical approximation models of Sect.1.2 are only concerned with steady
state; no dynamic behavior is included. This disadvantage is overcome by discrete
event models as discussed in Sect.2, where each lot is modeled separately and
stochastically. In Sect.2 we derived how less detailed discrete event models can be
built by abstracting from all kinds of disturbances like machine failure, setups, oper-
ator behavior, etc. By aggregating all disturbances into one Effective Process Time,
a complex manufacturing system can be modeled as a relatively simple queueing
network. Furthermore, the data required for this model can easily be measured from
manufacturing data.
Even though this approach considerably reduces the complexity of discrete event
models for manufacturing systems, this aggregate model is still unsuitable for
manufacturing planning and control. Therefore, in this section we introduce a next
level of aggregation, by abstracting from events. Using the abstraction presented in
Sect.2 we can view a workstation as a node in a queueing network. In this section we
assume that such a node processes a deterministic continuous stream of fluid. That
is, we consider this queue as a so-called fluid queue.
For example, consider a simple manufacturing system consisting of two machines
in series, as displayed in Fig.13. Let te,i denote the Effective Process Time of the
ith machine for i ∈ {1, 2}. Furthermore, let u0(t) denote the rate at which lots arrive
at the system at time t, ui(t) the rate at which machine Mi starts lots at time t, xi(t) the
number of lots in buffer Bi at time t (i ∈ {1, 2}) and x3(t) the cumulative number of
lots produced by the manufacturing system at time t.
The rate of change of the buffer contents is given by the difference between the
rates at which lots enter and leave the buffer, taking into account the time-delay due
to processing:
ẋ1(t) = u0(t) − u1(t)
ẋ2(t) = u1(t − te,1) − u2(t)        (6)
ẋ3(t) = u2(t − te,2)
In practice, manufacturing systems are often controlled by means of setting production
targets per shift. That is, time is divided into shifts of, for example, 8 or 12 h.
For each shift it is determined how many lots should be started on each
machine. The control problem then reduces to determining these production targets
per shift.
To that end, we sample the continuous time system (6) using a zero-order-hold
sampling, cf. [2]. Assuming that the longest Effective Process Time is less than
the duration of a shift, the resulting zero-order-hold sampling of the system in (6)
becomes
x̄1(k + 1) = x̄1(k) + ū0(k) − ū1(k)
x̄2(k + 1) = x̄2(k) + (te,1/h) · x̄4(k) + ((h − te,1)/h) · ū1(k) − ū2(k)
x̄3(k + 1) = x̄3(k) + (te,2/h) · x̄5(k) + ((h − te,2)/h) · ū2(k)        (7)
x̄4(k + 1) = ū1(k)
x̄5(k + 1) = ū2(k)
where ¯u0(k) denotes the number of lots arriving at the system during shift k, ¯ui (k)
the number of lots started at machine Mi during shift k, ¯xi (k) the number of lots in
buffer Bi at the beginning of shift k (i ∈ {1, 2}), and ¯x3(k) the cumulative number of
lots produced by the manufacturing system at the beginning of shift k. Furthermore,
h denotes the sample period, e.g., 8 or 12h. The auxiliary variables ¯x4(k) and ¯x5(k)
are required to remember the starts during the previous shift, in order to incorporate
the lots for which processing is started in shift k on machine M1 and M2 respectively
but completed in shift k+1. If the longest Effective Process Time exceeds the duration
of a shift, but does not exceed the duration of two shifts, similar auxiliary variables
x̄6(k) and x̄7(k) are required.
The model (6) and its discrete time equivalent (7) are also subject to constraints. We
present the constraints for the model (7). For the model (6), similar constraints hold.
The first constraint is a non-negativity constraint: buffer contents can never be
negative. Also production targets cannot become negative. Expressed mathematically
we have the following constraints:
x̄i(k) ≥ 0   i ∈ {1, 2, 3, 4, 5}   ∀k    (8a)
ūj(k) ≥ 0   j ∈ {0, 1, 2}   ∀k    (8b)
Furthermore, machines can produce at most at maximal capacity. That is, the total
time spent on serving the required number of lots during a shift cannot exceed the
duration of the shift:
te,j · ūj(k) ≤ h   j ∈ {1, 2}   ∀k    (8c)
where h again denotes the sample period or shift duration.
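As a concrete illustration of the sampled model (7) and the constraints (8), the sketch below (not from the chapter; te,i = 1 h and h = 2 h are assumed example values) implements one shift of the state update and checks the constraints before each step.

```python
import numpy as np

te = np.array([1.0, 1.0])     # effective process times te,1, te,2 [hours] (assumed)
h = 2.0                       # shift length [hours] (assumed)

# State x = [x1, x2, x3, x4, x5], input u = [u0, u1, u2], as in (7).
A = np.array([
    [1, 0, 0, 0,        0       ],
    [0, 1, 0, te[0]/h,  0       ],
    [0, 0, 1, 0,        te[1]/h ],
    [0, 0, 0, 0,        0       ],
    [0, 0, 0, 0,        0       ],
])
B = np.array([
    [1, -1,             0             ],
    [0, (h - te[0])/h, -1             ],
    [0, 0,              (h - te[1])/h ],
    [0, 1,              0             ],
    [0, 0,              1             ],
])

def step(x, u):
    """One shift of the sampled fluid model (7), with the constraints (8) checked."""
    assert np.all(x >= 0) and np.all(u >= 0)    # (8a), (8b)
    assert np.all(te * u[1:] <= h)              # (8c): capacity of machines M1, M2
    return A @ x + B @ u

x = np.array([1.0, 1.0, 0.0, 1.0, 1.0])         # buffers in equilibrium for these inputs
for k in range(3):
    x = step(x, np.array([1.0, 1.0, 1.0]))
    print(k + 1, x)                             # cumulative output x3 grows by 1 per shift
```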
Fig.14 Effective clearing function of (9) with ca = ce = m = 1
4.1 Clearing Functions
The model (7) with constraints (8) describes the dynamics of a manufacturing sys-
tem well. By incorporating delays due to processing, the minimal flow time is also
taken into account. Furthermore, steady-state corresponds with the mass conserva-
tion results presented in Sect.1.2.1.
Nevertheless, one property of manufacturing systems is not yet taken into account
in the model (7), (8), namely the queueing relation (3). In order not to lose the
steady-state queueing relation between throughput and queue length, we include
this relation as a system constraint.
Consider a workstation that consists of m identical servers in parallel that all
have a mean Effective Process Time te and coefficient of variation ce. Furthermore,
assume that the coefficient of variation of the interarrival times is ca and that the
utilization of this workstation is ρ < 1. Then we know from (3), (5) that in steady
state the mean number of lots in this workstation is approximately given by
x =
c2
a + c2
e
2
·
ρ
√
2(m+1)−1
m(1 − ρ)
+ ρ. (9)
In Fig.14 this relation has been depicted graphically. In the left-hand side of this
figure one can clearly see that for an increasing utilization, the number of lots in this
workstation increases nonlinearly. By swapping axes, this relation can be understood
differently. Depending on the number of lots in the workstation, a certain utilization
can be achieved, or a certain throughput. This has been depicted in the right-hand
side of Fig.14. This relation is also known as the clearing function as introduced
by [7].
For the purpose of production planning, this effective clearing function provides
an upper bound for the utilization of the workstation depending on the number of
lots in this workstation. Therefore, for the model (7), in addition to the constraints
(8) we also have (using ρ = ¯u · te/(h · m) and m = 1):
((ca,1² + ce,1²)/2) · ū1(k)² / ((h/te,1) · (h/te,1 − ū1(k))) + (te,1/h) · ū1(k) ≤ x̄1(k)   ∀k
((ca,2² + ce,2²)/2) · ū2(k)² / ((h/te,2) · (h/te,2 − ū2(k))) + (te,2/h) · ū2(k) ≤ x̄2(k)   ∀k.    (10)
The clearing function model for production planning then consists of the model
(7) together with the constraints (8) and (10). When we want to use this clearing
function model for production planning, we need the parameters ce and ca. In Sect.2
we explained how Effective Process Times can be determined for each workstation,
which provides us with the parameter ce for each workstation. Additionally, for
each workstation the interarrival times of lots can also be determined from arrival
events, which provides us with the parameter ca for each workstation. Therefore,
both parameters can easily be determined from manufacturing data.
We conclude this section with some remarks about the additional constraints (10).
The first remark is that these constraints are convex in the input u, so optimization
problems become “simple” convex optimization problems. A second remark is that
from a practical point of view, one can easily approximate each convex constraint by
means of several linear constraints. A third remark is that the constraints (10) only
hold for steady state, whereas our system is never in steady state. A more accurate
planning result is obtained by conditioning the expected throughput on the current
work in the buffer, resulting in so-called transient clearing functions. For the latter
subject, the interested reader is referred to [15].
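To see what the constraint (10) means for planning, the following sketch (not from the chapter) computes, for a single machine (m = 1) and assumed parameter values, the largest shift target ū that the clearing function allows for a given wip level; the bisection exploits that the left-hand side of (10) is increasing in ū.

```python
def clearing_bound(wip, te, h, ca2, ce2):
    """Largest shift target u satisfying constraint (10) for a single machine (m = 1).

    Found by bisection on the monotone left-hand side of (10); a sketch only,
    with the parameter values below assumed for illustration.
    """
    cap = h / te                     # maximum number of starts per shift (u < h/te)

    def lhs(u):
        return (ca2 + ce2) / 2 * u**2 / (cap * (cap - u)) + (te / h) * u

    lo, hi = 0.0, cap - 1e-9
    for _ in range(80):
        mid = (lo + hi) / 2
        if lhs(mid) <= wip:
            lo = mid
        else:
            hi = mid
    return lo

for wip in (0.5, 1.0, 2.0, 5.0):
    # Allowed starts grow with wip but saturate below the capacity h/te = 2.
    print(wip, round(clearing_bound(wip, te=1.0, h=2.0, ca2=1.0, ce2=1.0), 3))
```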
5 Controller Design
In the previous section we derived a fluid model as an approximation for the discrete
event model derived earlier. The next step in the control framework presented in
Sect.3 is to control the approximation model using standard techniques from control
theory.
Typically two control problems can be distinguished: the trajectory generation
problem and the reference tracking control problem. The solution of the first problem
serves as an input for the second problem.
To illustrate the difference between these two problems, consider the problem of
automatically flying an airplane from A to B by means of an autopilot. Here, too, two
problems are solved separately. The first problem is to determine a trajectory for the
airplane to fly which brings it from A to B. The resulting flight plan is a solution to
the trajectory generation problem. The second problem is the design of the autopilot
itself: given an arbitrary feasible reference trajectory for this airplane, how can we make
sure that it is tracked as well as possible, despite all kinds of disturbances? The latter is
the reference tracking control problem. We follow a similar approach for the control
of manufacturing systems.
5.1 Trajectory Generation Problem
The trajectory generation problem is the problem of finding a feasible reference
trajectory for the system, also known as production planning. So for the example
considered previously, the problem is to find a trajectory xr (k), ur (k) which sat-
isfies (7) as well as the constraints (8) and (10). Clearly, many trajectories exist that
meet these requirements. Typically, “the best” trajectory is looked for. Therefore,
the trajectory generation or production planning problem is often formulated as an
optimization problem.
Example 5 Consider the system described by (7) together with the constraints (8) and
(10). Assume that ca,i = ce,i = 1, te,i = 1 (i = 1, 2), h = 2, and that the cumulative
demand is given by xr,3(k) = k. If one would like to satisfy this cumulative demand
while having a minimal number of jobs in the system, the trajectory generation
problem can be formulated as the following optimization problem:
\[
\begin{aligned}
\min_{u_r(k),\, x_r(k)} \quad & \sum_{k=1}^{N} x_1(k) + x_2(k)\\
\text{subject to} \quad & x_{r,3}(k) = k, & k = 1, \ldots, N\\
& \text{(7), (8), (10)}, & k = 1, \ldots, N.
\end{aligned}
\]
The solution to this problem is given by
\[
\begin{aligned}
x_{r,1}(k) &= 1, & u_{r,0}(k) &= 1,\\
x_{r,2}(k) &= 1, & u_{r,1}(k) &= 1,\\
x_{r,3}(k) &= k, & u_{r,2}(k) &= 1,\\
x_{r,4}(k) &= 1,\\
x_{r,5}(k) &= 1,
\end{aligned}
\qquad k = 1, \ldots, N. \tag{11}
\]
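A sketch of how such a trajectory generation problem could be set up numerically is given below, assuming the cvxpy package is available and assuming that model (7) has been written in the generic form x(k+1) = Ax(k) + Bu(k). The matrices A and B below are placeholders for a simple two-workstation flow line and do not reproduce (7) exactly; constraints (8) are represented here only by nonnegativity of states and inputs, and the buffer level x(k) is used directly in place of x̄(k).

```python
import cvxpy as cp
import numpy as np

# Placeholder flow-line model standing in for (7): buffers x1, x2 and cumulative
# output x3, driven by release u0 and workstation throughputs u1, u2.
A = np.eye(3)
B = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [0.0,  0.0,  1.0]])
N = 20
ca2 = ce2 = te = 1.0
h = 2.0
mu = h / te                                   # maximum throughput per period

x = cp.Variable((N + 1, 3), nonneg=True)      # planned states
u = cp.Variable((N, 3), nonneg=True)          # planned inputs

cost = cp.sum(x[1:, 0] + x[1:, 1])            # total WIP over the horizon
cons = [x[0] == np.array([1.0, 1.0, 0.0])]    # assumed initial buffer contents
for k in range(N):
    cons += [x[k + 1] == A @ x[k] + B @ u[k]]
    cons += [x[k + 1, 2] >= k + 1]            # cumulative demand x_{r,3}(k) = k
    for i, j in [(1, 0), (2, 1)]:             # clearing-function constraint (10)
        cons += [0.5 * (ca2 + ce2) / mu * cp.quad_over_lin(u[k, i], mu - u[k, i])
                 + (te / h) * u[k, i] <= x[k, j]]

cp.Problem(cp.Minimize(cost), cons).solve()
print(u.value)
```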
5.2 Reference Tracking: Model-Based Predictive Control (MPC)
For the reference tracking control problem, we assume that an arbitrary feasible
reference trajectory is given. So for the example considered before we assume that
a reference trajectory xr (k), ur (k) is given which satisfies (7) together with the
constraints (8) and (10). This could for example be the trajectory (11), but any other
feasible reference trajectory can be used as a starting point as well. The goal in the
reference tracking control problem is to find an input u(k) which guarantees that the
system tracks this reference input, while meeting the constraints (8) and (10).
In order to solve the reference tracking control problem, the tracking error dynam-
ics is considered. For the remainder of this section we assume that the system dynam-
ics is described by
x(k + 1) = Ax(k) + Bu(k)
subject to the linear constraints
Ex(k) + Fu(k) ≤ g.
This is not a real restriction: what follows can be extended to nonlinear dynamics with
nonlinear constraints.
In addition, a feasible reference trajectory xr (k), ur (k) is given, i.e., a trajectory
which satisfies
xr (k + 1) = Axr (k) + Bur (k)
and
Exr (k) + Fur (k) ≤ g.
Next, one can define the tracking error ˜x(k) = x(k)− xr (k), and the input correction
˜u(k) = u(k) − ur (k). Then the tracking error dynamics becomes
˜x(k + 1) = A ˜x(k) + B ˜u(k) (12a)
subject to the constraints
E (˜x(k) + xr (k)) + F (˜u(k) + ur (k)) ≤ g
or
E ˜x(k) + F ˜u(k) ≤ g − Exr (k) − Fur (k). (12b)
Using these error coordinates, the reference tracking control problem can be for-
mulated as to find an input correction ˜u(k) which steers the error dynamics (12a)
toward 0, while satisfying the constraints (12b).
Since we have a system with constraints, the most suitable technique from standard
control theory is Model-based Predictive Control (MPC).
The basic idea of MPC is to use the model of the system (12a) to predict the state
evolution as a function of future inputs. Furthermore, a cost function is used which
penalizes the predicted future deviations from the reference trajectory. This cost
function is then minimized over the future inputs, subject to the constraints (12b).
This optimization takes place over a so-called prediction horizon p, i.e., the first
p inputs are determined in this optimization problem. The resulting control action
then consists of the first of these inputs. One time period later, the entire procedure is
repeated. Therefore, MPC is also called a receding horizon strategy. This is illustrated
in Fig.15.
Assume that at time k, the tracking error ˜x(k) = ˜x(k|k) is measured. So we have
the tracking error ˜x at time k given that we are currently at time k. Using a horizon
of length p, we can define the input corrections for the times k, k +1, . . . , k + p −1
given that we are currently at time k: ˜u(k|k), ˜u(k + 1|k), . . . , ˜u(k + p − 1|k).
Fig.15 The ingredients of MPC: the projected outputs ŷ(k + 1|k) and the manipulated variables u(k) over the prediction horizon from k to k + p, relative to the target.
By means of the model (12a) we are able to predict the resulting tracking errors as
a function of these future input corrections:
\[
\begin{bmatrix}
\tilde{x}(k+1|k) \\ \tilde{x}(k+2|k) \\ \vdots \\ \tilde{x}(k+p|k)
\end{bmatrix}
=
\begin{bmatrix}
A \\ A^2 \\ \vdots \\ A^p
\end{bmatrix}
\tilde{x}(k|k)
+
\begin{bmatrix}
B & 0 & \cdots & 0 \\
AB & B & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
A^{p-1}B & \cdots & AB & B
\end{bmatrix}
\begin{bmatrix}
\tilde{u}(k|k) \\ \tilde{u}(k+1|k) \\ \vdots \\ \tilde{u}(k+p-1|k)
\end{bmatrix}
\tag{13}
\]
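The stacked matrices in (13) can be assembled mechanically. The sketch below (the function name prediction_matrices is ours) builds them with numpy for given A, B, and horizon p.

```python
import numpy as np

def prediction_matrices(A, B, p):
    """Build Phi and Gamma so that the stacked prediction (13) reads
    [xt(k+1|k); ...; xt(k+p|k)] = Phi @ xt(k|k) + Gamma @ [ut(k|k); ...; ut(k+p-1|k)]."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, p + 1)])
    Gamma = np.zeros((p * n, p * m))
    for i in range(p):              # block row i predicts xtilde(k+i+1|k)
        for j in range(i + 1):      # block column j multiplies utilde(k+j|k)
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma

# Tiny usage example with hypothetical matrices
A = np.array([[1.0, 0.5], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Phi, Gamma = prediction_matrices(A, B, p=3)
```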
Next we define a cost function for having a non-zero tracking error. One of the
properties of our controlled system is that if we happen to be on the reference, we
should stay on the reference. In particular this implies that the cost function should
be such that the costs are 0 if and only if the system stays at (˜x, ˜u) = (0, 0).
In control theory, a quadratic cost function is often used:
\[
\min_{\tilde{u}(k|k), \ldots, \tilde{u}(k+p-1|k)} \;
\sum_{i=1}^{p}
\tilde{x}(k+i|k)^{T} Q\, \tilde{x}(k+i|k)
+ \tilde{u}(k+i-1|k)^{T} R\, \tilde{u}(k+i-1|k)
\tag{14}
\]
with Q = Q^T ≥ 0 and R = R^T > 0. But also other cost functions can be used,
e.g., linear cost functions. What is most important is that costs are 0 if and only if the
system stays in (˜x, ˜u) = (0, 0). Clearly the minimization should take place subject
to the constraints (12b). Using a quadratic cost function as in (14) results in a QP
(quadratic program) to be solved each time instant, whereas a linear cost function
results in an LP (linear program), see e.g., [18].
The result from solving the above-mentioned optimization problem is a vector of
future input corrections ˜u(k|k), ˜u(k + 1|k), . . . , ˜u(k + p − 1|k). At time k the input
˜u(k|k) is applied. Subsequently, at time k + 1 the whole procedure starts all over
again.
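The following sketch, again assuming cvxpy and hypothetical problem data, implements one such receding-horizon step: it solves the QP defined by (14) subject to the error dynamics (12a) and the constraints (12b), and returns only the first input.

```python
import cvxpy as cp
import numpy as np

def mpc_step(A, B, E, F, g, Q, R, x, xr_seq, ur_seq, p):
    """One receding-horizon step: minimize (14) over horizon p subject to (12a) and (12b),
    and return the input u(k) = u_r(k) + utilde(k|k) to apply at time k."""
    n, m = B.shape
    xt = cp.Variable((p + 1, n))          # predicted tracking errors xtilde(k+i|k)
    ut = cp.Variable((p, m))              # input corrections utilde(k+i|k)
    cost = 0
    cons = [xt[0] == x - xr_seq[0]]       # measured tracking error at time k
    for i in range(p):
        cost += cp.quad_form(xt[i + 1], Q) + cp.quad_form(ut[i], R)
        cons += [xt[i + 1] == A @ xt[i] + B @ ut[i],                          # (12a)
                 E @ xt[i] + F @ ut[i] <= g - E @ xr_seq[i] - F @ ur_seq[i]]  # (12b)
    cp.Problem(cp.Minimize(cost), cons).solve()
    return ur_seq[0] + ut.value[0]        # apply only the first input correction
```

In a closed-loop simulation, mpc_step would be called once per period with the reference slices shifted by one, which is exactly the receding horizon idea described above.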
We conclude this section with some remarks. First, stability of the MPC approach
as presented here is not guaranteed. One way to achieve guaranteed stability is to take
the horizon p = ∞, which is not desirable from a practical point of view.
A second way of achieving stability is by adding the
terminal constraint that after the horizon, the system should be on the reference, i.e.,
one could add the constraint that ˜x(k + p) = 0. Notice that in order to have a feasible
optimization problem, again one should take p large enough.
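In the cvxpy sketch above, such a terminal constraint would amount to one extra equality on the last predicted tracking error, for example:

```python
# Terminal constraint for stability: be back on the reference after the horizon.
cons += [xt[p] == 0]
```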
For more information about MPC, the interested reader is referred to [5].
6 Concluding Remarks
In this chapter we provided a framework within which concepts from the field of
systems and control can be used for controlling manufacturing systems. We presented
the concept of Effective Process Times (EPTs) which can be used for modeling a
manufacturing system as a large queuing network. Restricting ourselves to mass
production enabled us to model manufacturing systems by means of a linear system
subject to nonlinear constraints (clearing functions). These models then served as a
starting point for designing controllers for these manufacturing systems using Model-
based Predictive Control (MPC). Throughout this chapter we provided examples
to illustrate the most important ideas and concepts. We also provided additional
references for the interested reader.
We presented MPC as a possible approach from control theory for controlling
manufacturing systems. But many more suitable approaches can be used, ranging
from classical control theory using z-transforms and transfer functions, dynamic
programming and optimal control, to robust control and approximate dynamic pro-
gramming. A good overview of these kinds of approaches for the dynamic modeling
and control of supply chains has been provided in the review paper [17].
But also the approximation model presented in Sect.4 is only one of the possible
choices for modeling manufacturing systems. An overview on aggregate models
for manufacturing systems has been given in [13]. In the model presented here a
fluid approximation has been presented where the number of jobs was modeled
continuously, but the position in the factory was modeled discretely. Using a less
detailed model, we can even abstract from workstations and model manufacturing
flow as a real fluid using continuum models [1, 3, 6]. Optimal control of PDE models
for manufacturing systems has been presented in [12].
From the above it is clear that the modeling and control of manufacturing systems
has been, and still is, an open and active research area. In this chapter we provided
some of the basic models and standard control approaches, illustrated by examples
so that they can be applied straightforwardly.
Acknowledgements Erjen Lefeber is supported by the Netherlands Organization for Scientific
Research (NWO-VIDI grant 639.072.072).
References
1. Armbruster D, Marthaler DE, Ringhofer C, Kempf K, Jo TC (2006) A continuum model for a
re-entrant factory. Oper Res 54(5):933–950
2. Åström KJ, Wittenmark B (1990) Computer-controlled systems: theory and design, 2nd edn.
Prentice-Hall, Englewood Cliffs
3. Daganzo CF (2003) A Theory of Supply Chains. Springer, New York
4. Fischbein S, Yellig E (2011) Why is it so hard to build and validate discrete event simulation
models of manufacturing facilities. In: Kempf KG, Uzsoy R, Keskinocak P (eds) Planning
production and inventories in the extended enterprise: a state of the art handbook, volume 2,
Springer international series in operations research and management science, vol 152, chap 12.
Springer, New York, pp 271–288
5. Garcia CE, Prett DM, Morari M (1989) Model predictive control: theory and practice—a survey.
Automatica 25(3):335–348
6. Göttlich S, Herty M, Klar A (2005) Network models for supply chains. Commun Math Sci
3(4):545–559
7. Graves SC (1986) A tactical planning model for a job shop. Oper Res 34(4):522–533
8. Hopp WJ, Spearman ML (2000) Factory Physics, 2nd edn. McGraw-Hill, New York
9. Jacobs JH, Etman LFP, Campen EJJv, Rooda JE (2003) Characterization of the operational
time variability using effective processing times. IEEE Trans Semicond Manuf 16(3):511–520
10. Kingman JFC (1961) The single server queue in heavy traffic. Proc Camb Philos Soc 57:902–
904
11. Kuehn PJ (1979) Approximate analysis of general queueing networks by decomposition. IEEE
Trans Commun 27:113–126
12. La Marca M, Armbruster D, Herty M, Ringhofer C (2010) Control of continuum models of
production systems. IEEE Trans Autom Control 55(11):2511–2526
13. Lefeber E, Armbruster D (2011) Aggregate modeling of manufacturing systems. In: Kempf KG,
Uzsoy R, Keskinocak P (eds) Planning production and inventories in the extended enterprise:
a state of the art handbook, volume 1, Springer international series in operations research and
management science, vol 151, chap 17. Springer, New York, pp 509–536
14. Little JDC (1961) A proof of the queueing formula l = λw. Oper Res 9:383–387
15. Missbauer H (2009) Models of the transient behaviour of production units to optimize the
aggregate material flow. Int J Prod Econ 118:387–397
16. Sakasegawa H (1977) An approximation formula Lq ≈ αρβ/(1 − ρ). Ann Inst Stat Math
29:67–75
17. Sarimveis H, Patrinos P, Tarantilis CD, Kiranoudis CT (2008) Dynamic modeling and control
of supply chain systems: A review. Comput Oper Res 35(11):3530–3561
18. Vargas-Villamil FD, Rivera DE, Kempf KG (2003) A hierarchical approach to production control of
reentrant semiconductor manufacturing lines. IEEE Trans Control Syst Technol 11(4):578–587
19. Veeger CPL, Etman LFP, Lefeber E, Adan IJBF, Herk Jv, Rooda JE (2011) Predicting cycle
time distributions for integrated processing workstations: an aggregate modeling approach.
IEEE Trans Semicond Manuf 24(2):223–236