The document presents a two-level approach for solving stochastic planning problems in operating rooms. At the first level, a deterministic model allocates block times to specialties. At the second level, a stochastic model incorporates random surgery durations to check whether the solution remains feasible with high probability. Safety slacks are computed for blocks likely to overrun their allotted time and fed back into the deterministic model, iterating until a robust solution is found. Monte Carlo simulation and the Fenton-Wilkinson approximation are also discussed for modeling lognormal durations. The approach is applied in a preliminary operating room case study.
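The Fenton-Wilkinson idea can be sketched as follows: approximate the sum of independent lognormal durations by a single lognormal that matches the sum's first two moments, then compare against Monte Carlo. The parameter values below are illustrative, not taken from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative lognormal block-duration parameters (not the paper's data)
mu = np.array([3.0, 3.2, 2.8])
sigma = np.array([0.4, 0.5, 0.3])

# Fenton-Wilkinson: approximate the sum of independent lognormals by one
# lognormal that matches the sum's first two moments.
m = np.exp(mu + sigma**2 / 2)            # component means
v = (np.exp(sigma**2) - 1) * m**2        # component variances
M, V = m.sum(), v.sum()
sigma_s2 = np.log(1 + V / M**2)          # matched log-variance
mu_s = np.log(M) - sigma_s2 / 2          # matched log-mean

# Monte Carlo estimate of the same sum, for comparison
samples = np.exp(rng.normal(mu, sigma, size=(100_000, 3))).sum(axis=1)
fw_mean = np.exp(mu_s + sigma_s2 / 2)    # equals M by construction
```

By construction the approximating lognormal reproduces the exact mean and variance of the sum; only the distribution shape is approximate.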
The document proposes using a Gamma process with noise model to model degradation phenomena and estimate remaining useful life (RUL) based on degradation measurements. It constructs a degradation indicator from sensor data and models deterioration with a non-stationary Gamma process. It then uses a Gibbs sampling algorithm and stochastic expectation maximization to estimate model parameters and the RUL distribution. The method is applied to the 2008 Prognostic Health Management Challenge data, achieving a root mean squared error similar to the winning approaches.
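A non-stationary Gamma degradation process of the kind described can be simulated from independent Gamma increments; the power-law shape function, scale, and failure threshold below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Non-stationary Gamma degradation process: independent Gamma increments with
# shape function eta(t) = a * t**b and scale theta (all values illustrative).
a, b, theta = 0.5, 1.3, 2.0
t = np.linspace(0.0, 50.0, 201)
eta = a * t**b
increments = rng.gamma(np.diff(eta), theta)      # Gamma(shape, scale) increments
path = np.concatenate([[0.0], np.cumsum(increments)])

# First passage of a failure threshold; RUL at time s is (failure_time - s)
L = 60.0                                         # illustrative failure threshold
crossed = np.nonzero(path >= L)[0]
failure_time = t[crossed[0]]
```

Because Gamma increments are nonnegative, the simulated degradation path is monotone, which is the main reason Gamma processes are popular for wear-type deterioration.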
The document discusses the development of a wireless sensor network system for structural health monitoring using non-destructive evaluation techniques such as acoustic emission testing and ultrasound testing. It outlines objectives including sensor node development, network control, and damage detection algorithms. The status report covers progress on sensor node development and on a finite element model of Lamb wave propagation. Future plans include further signal processing algorithms and investigation of additional non-destructive methods.
Condition Monitoring Of Unsteadily Operating Equipment - Jordan McBain
The document discusses techniques for condition monitoring of unsteadily operating equipment. It proposes a statistical parameterization approach involving segmenting vibration data based on steady speeds/loads, extracting statistical parameters from segments, and using novelty detection with support vectors to classify patterns as normal or faulted while accounting for changing operating conditions. Experimental results on gearbox data demonstrated superior fault detection performance compared to alternative approaches.
This document contains notes from a Calculus I class at New York University. It discusses related rates problems, which involve taking derivatives of equations relating changing quantities to determine rates of change. The document provides examples of related rates problems involving an oil slick, two people walking towards and away from each other, and electrical resistors. It also outlines strategies for solving related rates problems, such as drawing diagrams, introducing notation, relating quantities with equations, and using the chain rule to solve for unknown rates.
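The related-rates strategy the notes outline (relate the quantities with an equation, then differentiate with the chain rule) can be checked symbolically. The oil-slick numbers below are illustrative, not taken from the class notes.

```python
import sympy as sp

# Oil-slick related-rates example (numbers are illustrative): a circular
# slick's radius grows at 2 units/min; how fast does the area grow at r = 10?
t = sp.symbols('t')
r = sp.Function('r')(t)
A = sp.pi * r**2                          # relate the changing quantities
dA_dt = sp.diff(A, t)                     # chain rule: dA/dt = 2*pi*r*dr/dt
rate = dA_dt.subs(sp.Derivative(r, t), 2).subs(r, 10)   # 40*pi
```

Substituting the derivative before the radius matters here: replacing r(t) by a number first would make the derivative vanish.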
1. The document discusses various applications of artificial neural networks (ANNs) such as pattern classification, clustering, forecasting, association, and summarization of news articles.
2. It provides examples of how ANNs can be used to classify images and documents into different groups or events. The architecture of a multi-document news summarization system using ANNs is shown.
3. The biological mechanisms of neural networks in the human brain are compared with artificial neural networks. Examples of different activation functions in artificial neurons and learning algorithms like the perceptron are presented.
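The perceptron learning rule mentioned above can be shown on a tiny linearly separable problem; the step activation, learning rate, and logical-AND data are illustrative choices, not from the document.

```python
import numpy as np

# Minimal perceptron learning rule on a linearly separable toy problem
# (logical AND): step activation, weights updated by the prediction error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                 # AND targets
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(20):                        # a few epochs suffice here
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # step activation function
        w += lr * (yi - pred) * xi         # perceptron update rule
        b += lr * (yi - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]  # [0, 0, 0, 1]
```

On separable data like this the perceptron convergence theorem guarantees the loop settles on a separating line.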
Stacks follow the LIFO (last in, first out) principle. They are commonly implemented using arrays, with elements pushed and popped from one end of the array to enforce the LIFO behavior; this restricts the random access that regular arrays allow. Common stack operations include push to add an element, pop to remove the top element, peek to read the top element without removing it, and checks for empty or full stacks. Stacks have many applications, such as function calls, undo/redo operations, and expression parsing.
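The operations above can be sketched with an array-backed stack (here a Python list standing in for the array):

```python
class ArrayStack:
    """LIFO stack backed by a list; push and pop happen at one end only."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)        # add at the top

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()        # remove from the top

    def peek(self):
        if self.is_empty():
            raise IndexError("peek at empty stack")
        return self._items[-1]          # read the top without removing it

    def is_empty(self):
        return not self._items

s = ArrayStack()
s.push(1); s.push(2); s.push(3)
top = s.pop()                           # removes and returns 3; 2 is now on top
```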
The document outlines data structures and algorithms, including analysis of complexity, common data structures like arrays, stacks, queues, linked lists, and sorting algorithms like merge sort and quick sort. It provides an overview of these topics along with examples of analyzing time complexity using Big-O notation.
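Merge sort, one of the sorting algorithms mentioned, makes the O(n log n) analysis concrete: log n levels of halving, O(n) merging work per level.

```python
def merge_sort(a):
    """O(n log n): split in half, sort each half recursively, merge results."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # the merge step is O(n)
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains
```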
A current perspectives of corrected operator splitting (os) for systems - Alexander Decker
This document discusses operator splitting methods for solving systems of convection-diffusion equations. It begins by introducing operator splitting, where the time evolution is split into separate steps for convection and diffusion. While efficient, operator splitting can produce significant errors near shocks.
The document then examines the nonlinear error mechanism that causes issues for operator splitting near shocks. When a shock develops in the convection step, it introduces a local linearization that neglects self-sharpening effects. This leads to splitting errors.
To address this, the document discusses corrected operator splitting, which uses the wave structure from the convection step to identify where nonlinear splitting errors occur. Terms are added to the diffusion step to compensate for these errors.
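Plain (uncorrected) operator splitting can be sketched on a linear 1D convection-diffusion equation. This toy example only illustrates splitting one time step into a convection sub-step and a diffusion sub-step; it is not the paper's corrected scheme, and all parameters are illustrative.

```python
import numpy as np

# Uncorrected operator splitting for u_t + a*u_x = eps*u_xx on a periodic
# grid: each time step does an upwind convection sub-step, then an explicit
# diffusion sub-step.
a, eps = 1.0, 0.01
nx = 200
dx, dt = 1.0 / nx, 0.001                  # satisfies both stability limits
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)         # initial pulse centered at x = 0.3

for _ in range(100):
    u = u - a * dt / dx * (u - np.roll(u, 1))                              # convection
    u = u + eps * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))    # diffusion
```

After 100 steps (t = 0.1) the pulse has advected to about x = 0.4 and been smoothed by both physical and numerical diffusion; near shocks of a nonlinear problem, this is exactly where the uncorrected splitting error appears.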
This document discusses unconditionally stable finite-difference time-domain (FDTD) methods for solving Maxwell's equations numerically. It outlines FDTD algorithms such as Yee's method from 1966 which discretize the equations on a staggered grid. It also discusses the von Neumann stability analysis and compares implicit Crank-Nicolson and alternating-direction implicit methods to conventional explicit FDTD methods. The document notes the advantages of unconditionally stable methods but also mentions potential disadvantages.
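A minimal 1D Yee-style update illustrates the staggered leapfrog structure of explicit FDTD. It is a sketch in normalized units (c = 1) with invented grid sizes; its stability depends on the Courant number, which is precisely the restriction the unconditionally stable implicit methods remove.

```python
import numpy as np

# 1D Yee FDTD sketch in vacuum, normalized units: E and H live on a
# staggered grid and are updated leapfrog-fashion.  The explicit scheme is
# stable only under the Courant condition c*dt/dx <= 1.
nx, steps = 400, 300
courant = 0.5                      # c*dt/dx, safely below the limit
ez = np.zeros(nx)
hy = np.zeros(nx - 1)              # H sits between the E nodes

for n in range(steps):
    hy += courant * np.diff(ez)                     # update H from curl of E
    ez[1:-1] += courant * np.diff(hy)               # update E from curl of H
    ez[nx // 2] += np.exp(-((n - 30) / 10) ** 2)    # soft Gaussian source
```

Setting `courant` above 1 makes the same loop blow up, which is the motivation for the Crank-Nicolson and ADI schemes the document compares.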
Multi-Objective Optimization Algorithms for Finite Element Model Updating. Nt... - Evangelos Ntotsios
The document discusses multi-objective optimization algorithms for finite element model updating using measured modal data. It presents different frameworks for structural identification, including weighted modal residuals and multi-objective formulations. Computational issues related to single-objective and multi-objective optimization are discussed. An example application to identify the parameters of a full-scale bridge model using ambient vibration data is also outlined.
Quantifying the call blending balance in two way communication retrial queues... - wrogiest
This document describes research into distinguishing between two models (A and B) of call sequences in a single-server call center. Model A uses a classical retrial rate, while Model B uses a constant retrial rate. The key quantity studied is the short-term correlation between incoming and outgoing call types, measured by the correlation coefficient γ. Numerical examples computing γ under Model B show that it can be positive both when outgoing call activity is limited and when the time shares of incoming and outgoing calls are matched, and strictly negative when call durations are strongly mismatched. The goal is to compare γ between Models A and B in order to distinguish the two models.
The document outlines auto-regressive (AR) processes of order p. It begins by introducing AR(p) processes formally and discussing white noise. It then derives the first and second moments of an AR(p) process. Specific details are provided about AR(1) and AR(2) processes, including equations for their variance as a function of the noise variance and AR coefficients. Examples of simulated AR(1) processes are shown for different coefficient values.
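The AR(1) variance formula can be checked directly by simulation: for x_t = φ·x_{t-1} + ε_t with white noise of variance σ², the stationary variance is σ²/(1 − φ²). The coefficient below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a stationary AR(1) process and compare the sample variance with
# the theoretical value sigma^2 / (1 - phi^2).
phi, sigma, n = 0.7, 1.0, 200_000
e = rng.normal(0.0, sigma, n)          # white noise
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

theory = sigma**2 / (1 - phi**2)       # about 1.961 for phi = 0.7
```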
Benchmark Calculations of Atomic Data for Modelling Applications - AstroAtom
This document summarizes benchmark calculations of atomic data for modeling applications. It discusses numerical methods like close-coupling and distorted-wave approaches for calculating atomic collision data. It provides selected results on energy levels, oscillator strengths, and electron-impact excitation cross sections. It also discusses applications to modeling neon discharges and takes a closer look at ionization calculations and examples. The document concludes by discussing the production and assessment of atomic data and outlines challenges in obtaining reliable data from both experiments and calculations.
PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additi... - Taiji Suzuki
The document discusses the aggregated estimator technique for sparse estimation. The aggregated estimator averages over multiple models, each weighted by their risk. This allows fast learning rates without strong assumptions on the design matrix. The technique is applied to sparse regression problems using an exponential screening estimator. The risk bound of this estimator is compared to other estimators like BIC and Lasso, showing it provides a tighter bound.
Welcome to International Journal of Engineering Research and Development (IJERD) - IJERD Editor
The document summarizes a queueing model with a two-component mixture of doubly truncated exponential service times. The service time distribution is a two-component mixture of doubly truncated exponential distributions, which can characterize heterogeneous, finite-range service times. Assuming Poisson arrivals, the embedded Markov chain technique is used to analyze the system. Explicit expressions are derived for performance measures such as the average number of customers, average waiting time, throughput, and probability of idleness. A numerical analysis studies the sensitivity of the performance measures to parameter changes. The model includes the two-component mixture of exponentials, the doubly truncated exponential, and the exponential service time models as special cases.
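Service times from such a mixture can be generated by inverse-CDF sampling of each truncated component; the rates, truncation limits, and mixing weight below are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(11)

def trunc_exp(rate, a, b, size, rng):
    """Inverse-CDF sample of an exponential(rate) truncated to [a, b]."""
    u = rng.uniform(size=size)
    return a - np.log(1 - u * (1 - np.exp(-rate * (b - a)))) / rate

# Two-component mixture of doubly truncated exponentials (illustrative values)
n = 50_000
comp = rng.uniform(size=n) < 0.6                   # mixing weight 0.6
fast = trunc_exp(2.0, 0.5, 3.0, n, rng)            # short, tightly bounded jobs
slow = trunc_exp(0.5, 1.0, 8.0, n, rng)            # long, wider-range jobs
service = np.where(comp, fast, slow)
```

The truncation guarantees every service time lies in a finite range, which is the modeling feature the document highlights.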
This document summarizes an optimization of the TINKER classical molecular dynamics code to improve performance while maintaining readability. It discusses using compiler flags, reducing cache misses, and lookup tables. Compiler optimizations like -O2 improved performance by up to 20%. Summing intermediate values into temporary scalars reduced cache misses and provided an 8% speedup. Pre-computing common mathematical functions like sqrt and exp into lookup tables improved performance further.
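The lookup-table idea generalizes beyond TINKER: precompute an expensive function on a grid once, then answer later queries by interpolation. The grid density, range, and use of `exp` below are illustrative; TINKER's actual tables and accuracy targets are not reproduced here.

```python
import numpy as np

# Precompute exp on a fine grid once, then serve queries by linear
# interpolation instead of re-evaluating the function each time.
grid = np.linspace(0.0, 5.0, 4097)
table = np.exp(grid)                      # cost paid once, up front

def exp_lut(x):
    """Approximate exp(x) for x in [0, 5] from the precomputed table."""
    return np.interp(x, grid, table)

xs = np.linspace(0.0, 5.0, 1000)
max_rel_err = np.max(np.abs(exp_lut(xs) - np.exp(xs)) / np.exp(xs))
```

The trade-off is the usual one: table memory and a small interpolation error in exchange for avoiding repeated transcendental evaluations in a hot loop.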
The document discusses scalar quantization and the Lloyd-Max algorithm. It provides examples of using the Lloyd-Max algorithm to design scalar quantizers for Gaussian and Laplacian distributed signals. The algorithm works by iteratively calculating decision thresholds and representative levels to minimize mean squared error. At high rates, the distortion-rate function of a Lloyd-Max quantizer is approximated. The document also discusses entropy-constrained scalar quantization and an iterative algorithm to design those quantizers.
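The Lloyd-Max iteration described can be sketched for a 4-level quantizer of unit-Gaussian samples; the initial codebook and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Lloyd-Max design of a 4-level scalar quantizer for N(0, 1) samples:
# alternate (1) thresholds = midpoints between representative levels and
# (2) levels = conditional mean of each decision cell, to reduce MSE.
samples = rng.normal(0.0, 1.0, 100_000)
levels = np.array([-1.5, -0.5, 0.5, 1.5])        # initial codebook

for _ in range(50):
    thresholds = (levels[:-1] + levels[1:]) / 2  # nearest-neighbor boundaries
    cells = np.digitize(samples, thresholds)
    levels = np.array([samples[cells == k].mean() for k in range(4)])

mse = np.mean((samples - levels[np.digitize(samples, thresholds)]) ** 2)
# for a unit Gaussian the optimal 4-level MSE is about 0.1175
```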
This document discusses likelihood methods for continuous-time models in finance. It describes approximating the transition density function pX of a continuous-time process through a series of transformations to get closer to a normal distribution. This allows representing pX as a series expansion involving Hermite polynomials. Computing the expansion coefficients allows obtaining an explicit closed-form approximation to pX. Maximizing the approximate likelihood results in an estimator that converges to the true MLE as the number of terms increases.
Cosmological Perturbations and Numerical Simulations - Ian Huston
Talk given at Queen Mary, University of London in March 2010.
Cosmological perturbation theory is well established as a tool for
probing the inhomogeneities of the early universe.
In this talk I will motivate the use of perturbation theory and
outline the mathematical formalism. Perturbations beyond linear order
are especially interesting as non-Gaussian effects can be used to
constrain inflationary models.
I will show how the Klein-Gordon equation at second order, written in
terms of scalar field variations only, can be numerically solved.
The slow roll version of the second order source term is used and the
method is shown to be extendable to the full equation. This procedure
allows the evolution of second order perturbations in general and the
calculation of the non-Gaussianity parameter in cases where there is
no analytical solution available.
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format - Thomas Mach
Talk given at ENUMATH 2011 in Leicester and at the GAMM ANLA Workshop 2011 in Bremen. A preprint is available at http://www.mpi-magdeburg.mpg.de/preprints/index.php
The document presents a multi-frame marked point process model for extracting targets from ISAR (Inverse Synthetic Aperture Radar) image sequences. The model integrates information across frames using priors on target shape persistency and smooth motion. Experiments show the model achieves better target line and center extraction compared to frame-by-frame detection. Future work involves generalizing the model to identify other objects like airplanes and using extracted features for target classification.
The document discusses quantization in analog-to-digital conversion. It describes the three processes of A/D conversion as sampling, quantization, and binary encoding. Quantization involves mapping amplitude values into a set of discrete values using a quantization interval or step size. The document discusses uniform quantization and how the range is divided into equal intervals. It also discusses non-uniform quantization which has smaller intervals near zero to better match real audio signals. Examples and MATLAB code demonstrations are provided to illustrate quantization of audio signals at different bit rates.
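Uniform quantization as described can be sketched directly: divide the range into 2^b equal intervals of step size delta and map each sample to its interval midpoint. The sine test signal and bit depths below are illustrative stand-ins for the audio examples.

```python
import numpy as np

# Uniform mid-rise quantization of a signal in [-1, 1): with b bits the range
# splits into 2**b equal intervals of step size delta = 2 / 2**b.
def uniform_quantize(x, b):
    delta = 2.0 / 2**b
    idx = np.clip(np.floor((x + 1.0) / delta), 0, 2**b - 1)
    return -1.0 + (idx + 0.5) * delta          # midpoint of each interval

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = 0.9 * np.sin(2 * np.pi * 5 * t)            # test signal in range
errs = [np.mean((x - uniform_quantize(x, b)) ** 2) for b in (2, 4, 8)]
# each extra bit roughly quarters the MSE (about 6 dB per bit)
```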
The document discusses deep feedforward networks, also known as multilayer perceptrons. It begins with an introduction to feedforward networks, which apply vector-to-vector functions across multiple hidden layers without feedback connections between layers. Each hidden layer consists of units that resemble neurons. The document then covers gradient-based learning, different cost functions, types of output and hidden units like ReLU, and considerations for network architecture such as depth, width, and universal approximation properties.
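The forward pass of such a network is just a chain of affine maps and nonlinearities with no feedback; the layer widths and random weights below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(z):
    return np.maximum(0, z)            # the ReLU hidden unit

# Two hidden layers of units, then a linear output layer; widths illustrative
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)
W3, b3 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    h1 = relu(W1 @ x + b1)             # hidden layer 1
    h2 = relu(W2 @ h1 + b2)            # hidden layer 2
    return W3 @ h2 + b3                # output layer (no activation)

y = forward(np.array([0.5, -1.0, 2.0]))
```

Training would fit the W and b matrices by gradient-based learning against one of the cost functions the document covers; here they are random just to show the vector-to-vector structure.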
The document provides an example of using the substitution method to evaluate the indefinite integral ∫(x² + 3)³ · 4x dx. Introducing the substitution u = x² + 3, so that du = 2x dx, rewrites the integral as ∫2u³ du, which evaluates to (1/2)u⁴ + C = (1/2)(x² + 3)⁴ + C. The solution is compared with directly integrating the expanded polynomial. The document also outlines the theory and notation of substitution for indefinite integrals.
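The worked substitution can be verified symbolically by differentiating the antiderivative back to the integrand:

```python
import sympy as sp

x, u = sp.symbols('x u')

# With u = x**2 + 3 (du = 2x dx), the integrand (x**2 + 3)**3 * 4x dx
# becomes 2*u**3 du; integrate in u, substitute back, and check by
# differentiating.
via_u = sp.integrate(2 * u**3, u).subs(u, x**2 + 3)   # (1/2)*(x**2 + 3)**4
check = sp.simplify(sp.diff(via_u, x) - (x**2 + 3)**3 * 4 * x)   # 0
```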
Regularization is used in deep learning to reduce generalization error by modifying the learning algorithm. Common regularization techniques for deep neural networks include:
1) Parameter norm penalties like L2 and L1 regularization that penalize the weights of a network. This encourages simpler models that generalize better.
2) Early stopping which obtains the model parameters at the point of lowest validation error during training, rather than at the end of training.
3) Data augmentation which creates additional fake training data through techniques like translation to improve robustness.
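A parameter norm penalty from item 1 can be made concrete with ridge (L2-penalized) linear regression, the simplest setting where the shrinkage effect is visible; the data and penalty strength below are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ridge regression: minimize ||Xw - y||^2 + lam*||w||^2.  The closed-form
# solution shows how the L2 penalty shrinks the weights toward zero.
X = rng.normal(size=(100, 5))
true_w = np.array([3.0, -2.0, 0.0, 0.0, 1.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

def ridge(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unreg = ridge(X, y, 0.0)     # ordinary least squares
w_reg = ridge(X, y, 10.0)      # L2-penalized weights, smaller in norm
```

The same penalty applied to a deep network's weight matrices is the L2 regularization the list refers to; only the loss being penalized changes.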
1) The document presents theorems for determining the univalence of integral operators of certain forms. Specifically, it obtains conditions for the univalence of operators of the form F(z) and H(z), which involve integrals involving functions g_i(s) that satisfy certain constraints.
2) It proves Theorem 1, which provides conditions on the parameter λ for which the operator F(z) is univalent, and Theorem 2, which provides conditions on the parameters Reλ and Imλ for which the operator H(z) is univalent.
3) The proofs involve applying known results about the Schwarzian derivative and properties of functions g satisfying a stated bound on |g(z)|.
Conceptual approach to measure the potential of Urban Heat Islands from Landuse datasets and Landuse projections - Beniamino Murgante
Christian Daneke, Benjamin Bechtel, Jürgen Böhner,Thomas Langkamp,
Jürgen Oßenbrügge - University Hamburg
Resilient city and seismic risk: a spatial multicriteria approach - Beniamino Murgante
Lucia Tilio, Beniamino Murgante, Francesco Di Trani, Marco Vona, Angelo Masi - University of Basilicata
A Multicriteria Model for Strategic Implementation of Business Process Management - CONFENIS 2012
Ana Carolina Scanavachi Moreira Campos, Ana Paula Costa, Adiel Almeida, Daniela Calabria, A Multicriteria Model for Strategic Implementation of Business Process Management
This document discusses unconditionally stable finite-difference time-domain (FDTD) methods for solving Maxwell's equations numerically. It outlines FDTD algorithms such as Yee's method from 1966 which discretize the equations on a staggered grid. It also discusses the von Neumann stability analysis and compares implicit Crank-Nicolson and alternating-direction implicit methods to conventional explicit FDTD methods. The document notes the advantages of unconditionally stable methods but also mentions potential disadvantages.
Multi-Objective Optimization Algorithms for Finite Element Model Updating. Nt...Evangelos Ntotsios
The document discusses multi-objective optimization algorithms for finite element model updating using measured modal data. It presents different frameworks for structural identification, including weighted modal residuals and multi-objective formulations. Computational issues related to single-objective and multi-objective optimization are discussed. An example application to identify the parameters of a full-scale bridge model using ambient vibration data is also outlined.
Quantifying the call blending balance in two way communication retrial queues...wrogiest
This document describes research into distinguishing between two models (A and B) of call sequences in a call center using a single server. Model A uses a classical retrial rate, while Model B uses a constant retrial rate. The key difference studied is the short-term correlation between incoming and outgoing call types, quantified by the correlation coefficient γ. Numerical examples calculating γ under Model B demonstrate it can be positive when outgoing call activity is limited, positive when the time share of incoming/outgoing calls is matched, and strictly negative when call durations are strongly mismatched. The goal is to compare γ between Models A and B to distinguish the two models.
The document outlines auto-regressive (AR) processes of order p. It begins by introducing AR(p) processes formally and discussing white noise. It then derives the first and second moments of an AR(p) process. Specific details are provided about AR(1) and AR(2) processes, including equations for their variance as a function of the noise variance and AR coefficients. Examples of simulated AR(1) processes are shown for different coefficient values.
Benchmark Calculations of Atomic Data for Modelling ApplicationsAstroAtom
This document summarizes benchmark calculations of atomic data for modeling applications. It discusses numerical methods like close-coupling and distorted-wave approaches for calculating atomic collision data. It provides selected results on energy levels, oscillator strengths, and electron-impact excitation cross sections. It also discusses applications to modeling neon discharges and takes a closer look at ionization calculations and examples. The document concludes by discussing the production and assessment of atomic data and outlines challenges in obtaining reliable data from both experiments and calculations.
PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additi...Taiji Suzuki
The document discusses the aggregated estimator technique for sparse estimation. The aggregated estimator averages over multiple models, each weighted by their risk. This allows fast learning rates without strong assumptions on the design matrix. The technique is applied to sparse regression problems using an exponential screening estimator. The risk bound of this estimator is compared to other estimators like BIC and Lasso, showing it provides a tighter bound.
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
The document summarizes a queueing model with two component mixture of doubly truncated exponential service times. The service time distribution is a two component mixture of doubly truncated exponential distributions, which can characterize heterogeneous and finite range service times. Assuming Poisson arrivals, the embedded Markov chain technique is used to analyze the system. Explicit expressions are derived for performance measures like average number of customers, average waiting time, throughput, and probability of idleness. Numerical analysis studies the sensitivity of performance measures to parameter changes. The model includes two component mixture of exponential, doubly truncated exponential, and exponential service time models as special cases.
This document summarizes an optimization of the TINKER classical molecular dynamics code to improve performance while maintaining readability. It discusses using compiler flags, reducing cache misses, and lookup tables. Compiler optimizations like -O2 improved performance by up to 20%. Summing intermediate values into temporary scalars reduced cache misses and provided an 8% speedup. Pre-computing common mathematical functions like sqrt and exp into lookup tables improved performance further.
The document discusses scalar quantization and the Lloyd-Max algorithm. It provides examples of using the Lloyd-Max algorithm to design scalar quantizers for Gaussian and Laplacian distributed signals. The algorithm works by iteratively calculating decision thresholds and representative levels to minimize mean squared error. At high rates, the distortion-rate function of a Lloyd-Max quantizer is approximated. The document also discusses entropy-constrained scalar quantization and an iterative algorithm to design those quantizers.
This document discusses likelihood methods for continuous-time models in finance. It describes approximating the transition density function pX of a continuous-time process through a series of transformations to get closer to a normal distribution. This allows representing pX as a series expansion involving Hermite polynomials. Computing the expansion coefficients allows obtaining an explicit closed-form approximation to pX. Maximizing the approximate likelihood results in an estimator that converges to the true MLE as the number of terms increases.
Cosmological Perturbations and Numerical SimulationsIan Huston
Talk given at Queen Mary, University of London in March 2010.
Cosmological perturbation theory is well established as a tool for
probing the inhomogeneities of the early universe.
In this talk I will motivate the use of perturbation theory and
outline the mathematical formalism. Perturbations beyond linear order
are especially interesting as non-Gaussian effects can be used to
constrain inflationary models.
I will show how the Klein-Gordon equation at second order, written in
terms of scalar field variations only, can be numerically solved.
The slow roll version of the second order source term is used and the
method is shown to be extendable to the full equation. This procedure
allows the evolution of second order perturbations in general and the
calculation of the non-Gaussianity parameter in cases where there is
no analytical solution available.
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix FormatThomas Mach
Talk given at ENUMATH 2011 in Leicester and GAMM ANLA Workshop 2011 in Bremen. There is a preprint available under http://www.mpi-magdeburg.mpg.de/preprints/index.php
The document presents a multi-frame marked point process model for extracting targets from ISAR (Inverse Synthetic Aperture Radar) image sequences. The model integrates information across frames using priors on target shape persistency and smooth motion. Experiments show the model achieves better target line and center extraction compared to frame-by-frame detection. Future work involves generalizing the model to identify other objects like airplanes and using extracted features for target classification.
The document discusses quantization in analog-to-digital conversion. It describes the three processes of A/D conversion as sampling, quantization, and binary encoding. Quantization involves mapping amplitude values into a set of discrete values using a quantization interval or step size. The document discusses uniform quantization and how the range is divided into equal intervals. It also discusses non-uniform quantization which has smaller intervals near zero to better match real audio signals. Examples and MATLAB code demonstrations are provided to illustrate quantization of audio signals at different bit rates.
The document discusses deep feedforward networks, also known as multilayer perceptrons. It begins with an introduction to feedforward networks, which apply vector-to-vector functions across multiple hidden layers without feedback connections between layers. Each hidden layer consists of units that resemble neurons. The document then covers gradient-based learning, different cost functions, types of output and hidden units like ReLU, and considerations for network architecture such as depth, width, and universal approximation properties.
The document provides an example of using the substitution method to evaluate the indefinite integral ∫(x2 + 3)3 4x dx. It introduces the substitution u = x2 + 3, which allows the integral to be rewritten as ∫u3 2 du and then evaluated as (1/2)u4 = (1/2)(x2 + 3)4. The solution is compared to directly integrating the expanded polynomial. The document outlines the theory and notation of substitution for indefinite integrals.
Regularization is used in deep learning to reduce generalization error by modifying the learning algorithm. Common regularization techniques for deep neural networks include:
1) Parameter norm penalties like L2 and L1 regularization that penalize the weights of a network. This encourages simpler models that generalize better.
2) Early stopping which obtains the model parameters at the point of lowest validation error during training, rather than at the end of training.
3) Data augmentation which creates additional fake training data through techniques like translation to improve robustness.
1) The document presents theorems for determining the univalence of integral operators of certain forms. Specifically, it obtains conditions for the univalence of operators of the form F(z) and H(z), which involve integrals involving functions g_i(s) that satisfy certain constraints.
2) It proves Theorem 1, which provides conditions on the parameter λ for which the operator F(z) is univalent, and Theorem 2, which provides conditions on the parameters Reλ and Imλ for which the operator H(z) is univalent.
3) The proofs involve applying known results about the Schwarzian derivative and properties of functions satisfying the constraint that |g(z)| ≤
Conceptual approach to measure the potential of Urban Heat Islands from Landu...Beniamino Murgante
Conceptual approach to measure the potential of Urban Heat Islands from Landuse datasets and Landuse projections
Christian Daneke, Benjamin Bechtel, Jürgen Böhner, Thomas Langkamp,
Jürgen Oßenbrügge – University of Hamburg
Resilient city and seismic risk: a spatial multicriteria approach – Beniamino Murgante
Resilient city and seismic risk: a spatial multicriteria approach
Lucia Tilio, Beniamino Murgante, Francesco Di Trani, Marco Vona, Angelo Masi - University of Basilicata
A Multicriteria Model for Strategic Implementation of Business Process Manage... – CONFENIS 2012
Ana Carolina Scanavachi Moreira Campos, Ana Paula Costa, Adiel Almeida, Daniela Calabria, A Multicriteria Model for Strategic Implementation of Business Process Management
This document discusses operations research (OR) and its role in managerial decision making. It provides definitions of OR, describes common OR techniques like linear programming, and gives examples of OR applications in areas like production, marketing, finance, and personnel management. It also discusses the evolution of OR over multiple generations and limitations of linear programming as an OR technique. Several examples of linear programming problems in manufacturing and logistics settings are presented to illustrate the use of LP models.
Locating a waste treatment facility by using stochastic – FARID YUNOS
This document discusses locating a new waste treatment facility in Finland using stochastic multicriteria acceptability analysis with ordinal criteria. It describes the problem of current landfills being unable to satisfy waste treatment requirements. Six alternatives were identified but three were discarded, leaving four for analysis. Criteria were established and alternatives were ranked ordinally. Analysis showed Kukkuroinmaki and Herttuanvuori as the best options. Additional constraints supported selecting Kukkuroinmaki as the new waste treatment location. The environmental impact assessment process engaged citizens and considered their opinions.
Transmedia Storytelling - Second Thoughts About Cultural Management – Montecarlo
Slides of my presentation at cultural association "Espais Escrits": a reflection about the changes that technology has brought to society, with special focus on the cultural arena, and how transmedia storytelling articulates a new, more complex discussion.
Biometrics in Government Post-9/11: Advancing Science, Enhancing Operations – Duane Blackburn
This report summarizes key US government initiatives since 2001 to advance biometric science and utilize biometrics to meet operational needs. Major activities include research to improve face, fingerprint, iris, and multimodal biometrics; developing standards; and operational use by DOD, DHS, DOJ, and DOS for applications like border security, law enforcement, intelligence, and access control. Interagency collaboration has been important for driving innovation and achieving interoperability across systems.
Using Game Theory To Gain The Upper Hand During Contract Negotiations 2009 – Mickey Duke
An example of the creative use of "Game Theory" approach to develop a managed care strategy for a multi-hospital system in the southwest United States -- resulting in 85% increase in hospital net revenue per inpatient day over 7 years.
All companies need to operate more effectively than ever before. In the current financial climate, every dollar invested matters, and knowing that your business is operating efficiently is an imperative, but as a manager it is not always easy to know whether your decisions are really the best for your company.
1. The document describes an incident management system for central Arkansas that uses motorist assistance patrols, towing services, and traffic management tools to respond to incidents.
2. It outlines models to optimize response to incidents, including algorithms to allocate response vehicles and route them while minimizing delays.
3. The system aims to reduce the impacts of incidents through detection techniques, appropriate response strategies, and benefit-cost analysis.
The document describes the two phase method for solving linear programming problems. In phase I, artificial variables are introduced to obtain an initial basic feasible solution. The objective is to minimize the artificial variables subject to the original constraints. In phase II, the original objective function is optimized using the feasible solution from phase I as the starting point, without the artificial variables. Two examples are provided to illustrate the two phase method.
Design of 210 Mld Sewage Treatment Plant – ARUN KUMAR
This document provides details on the design of a 210 million liter per day sewage treatment plant. It discusses the need for the plant to treat sewage and prevent pollution. It then describes the three main stages of sewage treatment - primary, secondary, and tertiary treatment. Primary treatment involves removing solids and debris. Secondary treatment uses microorganisms to break down dissolved organic matter. Tertiary treatment further polishes the water with methods like filtration and chlorination before discharge.
The document is the fourth edition of the textbook "Engineering Optimization: Theory and Practice" by Singiresu S. Rao. It covers optimization theory and techniques applied to engineering problems. The book contains chapters on classical optimization methods, linear programming, nonlinear programming, and geometric programming. It provides theoretical background and numerical examples to illustrate optimization concepts and their application to engineering design problems.
Multi criteria decision support system on mobile phone selection with ahp and... – Reza Ramezani
This document proposes using multi-criteria decision making (MCDM) approaches, specifically the Analytic Hierarchy Process (AHP) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to help users select a mobile phone. It outlines the evaluation process, which involves identifying important mobile phone selection criteria, calculating criteria weights using AHP, and then using TOPSIS to rank mobile phone alternatives based on how close they are to an ideal solution and how far they are from a negative ideal solution. The document provides examples of building pairwise comparison matrices in AHP and calculating ideal and non-ideal solutions and alternative distances in TOPSIS to demonstrate the selection approach.
Design of sewerage collection system and cost estimation – Vijay Kumar
Vijay Kumar from the Department of Civil Engineering at Jamia Millia Islamia submitted a report on the design of a sewerage system. The report reviewed the existing sewerage system criteria, designed a new sewerage system, and estimated the costs according to the Delhi Schedule of Rates from 2012. It described the purpose of a sewerage system, different sewer types, sewer appurtenances, design considerations and parameters, hydraulic design of sewer lines from manhole to manhole, and a cost estimate breakdown of the new sewerage system project.
NOTE: Download this file to preview, as the Slideshare preview does not display it properly.
This is an introduction to Linear Programming and a few real world applications are included.
The document provides an outline of topics related to linear programming, including:
1) An introduction to linear programming models and examples of problems that can be solved using linear programming.
2) Developing linear programming models by determining objectives, constraints, and decision variables.
3) Graphical and simplex methods for solving linear programming problems.
4) Using a simplex tableau to iteratively solve a sample product mix problem to find the optimal solution.
The document discusses using simulation with random numbers to model queuing problems. It describes queuing systems as having arrivals, a waiting line, service, and departure components. A single queue-single service point queuing structure is examined, with first-come, first-served queue discipline and random inter-arrival and service times. An example problem simulates 10 customer arrivals at a retail store using random numbers to estimate average waiting time and server idle time percentage. The solution shows calculating arrival and service time probabilities, simulating customer service, and finding a total of 4 minutes of waiting time and 12 minutes of idle time over 53 minutes.
This presentation is an attempt to introduce Game Theory in one session. It's suitable for undergraduates. In practice, it's best used as a taster since only a portion of the material can be covered in an hour - topics can be chosen according to the interests of the class.
The main reference source used was 'Games, Theory and Applications' by L.C.Thomas. Further notes available at: http://bit.ly/nW6ULD
This document summarizes a webinar presentation about adaptive sample size re-estimation for confirmatory time-to-event trials. The presentation discusses a motivating lung cancer trial example and introduces a promising zone design where the sample size is increased only if interim results fall within a promising zone. It demonstrates the design, simulation, and interim monitoring capabilities of East®SurvAdapt software. Key aspects of the adaptive design methodology are discussed, including conditional power calculations, maintaining type 1 error control, and balancing sample size increases with trial duration.
This document discusses unit commitment problems in power systems. It provides motivation for unit commitment by explaining constraints like ramping limits, startup costs, and minimum run times that go beyond economic dispatch. It then describes the basic formulation of static unit commitment as a mixed-integer program with binary variables indicating if a unit is on or off. The document provides an example application and discusses techniques for solving large-scale unit commitment problems, including branch and bound, dynamic programming, and Lagrangian relaxation.
Computational Intelligence for Time Series Prediction – Gianluca Bontempi
This document provides an overview of computational intelligence methods for time series prediction. It begins with introductions to time series analysis and machine learning approaches for prediction. Specific models discussed include autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) processes. Parameter estimation techniques for AR models are also covered. The document outlines applications in areas like forecasting, wireless sensors, and biomedicine and concludes with perspectives on future directions.
Sequential quasi-Monte Carlo (SQMC) is a quasi-Monte Carlo (QMC) version of sequential Monte Carlo (or particle filtering), a popular class of Monte Carlo techniques used to carry out inference in state space models. In this talk I will first review the SQMC methodology as well as some theoretical results. Although SQMC converges faster than the usual Monte Carlo error rate its performance deteriorates quickly as the dimension of the hidden variable increases. However, I will show with an example that SQMC may perform well for some "high" dimensional problems. I will conclude this talk with some open problems and potential applications of SQMC in complicated settings.
This document provides an overview of univariate time series modeling and forecasting. It defines concepts such as stationary and non-stationary processes. It describes autoregressive (AR) and moving average (MA) models, including their properties and estimation. It also discusses testing for autocorrelation and stationarity. The key models covered are AR(p) where the current value depends on p past lags, and MA(q) where the error term depends on q past error terms. Wold's decomposition theorem states that any stationary time series can be represented as the sum of deterministic and stochastic components.
QX Simulator and quantum programming - 2020-04-28 – Aritra Sarkar
This document discusses quantum computing simulation and quantum programming. It notes that directly simulating large quantum systems requires exponential resources, but that smart simulation techniques can reduce these requirements. It introduces the QX quantum computing simulator, including its syntax, functionality for noisy circuits, classical control, and parallelism. The document provides examples of simulating simple circuits and algorithms to demonstrate the QX simulator's capabilities.
Matt Purkeypile's Doctoral Dissertation Defense Slides – mpurkeypile
This document summarizes a doctoral dissertation defense presentation on Cove, a practical quantum computer programming framework. The presentation introduces quantum computing concepts, provides a simple example of Shor's factoring algorithm, discusses challenges with programming quantum computers, and outlines Cove's object-oriented approach which aims to address usability issues with existing solutions by programming against interfaces rather than specific implementations. Cove includes a simulated quantum computer for executing code and provides extensibility, documentation, and handles classical computation through the host language (C#).
This document summarizes a master's thesis that implemented a continuous sequential importance resampling (CSIR) algorithm to estimate predictive densities in stochastic volatility (SV) models. The thesis began with an introduction to relevant econometrics concepts. It then explained SV models and particle filtering approaches. The thesis described implementing and testing functions to develop an R package for CSIR estimation in SV models. Diagnostics and parameter estimates from simulated and real stock return data were reported. The thesis concluded by discussing the package's applications and potential for future development.
Variational quantum gate optimization on superconducting qubit system – HeyaKentaro
This document proposes and experimentally demonstrates a variational quantum gate optimization method on a superconducting qubit system. The method uses a parameterized quantum circuit ansatz, a specialized optimizer, and input-output constraints to achieve fast convergence. Experiments applying this method to generate ZXR gates on a four-qubit superconducting chip achieve fidelities close to the coherence limit, demonstrating improved performance over conventional optimization techniques.
The document provides an overview of Box-Jenkins (ARIMA) methodology for time series modeling and forecasting. It discusses autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) models. It also covers model identification using autocorrelation (ACF) and partial autocorrelation (PACF) functions, as well as model estimation, checking, selection and forecasting. Examples are provided to illustrate the methodology.
Compressed learning for time series classification – 學翰 施
This document proposes a compressed learning framework for time series classification using sparse envelope representations. It introduces compressed sensing concepts and describes creating a sparse envelope for time series by thresholding around the mean and standard deviation. A classification framework is developed using linear SVMs in the compressed domain. Experimental results on benchmark datasets demonstrate effectiveness of the envelope representations compared to state-of-the-art methods, as well as efficiency gains from compression. Real-world case studies on smart home applications show promising identification performance from envelope-based classifiers on sensor time series data.
Quantum computing startup IQM aims to come up with more efficient battery and material designs. This is the 20-slide pitch deck that landed it $128 million in funding.
Plus Slide Backup I: Dilution Refrigerator from Maybell Quantum and Backup II: IQM technical slide
Ilab Metis works on optimizing energy policies through power system simulation and modeling. It uses principles of classical unit commitment modeling, but also accounts for reserves, recourse actions, and network constraints. While this basic model can determine short-term operations, it does not adequately address uncertainties like variable hydro inflows. More advanced techniques model the problem as a multi-stage decision process or reinforcement learning problem to compute optimal policies over long time horizons under uncertainty. Future work may integrate real-time control, game-theoretic approaches, or bilevel optimization to better represent the complex, dynamic nature of modern power systems.
The document presents an overview of model predictive control (MPC) techniques for controlling a water distribution network. It discusses requirements for the MPC including constructing mass-balance models, defining risk-sensitive cost functions, and developing stochastic models. The MPC problem is formulated as an optimization problem that minimizes a cost function subject to constraints. Several solution approaches are outlined, including hierarchical MPC, model reduction, Newton methods, and decomposition methods.
1) Stochastic processes are sequences of random variables indexed by time that evolve randomly over time. The value at each time Xt may depend on previous values.
2) Stochastic processes are characterized by their probability distributions and moments like mean, variance, covariance over time. Stationary processes have these moments unchanged over time.
3) Autocovariance and autocorrelation functions describe the covariance and correlation between values at different times and are important tools for analyzing stationary processes.
Probabilistic Control of Uncertain Linear Systems Using Stochastic Reachability – Leo Asselborn
This presentation proposes an approach to algorithmically synthesize control strategies for set-to-set transitions of discrete-time uncertain systems based on reachable set computations in a stochastic setting. For given Gaussian distributions of the initial states and disturbances, state sets which are reachable to a chosen confidence level under the effect of time-variant control laws are computed by using principles of the ellipsoidal calculus. The proposed algorithm iterates over LMI-constrained semi-definite programming problems to compute probabilistically stabilizing controllers, while ellipsoidal input constraints are considered. An example for illustration is included.
1) The document proposes a cardinality-constrained k-means clustering approach to address practical challenges with standard k-means, such as skewed clustering and sensitivity to outliers.
2) It formulates the problem as a mixed integer nonlinear program (MINLP) and provides a convex relaxation to the problem using semidefinite programming (SDP).
3) The approach provides optimality guarantees and a rounding algorithm to recover an integer feasible solution. Numerical experiments demonstrate competitive performance versus heuristics.
The document discusses numerical methods for solving mathematical problems. It begins by defining numerical methods as algorithms used to obtain numerical solutions when an analytical solution does not exist or is difficult to obtain. It then provides the Navier-Stokes equations as an example of equations that require numerical methods to solve. The document concludes by discussing the importance of numerical accuracy and computation time for numerical methods.
This document describes computational methods for modeling nanoscale biosensors. It discusses using classical beam theory to model 1D carbon nanotube sensors and derive equations relating frequency shift to added mass. The static deformation approximation is used, assuming the nanotube deflects a fixed amount under the attached mass. Analytical expressions are derived and validated against finite element models. Linear and cubic approximations relate frequency shift and mass added.
Bayesian Experimental Design for Stochastic Kinetic Models – Colin Gillespie
In recent years, the use of the Bayesian paradigm for estimating the optimal experimental design has increased. However, standard techniques are computationally intensive for even relatively small stochastic kinetic models. One solution to this problem is to couple cloud computing with a model emulator. By running simulations simultaneously in the cloud, the large design space can be explored. A Gaussian process is then fitted to this output, enabling the optimal design parameters to be estimated.
Similar to Tanfani testi-alvarez presentation final
This document provides a status update on ServiceOntario's implementation of recommendations from a 2013 audit. It finds that ServiceOntario has fully implemented 9 of 21 recommendations aimed at improving service delivery and reducing costs. These include reducing the number of in-person service centres, implementing more efficient staffing mixes, and expanding less costly privately-run centres. Progress has been made on 6 more recommendations, while little progress was made on 3 recommendations and 2 will not be implemented. The status of each recommendation is detailed in the document.
The document argues that coal is the best solution to Chile's energy supply problems. It notes that natural gas will no longer be a viable long-term option due to supply problems with Argentina and Bolivia. Other alternatives such as LNG, nuclear power, and hydroelectricity carry significant risks. Coal, by contrast, has abundant reserves worldwide, can be transported safely, and Chile holds large reserves of Magallanes coal that could satisfy the needs…
This document describes a corporate presentation for Mine Simulator 3.0 software. It discusses typical problems in mining operations like production variability and congestion. It introduces the software as a way to simulate mining operations and optimize resources and production through modeling. The software allows testing scenarios to determine optimal fleet sizes and allocations without disrupting real operations. It generates detailed statistics on production, routes, queues, and more to evaluate performance.
Prediction of the time to complete a series of surgical cases to avoid OR ove... – Rene Alvarez
This study aimed to develop a methodology to accurately predict the time needed to complete a series of surgical cases in order to avoid overutilization of operating room time. The researchers analyzed data on 6,090 cardiac surgeries performed between 2004-2009. They fitted lognormal distributions to surgical times and developed a method based on the Fenton-Wilkinson approximation to estimate the total time of a scheduled series of cases. When tested on 95 actual schedules over 3 months in 2009, the methodology accurately predicted the risk of overtime in most cases and helped minimize overutilization of operating room time.
Process improvement and change management – Rene Alvarez
1) The document discusses issues with dealing with "real world" problems and resistance to change. It advocates using a logical methodology to identify root causes and achieve agreement on problems and solutions.
2) A key part of the methodology is using cause-effect diagrams to achieve agreement on the problem by identifying root causes. Another part is brainstorming to achieve agreement on the direction of the solution.
3) With agreements in place, a detailed solution can be designed through approaches like analyzing current processes, considering new process models, and evaluating alternative solutions. The goal is an improved process that reduces waste.
This document discusses hospital capacity management and different models used. It proposes a more comprehensive simulation model that considers random patient lengths of stay, bed availability, and patient flows to better predict capacity needs. Such a model could improve long and medium-term planning by predicting occupancy levels and evaluating different scenarios. Building an accurate simulation model requires collecting and analyzing hospital-specific data.
1. A TWO-LEVEL RESOLUTION APPROACH FOR
THE STOCHASTIC OR PLANNING PROBLEM
Elena Tànfani, Angela Testi
Department of Economics and Quantitative Methods (DIEM)
University of Genova (Italy)
Rene Alvarez
Centre for Research in Healthcare Engineering
Department of Mechanical and Industrial Engineering
University of Toronto (Canada)
ORAHS 2010 – Genova, Italy
2. Outline
• The problem addressed
• Modelling approach:
• First level deterministic model
• Second level stochastic problem and individual chance constraints
• Robust solutions & safety slacks
• lognormal case
• Monte Carlo simulation
• Application to a case study: preliminary results
• Conclusions and further work
4. Problem addressed
• We deal with the Operating Rooms (ORs) planning problem
• We focus our attention on hospital surgery departments made up of:
• n surgical specialties
• m ORs
• a given planning horizon (usually a week)
5. Assumptions
1. Demand greater than capacity in the planning horizon
2. Block scheduling system
3. Block times cannot be split among specialties
4. Emergency patients use dedicated urgent surgery rooms
6. Operating Rooms Planning & Scheduling
CMPP (Case Mix Planning Problem)
Available OR capacity (block times) should be split among different surgical specialties
MSSP (Master Surgical Schedule Problem)
Each specialty is assigned to a particular block time during the planning horizon
SCAP (Surgical Case Assignment Problem)
Sub-sets of patients are assigned to each block time and sequenced
8. CMPP model
• Q: How many block times to each specialty?
• The solution to the CMPP is determined by means of a Mini-Max programming model
• Objective: leveling the resulting weighted waiting lists of the specialties belonging to the department
• Point of view: hospital
• The solution to the CMPP is used as input to the MSSP and SCAP model (demand constraints)
9. A Mini-Max Model for the CMPP (1)
Minimize  max_{w∈W} (h_w − s_w·y_w)·β_w
subject to
  Σ_w y_w = Q
  y_w ≥ l_w  ∀w
  y_w ≤ u_w  ∀w
  y_w ≥ y_{w+1}  ∀w
  y_w ≥ 0, integer
Where
  y_w are the integer variables (number of block times assigned to specialty w)
  h_w is the waiting list length of specialty w
  s_w is the average service rate of specialty w
  β_w is the average urgency coefficient of patients belonging to specialty w
  Q is the total number of block times available in the planning horizon
  l_w, u_w are lower and upper bounds on the number of block times to specialty w
10. A Mini-Max MIP Model for the CMPP (2)
Minimize  σ
subject to
  Σ_w y_w = Q
  y_w ≥ l_w  ∀w
  y_w ≤ u_w  ∀w
  y_w ≥ y_{w+1}  ∀w
  y_w ≥ 0, integer
  σ_w = (h_w − s_w·y_w)·β_w  ∀w
  σ_w ≤ σ  ∀w
  σ_w, σ ≥ 0
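The Mini-Max CMPP above is small enough to illustrate with a brute-force sketch. This is not the authors' implementation (they solve it as a MIP); it simply enumerates the feasible integer allocations for hypothetical data to show what the model computes:

```python
from itertools import product

def solve_cmpp(h, s, beta, Q, lb, ub):
    """Brute-force the Mini-Max CMPP: choose integer block counts y_w
    minimizing max_w (h_w - s_w*y_w)*beta_w subject to
    sum(y_w) = Q, l_w <= y_w <= u_w and y_w >= y_{w+1}."""
    best_obj, best_y = None, None
    W = len(h)
    for y in product(*[range(lb[w], ub[w] + 1) for w in range(W)]):
        if sum(y) != Q:
            continue
        if any(y[w] < y[w + 1] for w in range(W - 1)):
            continue  # ordering constraint y_w >= y_{w+1}
        # weighted residual waiting list of each specialty
        obj = max((h[w] - s[w] * y[w]) * beta[w] for w in range(W))
        if best_obj is None or obj < best_obj:
            best_obj, best_y = obj, y
    return best_y, best_obj

# hypothetical data: 3 specialties, 10 block times in the week
y, obj = solve_cmpp(h=[40, 30, 20], s=[4, 3, 2],
                    beta=[1.0, 1.2, 0.8], Q=10,
                    lb=[1, 1, 1], ub=[8, 8, 8])
print(y)  # → (5, 4, 1)
```

Enumeration is only viable for a handful of specialties; the MIP reformulation of slide 10 is what scales.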
11. MSSP & SCAP model
• Q: Which day of the week are block times assigned to each sub-specialty?
• Q: Which patients are assigned to each block time?
• The MSSP & SCAP model is formulated as a chance-constrained stochastic model
• Objective: minimizing the weighted waiting time of admitted and still waiting patients
• Point of view: patients
12. A 0-1 programming model for MSSP & SCAP
• Variables:
  x_ikt = 1 if patient i is assigned to OR k on day t, 0 otherwise
  y_wkt = 1 if specialty w is assigned to OR k on day t, 0 otherwise
13. MSSP&SCAP: Deterministic version
Min  Σ_{i=1..n} Σ_{k=1..c} Σ_{t=1..b} x_ikt (t + d_i) β_i + Σ_{i=1..n} [(1 − Σ_{k=1..c} Σ_{t=1..b} x_ikt)(b + 1 + d_i) β_i]
subject to
  Σ_{k=1..c} Σ_{t=1..b} x_ikt ≤ 1   ∀i = 1, 2, …, n
  Σ_{i∈I_h} Σ_{k=1..c} Σ_{t∈T_h} x_ikt = 0   ∀h = 1, 2, …, 5
  Σ_{i∈I_w} x_ikt − P·y_wkt ≤ 0   ∀k = 1, 2, …, c; ∀t = 1, 2, …, b; ∀w = 1, 2, …, m
  Σ_{w=1..m} y_wkt = 1   ∀k = 1, 2, …, c; ∀t = 1, 2, …, b
  Σ_{k=1..c} y_wkt ≤ e_wt   ∀t = 1, 2, …, b; ∀w = 1, 2, …, m
  Σ_{k=1..c} Σ_{t=1..b} y_wkt ≤ y_w   ∀w = 1, 2, …, m   (y_w = solution of the CMPP)
  Σ_{i=1..n} x_ikt p_i ≤ q_kt   ∀k = 1, 2, …, c; ∀t = 1, 2, …, b
  x_ikt ∈ {0,1},  y_wkt ∈ {0,1}
15. Deterministic versus stochastic model
• The solutions of the deterministic model are feasible with respect to each OR block length constraint, i.e. no overtime will occur
• What will happen if random durations are introduced?
16. Deterministic versus stochastic model
Deterministic:  Σ_{i=1..n} x_ikt p_i ≤ q_kt   ∀k = 1, 2, …, c; ∀t = 1, 2, …, b
Stochastic:     Σ_{i=1..n} x_ikt ξ_i ≤ q_kt   ∀k = 1, 2, …, c; ∀t = 1, 2, …, b
where
  q_kt is the length of OR block k on day t
  p_i is the Expected Operating Time (EOT) of patient i
  ξ_i is the stochastic operating time of patient i, with mean expected duration μ_i and standard deviation σ_i
17. The stochastic problem
• The aim is to make decisions that are feasible with a high probability level:
  P( Σ_{i=1..n} x_ikt ξ_i ≤ q_kt ) ≥ (1 − p*)
  where p* is the allowable overtime probability for each block
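As a quick illustration of the individual chance constraint, a candidate block can be checked under a normal approximation of its total workload. This is a simplifying assumption made only for this sketch; the deck itself uses lognormal durations (slide 21 and the Monte Carlo approach):

```python
from math import sqrt
from statistics import NormalDist

def block_is_reliable(mu, sigma, q, p_star):
    """Check P(sum of operating times <= q) >= 1 - p_star,
    approximating the block workload as normal with mean sum(mu_i)
    and variance sum(sigma_i^2) (independent durations assumed)."""
    m = sum(mu)
    s = sqrt(sum(x * x for x in sigma))
    return NormalDist(m, s).cdf(q) >= 1 - p_star

# hypothetical block: three patients in a 6-hour block, p* = 0.10
print(block_is_reliable([1.5, 2.0, 2.0], [0.3, 0.5, 0.5],
                        q=6.0, p_star=0.10))  # → False
```

Here the planned workload (mean 5.5 h, sd ≈ 0.77 h) leaves only ~74% probability of finishing within 6 hours, so this block would need a safety slack.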
19. For those blocks where the probability of staying within the block length is lower than (1 − p*), we calculate safety slacks to be used in a new run of the DETERMINISTIC MODEL

Deterministic model (BASE SOLUTION) → slack times (STOCHASTIC SOLUTION) → iteration → STOP CRITERION: ROBUST SOLUTION
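The feedback loop above can be sketched on a single block. Everything here is illustrative rather than the authors' method: a greedy first level stands in for the deterministic MIP, the stochastic check is a plain Monte Carlo estimate, and the lognormal mean/sd pairs are hypothetical:

```python
import random
from math import log, sqrt

random.seed(1)

def lognormal_params(mean, sd):
    """(mu, sigma) of a lognormal with given arithmetic mean and sd."""
    s2 = log(1 + (sd / mean) ** 2)
    return log(mean) - s2 / 2, sqrt(s2)

def overtime_prob(selected, q, runs=20000):
    """Monte Carlo estimate of P(total operating time > q)."""
    params = [lognormal_params(m, s) for m, s in selected]
    over = sum(
        sum(random.lognormvariate(mu, sg) for mu, sg in params) > q
        for _ in range(runs)
    )
    return over / runs

def plan_block(patients, q, p_star, step=0.25):
    """Iterate: pack patients by EOT into capacity (q - slack),
    then grow the safety slack until the chance constraint holds."""
    slack = 0.0
    while True:
        selected, used = [], 0.0
        for mean, sd in patients:                 # first level (greedy stand-in)
            if used + mean <= q - slack:
                selected.append((mean, sd))
                used += mean
        if overtime_prob(selected, q) <= p_star:  # second-level stochastic check
            return selected, slack
        slack += step                             # safety-slack feedback

patients = [(1.5, 0.3), (2.0, 0.5), (2.5, 0.7), (2.0, 0.5)]
selected, slack = plan_block(patients, q=6.0, p_star=0.10)
print(len(selected), slack)
```

With these numbers the loop drops from a fully packed 6-hour plan to a two-patient plan with a 0.75-hour slack before the 10% overtime target is met, mirroring the base-solution → stochastic-solution iteration of the slide.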
23. Monte Carlo simulation (other distributions)
1. Randomly generate N samples of operating time for each patient i: ε_i1, ε_i2, …, ε_ir, …, ε_iN
2. Using the x_ikt values of the first-level deterministic problem, compute the duration of the schedule for each block time kt and simulation run r:
   L^r_kt = Σ_{i=1..n} ε_ir x_ikt
3. Calculate the (1 − α) percentile point of the series for each block time kt
4. If this (1 − α) percentile point is greater than q_kt, a safety slack time δ_kt is calculated:
   e^(μ_ψkt + z_{1−α} σ_ψkt) − q_kt = δ_kt
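The four steps above condense into a short Monte Carlo routine. This sketch assumes per-patient lognormal (μ, σ) parameters (hypothetical values below) and uses the empirical percentile of the simulated block durations in place of the lognormal closed form:

```python
import random

random.seed(0)

def safety_slack_mc(patients, q, alpha=0.10, N=20000):
    """Steps 1-2: sample N realizations of the block's total operating
    time; step 3: take the empirical (1 - alpha) percentile; step 4:
    return the slack delta = max(0, percentile - q).
    `patients` holds per-patient lognormal (mu, sigma) pairs."""
    totals = sorted(
        sum(random.lognormvariate(mu, sg) for mu, sg in patients)
        for _ in range(N)
    )
    percentile = totals[int((1 - alpha) * N) - 1]
    return max(0.0, percentile - q)

# hypothetical block: two patients in a 5-hour block, alpha = 0.10
print(safety_slack_mc([(0.6, 0.3), (0.8, 0.4)], q=5.0))
```

Because it only needs samples, the same routine covers the "other distributions" case of the slide title: swap `random.lognormvariate` for any duration sampler.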
25. Case study
• Data from Surgery Department, San Martino Public
Hospital, Genova
• 6 Surgical subspecialties (SS)
• # teams available varying between 0 and 3
• Lower and upper bound varying between 2 and 12
• 400 patients on the waiting lists
• 6 ORs
• 6 Hours - block length
• 5 Days planning horizon
• Q=30 block times
26. Case study
Surgery Department, San Martino Public Hospital, Genova
27. Operating times distributions
• Operating times are assumed to follow LOGNORMAL distributions

  EOT Group   Mean   Standard Deviation
      1        1.5          0.3
      2        2.0          0.5
      3        2.5          0.7
      4        3.0          0.9
      5        3.5          1.0
      6        4.0          1.1
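If the table's Mean and Standard Deviation are moments of the operating time itself (an assumption; they might instead already be log-scale parameters), the lognormal parameters used for sampling follow from standard moment matching. A minimal sketch for EOT group 1:

```python
import math
import random
random.seed(0)

def lognormal_params(mean, sd):
    """Return (mu, sigma) of ln(X) for X lognormal with the given mean/sd,
    via the standard identities sigma^2 = ln(1 + (sd/mean)^2),
    mu = ln(mean) - sigma^2 / 2."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)

# EOT group 1 from the table: mean 1.5 h, standard deviation 0.3 h
mu, sigma = lognormal_params(1.5, 0.3)
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
est_mean = sum(samples) / len(samples)
print(round(mu, 3), round(sigma, 3), round(est_mean, 2))
```

The sample mean recovers the table's 1.5 hours, confirming the conversion.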
30. Robust final solution
Weekly schedule grid (Monday–Friday × OR1–OR6): each block is assigned one
surgical subspecialty (SS 1–6) and lists its scheduled patients, each shown
with a three-field code (e.g. [A1-2-3]). 75 patients have been scheduled,
11 patients less than the starting base solution.
31. Outline
• The problem addressed
• Modelling approach:
• First level deterministic model
• Second level stochastic problem
• Robust solutions & safety slacks
• (lognormal case)
• Monte Carlo simulation (other distributions)
• Application to a case study: preliminary results
• Conclusions and further work
32. Conclusions
• When surgery times are random, an optimal deterministic solution can
violate the capacity constraints of the problem
• Our proposal improves the deterministic optimal solution by adding a
safety slack time to each OR–block combination, so that the probability
of overtime is minimized
• The proposed framework can therefore address the main concern of
healthcare managers and be more easily accepted than a deterministic
solution that ignores the realizations of the random variables governing
surgery durations
33. Future work
• Extensive computational experiments are needed to show that the model
converges on a range of real-life instances
• Future work could also test local-search metaheuristics for finding the
deterministic solutions fed into the second-level stochastic model
34. A TWO-LEVEL RESOLUTION APPROACH FOR
THE STOCHASTIC OR PLANNING PROBLEM
Elena Tànfani, Angela Testi
Department of Economics and Quantitative Methods (DIEM)
University of Genova (Italy)
Rene Alvarez
Centre for Research in Healthcare Engineering
Department of Mechanical and Industrial Engineering
University of Toronto (Canada)