This document discusses using simulation modeling to analyze the impact of interdependencies between key departments in a hospital system, including the emergency department (ED), intensive care unit (ICU), operating rooms (OR), and nursing units. It summarizes how modeling each department individually can identify factors influencing performance, such as patient length of stay in the ED and scheduling of elective surgeries in the ICU. The document also provides examples of operational performance criteria used to evaluate the OR and potential simulation models analyzing the impact of changes like adding OR capacity.
This document discusses using management engineering principles to analyze healthcare delivery systems. It provides an example analysis of a hospital system modeled as interdependent subsystems, including the emergency department, intensive care unit, operating rooms, and nursing units. Simulation of the mathematical model revealed important relationships between the subsystems that could inform management decisions. The conclusion advocates using objective data analysis and simulation rather than subjective opinions alone for healthcare management decisions.
This document discusses using an artificial neural network (ANN) model to optimize a machining process. It begins with introductions to design of experiments (DOE) and ANN. It reviews literature on using ANN and response surface methodology to model, predict, and optimize cutting forces, surface finish, and temperature in turning Waspaloy. It also discusses using ANN to predict and control surface roughness in CNC lathe turning of steel and brass. The document proposes using DOE and ANN to optimize an unspecified machining process by selecting machine, material, and cutting parameters, but provides no other details of the proposed experimental plan.
The document discusses measuring the influence of units in two-phase sampling designs. It begins by defining influential units as those with large design weights or values. While a good sampling design can minimize their impact, influential units may still be selected. The double expansion estimator is an unbiased estimator for estimating population totals, but influential units can increase its variance. The document explores measuring a unit's influence through its conditional bias and constructing robust estimators to reduce the impact of influential units. It considers the influence of units that are sampled in one or both phases.
Business Bankruptcy Prediction Based on Survival Analysis Approach (ijcsit)
This document discusses business bankruptcy prediction models using survival analysis. It analyzes companies listed on the Taiwan Stock Exchange from 2003 to 2009. The study uses the Cox proportional hazards model to identify key financial ratios that predict business failure. The model includes profitability, leverage, efficiency, and valuation ratios as predictors. The accuracy of the proposed survival analysis model in classifying business failures is 87.93%. The document also discusses other statistical and machine learning techniques used for business bankruptcy prediction, such as logistic regression, neural networks, and hybrid models.
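The core of the Cox proportional hazards model is its partial likelihood, which compares the firm that fails at each observed failure time against all firms still at risk. As a rough illustration (not the paper's implementation), the partial log-likelihood for a single covariate can be written in a few lines of plain Python; `times`, `events`, and `x` below are invented toy data:

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one covariate.

    times:  observed time for each firm (failure or censoring)
    events: 1 if failure (bankruptcy) was observed, 0 if censored
    x:      covariate value (e.g. a leverage ratio) for each firm
    """
    ll = 0.0
    for i in range(len(times)):
        if events[i]:
            # risk set: firms still "alive" at the failure time of firm i
            risk = [j for j in range(len(times)) if times[j] >= times[i]]
            ll += beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return ll

# A fitted beta maximizes this quantity; here we just compare two candidates.
times  = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]
x      = [1.5, 1.2, 0.3, 0.2]   # high leverage fails early in this toy data
print(cox_partial_loglik(0.0, times, events, x),
      cox_partial_loglik(1.0, times, events, x))
```

Because high-leverage firms fail first in this toy data, a positive beta yields a higher partial log-likelihood than beta = 0, which is the direction a fitted model would move.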
Combining forecasts from different models has been shown to perform better than a single forecast for most time series, so combining forecasts is one way to improve forecast quality. We study the effect of decomposing a series into multiple components and forecasting each component separately. The original series is decomposed into trend, seasonal, and irregular components for each series. Statistical methods such as ARIMA and Holt-Winters are used to forecast these components. In this paper we focus on how the best models of one series can be applied, using association mining, to series with similar frequency patterns. The forecasts of the proposed method have been compared with the Holt-Winters method, and the results are better than those of Holt-Winters.
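The classical additive decomposition into trend, seasonal, and irregular components can be sketched in a few lines: a centered moving average estimates the trend, and per-phase means of the detrended series estimate seasonality. This is a minimal pure-Python sketch with made-up data, not the paper's pipeline:

```python
def decompose_additive(series, period):
    """Split a series into trend, seasonal and irregular components.

    Uses a centered moving average for the trend (odd period assumed)
    and per-phase means of the detrended series for the seasonal indices.
    """
    n, half = len(series), period // 2
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(series[i - half:i + half + 1]) / period

    # seasonal index per phase = mean detrended value at that phase
    buckets = [[] for _ in range(period)]
    for i in range(half, n - half):
        buckets[i % period].append(series[i] - trend[i])
    seasonal = [sum(b) / len(b) if b else 0.0 for b in buckets]
    mean_s = sum(seasonal) / period
    seasonal = [s - mean_s for s in seasonal]   # center to sum to zero

    irregular = [series[i] - trend[i] - seasonal[i % period]
                 for i in range(half, n - half)]
    return trend, seasonal, irregular

# toy series: linear trend plus an exact period-7 weekly pattern
series = [0.5 * t + [3, 1, -2, 0, 2, -1, -3][t % 7] for t in range(42)]
trend, seasonal, irregular = decompose_additive(series, 7)
```

Each component could then be forecast separately (e.g. the trend by ARIMA, the seasonal indices by repetition) and the forecasts recombined by addition.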
The document discusses process orientation in information systems and digitalization. It argues that systems should focus on supporting the processes they enable rather than just managing data. Process-aware information systems that model processes explicitly across the life cycle from design to execution can improve digitalization if the processes are well represented. Process modeling standards like BPMN have evolved significantly since early notations from the 1970s.
Case-Based Reasoning (CBR) is a problem-solving paradigm that in many respects is fundamentally different from other major AI approaches. Instead of relying solely on general knowledge of a problem domain, or making associations along generalized relationships between problem descriptors and conclusions, CBR is able to utilize the specific knowledge of previously experienced, concrete problem situations (cases). A new problem is solved by finding a similar past case and reusing it in the new problem situation. A second important difference is that CBR is also an approach to incremental, sustained learning, since a new experience is retained each time a problem has been solved, making it immediately available for future problems. The CBR field has grown rapidly over the last few years, as seen by its increased share of papers at major conferences, available commercial tools, and successful applications in daily use. A CBR tool should support the four main processes of CBR: retrieval, reuse, revision, and retention.
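The retrieve-reuse-retain cycle can be sketched as a toy case base with nearest-neighbor retrieval (revision would come from external feedback and is omitted; all names and data here are illustrative):

```python
class CaseBase:
    """Minimal case-based reasoner over numeric feature dicts."""

    def __init__(self):
        self.cases = []          # list of (features, solution) pairs

    def retain(self, features, solution):
        self.cases.append((features, solution))

    def retrieve(self, query):
        # nearest stored case by Euclidean distance on the query's features
        def dist(features):
            return sum((features[k] - query[k]) ** 2 for k in query) ** 0.5
        return min(self.cases, key=lambda case: dist(case[0]))

    def solve(self, query):
        _, solution = self.retrieve(query)   # retrieve a similar past case
        proposed = solution                  # reuse it (no adaptation here)
        self.retain(query, proposed)         # retain the new experience
        return proposed

cb = CaseBase()
cb.retain({"cpu": 0.9, "mem": 0.2}, "scale out")
cb.retain({"cpu": 0.1, "mem": 0.95}, "add memory")
print(cb.solve({"cpu": 0.85, "mem": 0.3}))
```

Note how `solve` grows the case base as a side effect: each solved problem immediately becomes retrievable for the next one, which is the incremental-learning property described above.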
The document discusses IBM's CICS Tools portfolio including CICS Interdependency Analyzer, CICS Configuration Manager, and CICS Performance Analyzer. It provides an overview of each tool's functionality and release history. It also lists resources for further information on the tools and related IBM products.
The document discusses two EU projects, INMOTOS and SESMAG, that aim to improve interdependency management of critical infrastructures. INMOTOS developed modeling tools to assess risks to critical infrastructures and validate contingency plans by simulating scenarios. SESMAG aims to define security guidelines and requirements to increase the security and resilience of smart grids through risk assessment and development of an analysis support tool. Both projects seek to enhance coordination and preparedness for critical infrastructure protection across the EU.
Risk and Interdependency in Programs - G. Rotter (Gerhard Rotter)
The document discusses risk management and interdependencies in programs. It describes a global transformation program that implemented a new delivery model with standardized processes and tools. This was done to increase synergies and provide an end-to-end view of projects. The document highlights the differences between project and program risk management, with programs focusing on coordination across projects to optimize benefits. It emphasizes managing interdependencies and risks across the entire program life cycle. Recommendations include enforcing end-to-end process control, defining service level agreements, and automating processes to improve compliance and visibility.
The document discusses the different capabilities of Discovery & Dependency Mapping (DDM) Standard and Advanced options for HP Universal CMDB (UCMDB). DDM Advanced provides all-inclusive discovery of managed IT assets, applications, and dependencies. DDM Standard provides more limited discovery of managed data center assets and optional component-level configuration management database (CCM) discovery. Both DDM Standard and Advanced can discover hosts, applications, and network topology, while only DDM Advanced supports full customization and additional discovery of services, storage, mainframes and virtualization.
Managing Interdependencies in Complex Organizations (Nicolay Worren)
Presentation held at the Organization Design Forum conference in the US, 2006.
For more on this and related topics, see my blog http://www.organizationdesign.net
Plants and animals depend on each other (teach 2nd/3rd grade) (Moira Whitehouse)
This document discusses interdependency between living things. It explains that babies are dependent on others for their needs but the relationship is not interdependent. Animals depend on plants for food, oxygen, and shelter. Plants and animals create an interdependent relationship where plants provide oxygen and food/habitat for animals, and animals in turn provide carbon dioxide and help plants reproduce and disperse seeds. The relationship between plants and animals is one of interdependency.
This document discusses full virtualization techniques. It defines full virtualization as simulating hardware to allow any OS to run unmodified in a virtual machine. It describes the challenges of virtualizing the x86 architecture and how binary translation is used to allow guest OSes to run at a higher privilege level. The document outlines hosted and bare-metal virtualization architectures and their pros and cons. It provides examples of using full virtualization for desktop and server virtualization/cloud computing. It also gives steps to implement hosted full virtualization using Oracle VM VirtualBox on Windows 7.
How Application Discovery and Dependency Mapping can stop you from losing cus... (ManageEngine)
With ever-shortening technology life cycles, change is not only constant but also quite frequent in today’s IT enterprise. But can your business keep up with such rapidly evolving IT? To stay on top of the change management game, you need to know exactly WHAT components constitute your IT setup, exactly WHERE each of them is, HOW they are all interconnected, and WHICH business service depends on each component. With application discovery and dependency mapping (ADDM), you can comprehensively map these interdependencies, not only between the components themselves but also between the components and the business services that rely on them.
To learn more about ADDM listen to Eveline Oehrlich, VP and research director (IT Infrastructure and Operations) of Forrester on our webinar, “How Application Discovery and Dependency Mapping can stop you from losing customers.” Learn:
- What ADDM is, its challenges, and the benefits of adopting this approach
- How you can make better business decisions and use ADDM to recover quickly from application downtime
Also, catch an exclusive preview of the upcoming ADDM feature in ManageEngine Applications Manager.
SHS ASQ 2010 Conference Presentation: Hospital System Patient Flow (Alexander Kolker)
The document discusses using systems engineering principles to improve healthcare delivery. It describes modeling a hospital as interconnected subsystems like the emergency department, intensive care unit, operating rooms, and medical units. The emergency department is analyzed in depth as a case study. A simulation model of patient flow through the emergency department is created to predict how limiting patient length of stay would reduce times when the emergency department must be closed to new patients due to capacity issues. The document advocates applying mathematical modeling and analysis to make more informed management decisions compared to traditional intuitive approaches.
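The kind of patient-flow model described can be sketched as a tiny discrete-event simulation of an emergency department with a fixed number of beds. The arrival rate, length-of-stay (LOS) distribution, and bed count below are invented for illustration, not taken from the study:

```python
import heapq
import random

def simulate_ed(n_patients, mean_interarrival, mean_los, n_beds, seed=1):
    """Return the mean wait (time from arrival until a bed frees up)."""
    rng = random.Random(seed)
    # Poisson arrivals: exponential interarrival times
    t, arrivals = 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)

    free_at = [0.0] * n_beds     # min-heap: when each bed next frees up
    waits = []
    for arrival in arrivals:
        earliest_free = heapq.heappop(free_at)
        start = max(arrival, earliest_free)
        waits.append(start - arrival)
        los = rng.expovariate(1.0 / mean_los)
        heapq.heappush(free_at, start + los)
    return sum(waits) / len(waits)

# Capping mean LOS at 30 vs 60 time units sharply cuts waiting when the
# department is overloaded, mirroring the study's what-if question.
w_long = simulate_ed(500, 10.0, 60.0, 5)
w_short = simulate_ed(500, 10.0, 30.0, 5)
print(w_long, w_short)
```

Running both scenarios with the same seed keeps the arrival stream identical, so the difference in mean wait is attributable to the LOS change alone, which is the point of this kind of what-if analysis.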
This document discusses the history and applications of computer aided drug development (CADD). It begins with a brief history of how computers were first utilized in pharmaceutical research in the 1940s and have since become essential. It then discusses key topics in CADD including pharmacoinformatics, current applications like computer aided drug design, and the use of statistical modeling and parameters in pharmaceutical research. The document provides examples of descriptive and mechanistic modeling approaches and explains concepts like confidence regions, nonlinearity, sensitivity analysis, and population modeling.
Multimodal Ensemble Approach to Incorporate Various Types of Clinical Notes f... (Jinho Choi)
Electronic Health Records (EHRs) have been heavily used to predict various downstream clinical tasks such as readmission or mortality. One of the modalities in EHRs, clinical notes, has not been fully explored for these tasks due to its unstructured and inexplicable nature. Although recent advances in deep learning (DL) enable models to extract interpretable features from unstructured data, they often require a large amount of training data. However, many tasks in medical domains inherently consist of small samples with lengthy documents; for kidney transplant as an example, data from only a few thousand patients are available in major hospitals, and each patient's document consists of a couple of million words. Thus, complex DL methods cannot be applied to these kinds of domains. In this paper, we present a comprehensive ensemble model using vector space modeling and topic modeling. Our proposed model is evaluated on the readmission task for kidney transplant patients and improves the c-statistic by 0.0211 over the previous state-of-the-art approach using structured data, while typical DL methods fail to beat that approach. The proposed architecture provides an interpretable score for each feature from both modalities, structured and unstructured data, which is shown to be meaningful through a physician's evaluation.
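Score-level fusion of two modalities can be illustrated with a tiny weighted ensemble; the weights, patient IDs, and min-max normalization below are illustrative assumptions, and the paper's actual combination scheme may differ:

```python
def ensemble_scores(scores_a, scores_b, weight_a=0.5):
    """Min-max normalize each modality's risk scores, then average them."""
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0          # guard against constant scores
        return {k: (v - lo) / span for k, v in scores.items()}
    norm_a, norm_b = normalize(scores_a), normalize(scores_b)
    return {k: weight_a * norm_a[k] + (1 - weight_a) * norm_b[k]
            for k in norm_a}

# hypothetical readmission-risk scores from two modalities
structured = {"patient1": 0.9, "patient2": 0.1, "patient3": 0.4}
notes      = {"patient1": 0.2, "patient2": 0.8, "patient3": 0.5}
combined = ensemble_scores(structured, notes, weight_a=0.6)
```

Normalizing per modality before averaging keeps one modality's score scale from dominating the other, which is the usual motivation for this kind of fusion.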
Enabling Clinical Data Reuse with openEHR Data Warehouse Environments (Luis Marco Ruiz)
Databases for Clinical Information Systems are difficult to design and implement, especially when the design should be compliant with a formal specification or standard. The openEHR specifications offer a very expressive and generic model for clinical data structures, allowing semantic interoperability and compatibility with other standards like HL7 CDA, FHIR, and ASTM CCR. But openEHR is not only for data modeling: it specifies an EHR computational platform designed to create highly modifiable, future-proof EHR systems and to support long-term, economically viable projects, with a knowledge-oriented approach that is independent of specific technologies. Software developers face great complexity in designing openEHR-compliant databases, since the specifications do not include any guidelines in that area. The authors of this tutorial are developers who had to overcome these challenges. This tutorial will expose different requirements, design principles, technologies, techniques, and main challenges of implementing an openEHR-based clinical database, with examples and lessons learned to help designers and developers overcome the challenges more easily.
Enabling Clinical Data Reuse with openEHR Data Warehouse Environments (Luis Marco Ruiz)
Modern medicine needs methods to enable access to data captured during health care for research, surveillance, decision support, and other reuse purposes. Initiatives like the National Patient Centered Clinical Research Network in the US and the Electronic Health Records for Clinical Research in the EU are facilitating the reuse of Electronic Health Record (EHR) data for clinical research. One of the barriers to data reuse is the integration and interoperability of different Healthcare Information Systems (HIS), owing to the differences among HIS information and terminology models. The use of EHR standards like openEHR can alleviate these barriers by providing a standard, unambiguous, semantically enriched representation of clinical data that enables semantic interoperability and data integration. Few works have been published describing how to drive proprietary data stored in EHRs into standard openEHR repositories. This tutorial provides an overview of the key concepts, tools, and techniques necessary to implement an openEHR-based Data Warehouse (DW) environment to reuse clinical data. We aim to provide insights into data extraction from proprietary sources, transformation into openEHR-compliant instances to populate a standard repository, and access to it using standard query languages and services.
ICU PATIENT DETERIORATION PREDICTION: A DATA-MINING APPROACH (cscpconf)
A huge amount of medical data is generated every day, which presents a challenge in analysing these data. The obvious solution to this challenge is to reduce the amount of data without information loss. Dimension reduction is considered the most popular approach for reducing data size and also to reduce noise and redundancies in data. In this paper, we investigate the effect of feature selection in improving the prediction of patient deterioration in ICUs. We consider lab tests as features. Thus, choosing a subset of features would mean choosing the most important lab tests to perform. If the number of tests can be reduced by identifying the most important tests, then we could also identify the redundant tests. By omitting the redundant tests, observation time could be reduced and early treatment could be provided to avoid the risk. Additionally, unnecessary monetary cost would be avoided. Our approach uses state-of-the-art feature selection for predicting ICU patient deterioration using the medical lab results. We apply our technique on the publicly available MIMIC-II database and show the effectiveness of the feature selection. We also provide a detailed analysis of the best features identified by our approach.
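A minimal univariate filter in the spirit described, ranking lab tests by a t-like separation score, can be sketched as follows (the score and the data are illustrative, not the paper's actual selector):

```python
import statistics

def rank_features(X, y, k):
    """Rank features by |mean difference between classes| / overall std.

    X: list of samples, each a list of lab-test values
    y: binary outcome per sample (1 = deteriorated)
    Returns the indices of the k highest-scoring features.
    """
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        column = [row[j] for row in X]
        pos = [row[j] for row, label in zip(X, y) if label == 1]
        neg = [row[j] for row, label in zip(X, y) if label == 0]
        spread = statistics.pstdev(column) or 1e-12   # avoid divide-by-zero
        scores.append(abs(statistics.mean(pos) - statistics.mean(neg)) / spread)
    return sorted(range(n_features), key=lambda j: -scores[j])[:k]

# toy data: feature 0 separates the classes; feature 1 is noise
X = [[1.0, 5.0], [1.1, 3.0], [0.0, 4.0], [0.1, 6.0]]
y = [1, 1, 0, 0]
print(rank_features(X, y, 1))
```

Dropping the low-scoring features here corresponds to omitting the redundant lab tests, which is the cost- and time-saving argument the abstract makes.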
Advanced Process Simulation Methodology To Plan Facility Renovation (Alexander Kolker)
This document summarizes a case study on using simulation modeling to plan for a surgical suite renovation at Children's Hospital of Wisconsin. The hospital needed to increase surgical capacity to meet growing demand. A project team used simulation to evaluate options for allocating operating rooms and beds across services. Their model found that separating gastroenterology and pulmonary services into their own area with 2-3 procedure rooms and 8-11 beds would best meet goals of minimizing wait times while staying within budget. The renovation is projected to increase patient satisfaction and yield a positive return on investment within 15 years. Ongoing simulation will evaluate the new process over time.
The document provides an overview of a workshop on applying system dynamics methods to understand complex adaptive systems in health. The workshop objectives are to introduce the complex adaptive systems framework, provide hands-on experience with system dynamics software, and discuss how system dynamics can be applied to research. The workshop outline includes introductions to complex adaptive systems and system dynamics, having participants build their own models, and a discussion session.
Usability evaluation of a discrete event based visual hospital management sim... (hiij)
Hospital management is a complex and dynamic organisational challenge. Hospital managers (HMs) are responsible for the effective use of valuable resources and assets, which is a significant issue in healthcare. Due to factors such as the increase in healthcare costs and political pressure, HMs have been compelled to examine new ways to improve efficiency and reduce healthcare delivery costs whilst improving patient satisfaction. Healthcare managers require tools that will allow them to review the current system or identify areas of improvement, and to quantify the possible changes.
This paper covers an evaluation of a hospital simulator developed by the authors. A usability test of the simulator was carried out with hospital managers to provide real-world feedback on the simulator. This has provided lessons to be applied in the development and use of such a tool: for instance, the use of traffic-light colours to assist management of hospital areas, and sensitivity analysis to support multiple or more complex scenarios.
The document discusses a project to analyze and predict sepsis early using clinical data. It aims to predict sepsis 6 hours before clinical diagnosis to allow for earlier treatment. The author handles missing data and class imbalance in a large dataset. Features are engineered and selected. Decision trees and XGBoost models are used for prediction, achieving partial success. Further research is needed on time-series modeling, feature importance, and model performance with a domain expert.
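The 6-hours-early target described can be built by shifting the diagnosis label back in time; this is a hedged sketch in which the hourly 0/1 encoding is an assumption, not the project's actual preprocessing:

```python
def early_warning_labels(sepsis_by_hour, horizon=6):
    """Label hour i positive if sepsis is diagnosed within the next `horizon` hours.

    sepsis_by_hour: 0/1 flag per hour, 1 at (and after) clinical diagnosis
    """
    n = len(sepsis_by_hour)
    return [1 if any(sepsis_by_hour[i:i + horizon + 1]) else 0
            for i in range(n)]

# diagnosis at hour 8 -> positive labels begin at hour 2 (6 hours earlier)
hours = [0] * 12
for h in range(8, 12):
    hours[h] = 1
labels = early_warning_labels(hours, horizon=6)
```

A classifier such as a decision tree or XGBoost would then be trained against these shifted labels, so that a positive prediction at hour i amounts to a warning up to six hours ahead of the clinical diagnosis.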
The document discusses expert systems in artificial intelligence. It describes what an expert system is and its key components, including the knowledge base, inference engine, and user interface. The document provides examples of various expert systems such as MYCIN, DENDRAL, and Watson. It also discusses probability-based expert systems and provides an example of a medical diagnosis expert system.
Model-guided therapy and the role of DICOM in surgery (Klaus19)
1. Model-guided therapy uses patient-specific models to complement image-guided therapy, bringing treatment closer to precise diagnosis, accurate prognosis assessment, and individualized planning and validation of therapy.
2. TIMMS is an IT system that facilitates model-guided therapy through interoperability of data, images, models, and tools to support the therapeutic intervention.
3. Patient-specific models in TIMMS must represent multidimensional and multiscale patient data, interface various system components, and link model components meaningfully while maintaining model accuracy over time.
Strategic Partnership of Healthcare and SE v.2.5.1 (Gary Smith)
Systems engineering approaches can help address challenges in the complex and fragmented US healthcare system. The document outlines problems in the current healthcare system such as high costs and lack of access and integration. It argues that systems engineering principles of managing complexity, systems thinking, and modeling and simulation can help improve efficiency, quality, and outcomes. Examples are given of how systems engineering has been applied in healthcare settings through techniques like Lean, data analysis, and risk management. The document promotes further collaboration between healthcare professionals and systems engineers to solve problems in the field.
Ensuring the feasibility of a $31 million OR expansion project: Capacity plan... (SIMUL8 Corporation)
Ensuring the feasibility of a $31 million OR expansion project: Capacity planning, system design, and patient flow
Presenter: Todd Roberts, Memorial Health System
The second workshop in our series will look at a recent project at Memorial Health System (MHS) in Illinois.
Todd Roberts, System Director of Operations Improvement at MHS, will discuss and demonstrate the use of discrete simulation modeling to analyze floor design and throughput for a new Rapid Clinical Examination provider model in a 70,000-annual-visit, Level I trauma center emergency department at a 507-bed tertiary urban academic medical center. He will also cover patient flow for all aspects of the architectural design proposal for the $31 million operating room expansion project, including pre-op admission, transport to the OR, OR time, and post-anesthesia care units (PACU) for admitted and outpatient surgery.
Through the use of discrete simulation modeling, Memorial has reduced length of stay for non-admitted patients in the emergency department by 27%, reduced the percentage of patients leaving without treatment by 50%, and reduced admit hold time by 37%, while improving patient satisfaction from the 57th to the 99th percentile (Press Ganey).
In addition, Memorial has used simulation to determine the appropriate facilities layout for its new OR expansion project, determining that optimizing the flow of traffic will lead to a reduction of 30 minutes per case in wasted movement and waiting.
The document discusses two EU projects, INMOTOS and SESMAG, that aim to improve interdependency management of critical infrastructures. INMOTOS developed modeling tools to assess risks to critical infrastructures and validate contingency plans by simulating scenarios. SESMAG aims to define security guidelines and requirements to increase the security and resilience of smart grids through risk assessment and development of an analysis support tool. Both projects seek to enhance coordination and preparedness for critical infrastructure protection across the EU.
Risk and Interdependency in Programs - G. RotterGerhard Rotter
The document discusses risk management and interdependencies in programs. It describes a global transformation program that implemented a new delivery model with standardized processes and tools. This was done to increase synergies and provide an end-to-end view of projects. The document highlights the differences between project and program risk management, with programs focusing on coordination across projects to optimize benefits. It emphasizes managing interdependencies and risks across the entire program life cycle. Recommendations include enforcing end-to-end process control, defining service level agreements, and automating processes to improve compliance and visibility.
The document discusses the different capabilities of Discovery & Dependency Mapping (DDM) Standard and Advanced options for HP Universal CMDB (UCMDB). DDM Advanced provides all-inclusive discovery of managed IT assets, applications, and dependencies. DDM Standard provides more limited discovery of managed data center assets and optional component-level configuration management database (CCM) discovery. Both DDM Standard and Advanced can discover hosts, applications, and network topology, while only DDM Advanced supports full customization and additional discovery of services, storage, mainframes and virtualization.
Managing Interdependencies in Complex Organizations - Nicolay Worren
Presentation held at the Organization Design Forum conference in the US, 2006.
For more on this and related topics, see my blog http://www.organizationdesign.net
Plants and animals depend on each other. (teach 2nd/3rd grade) - Moira Whitehouse
This document discusses interdependency between living things. It explains that babies are dependent on others for their needs but the relationship is not interdependent. Animals depend on plants for food, oxygen, and shelter. Plants and animals create an interdependent relationship where plants provide oxygen and food/habitat for animals, and animals in turn provide carbon dioxide and help plants reproduce and disperse seeds. The relationship between plants and animals is one of interdependency.
This document discusses full virtualization techniques. It defines full virtualization as simulating hardware to allow any OS to run unmodified in a virtual machine. It describes the challenges of virtualizing the x86 architecture and how binary translation is used to allow guest OSes to run at a higher privilege level. The document outlines hosted and bare-metal virtualization architectures and their pros and cons. It provides examples of using full virtualization for desktop and server virtualization/cloud computing. It also gives steps to implement hosted full virtualization using Oracle VM VirtualBox on Windows 7.
How Application Discovery and Dependency Mapping can stop you from losing cus... - ManageEngine
With ever shortening technology life cycles, change is not only constant but also quite frequent in today’s IT enterprise. But can your business keep up with such rapidly evolving IT? To stay on top of the change management game, you need to know exactly WHAT components constitute your IT setup, exactly WHERE each of them is, HOW they all are interconnected, and WHICH business service depends on each component. With application discovery and dependency mapping (ADDM), you can comprehensively map these interdependencies not only between the components themselves but also between the components and the business services that rely on them.
To learn more about ADDM listen to Eveline Oehrlich, VP and research director (IT Infrastructure and Operations) of Forrester on our webinar, “How Application Discovery and Dependency Mapping can stop you from losing customers.” Learn:
- What ADDM is, its challenges, and the benefits of adopting this approach
- How you can make better business decisions and use ADDM to recover quickly from application downtime
Also, catch an exclusive preview of the upcoming ADDM feature in ManageEngine Applications Manager.
SHS ASQ 2010 Conference Presentation: Hospital System Patient Flow - Alexander Kolker
The document discusses using systems engineering principles to improve healthcare delivery. It describes modeling a hospital as interconnected subsystems like the emergency department, intensive care unit, operating rooms, and medical units. The emergency department is analyzed in depth as a case study. A simulation model of patient flow through the emergency department is created to predict how limiting patient length of stay would reduce times when the emergency department must be closed to new patients due to capacity issues. The document advocates applying mathematical modeling and analysis to make more informed management decisions compared to traditional intuitive approaches.
This document discusses the history and applications of computer aided drug development (CADD). It begins with a brief history of how computers were first utilized in pharmaceutical research in the 1940s and have since become essential. It then discusses key topics in CADD including pharmacoinformatics, current applications like computer aided drug design, and the use of statistical modeling and parameters in pharmaceutical research. The document provides examples of descriptive and mechanistic modeling approaches and explains concepts like confidence regions, nonlinearity, sensitivity analysis, and population modeling.
Multimodal Ensemble Approach to Incorporate Various Types of Clinical Notes f... - Jinho Choi
Electronic Health Records (EHRs) have been heavily used to predict various downstream clinical tasks such as readmission or mortality. One of the modalities in EHRs, clinical notes, has not been fully explored for these tasks due to its unstructured and inexplicable nature. Although recent advances in deep learning (DL) enable models to extract interpretable features from unstructured data, they often require a large amount of training data. However, many tasks in medical domains inherently consist of small-sample data with lengthy documents; for a kidney transplant, as an example, data from only a few thousand patients are available, and each patient's document consists of a couple of million words in major hospitals. Thus, complex DL methods cannot be applied to these kinds of domains. In this paper, we present a comprehensive ensemble model using vector space modeling and topic modeling. Our proposed model is evaluated on the readmission task of kidney transplant patients and improves on the previous state-of-the-art approach using structured data by 0.0211 in terms of c-statistics, while typical DL methods fail to beat that approach. The proposed architecture provides an interpretable score for each feature from both modalities, structured and unstructured data, which is shown to be meaningful through a physician's evaluation.
Enabling Clinical Data Reuse with openEHR Data Warehouse Environments - Luis Marco Ruiz
Databases for Clinical Information Systems are difficult to design and implement, especially when the design should be compliant with a formal specification or standard. The openEHR specifications offer a very expressive and generic model for clinical data structures, allowing semantic interoperability and compatibility with other standards like HL7 CDA, FHIR, and ASTM CCR. But openEHR is not only for data modeling: it specifies an EHR Computational Platform designed to create highly modifiable, future-proof EHR systems and to support long-term, economically viable projects, with a knowledge-oriented approach that is independent of specific technologies. Software developers face great complexity in designing openEHR-compliant databases, since the specifications do not include any guidelines in that area. The authors of this tutorial are developers who had to overcome these challenges. This tutorial will expose different requirements, design principles, technologies, techniques and main challenges of implementing an openEHR-based Clinical Database, with examples and lessons learned to help designers and developers overcome the challenges more easily.
Enabling Clinical Data Reuse with openEHR Data Warehouse Environments - Luis Marco Ruiz
Modern medicine needs methods to enable access to data captured during health care for research, surveillance, decision support and other reuse purposes. Initiatives like the National Patient Centered Clinical Research Network in the US and the Electronic Health Records for Clinical Research in the EU are facilitating the reuse of Electronic Health Record (EHR) data for clinical research. One of the barriers to data reuse is the integration and interoperability of different Healthcare Information Systems (HIS), caused by the differences among HIS information and terminology models. The use of EHR standards like openEHR can alleviate these barriers by providing a standard, unambiguous, semantically enriched representation of clinical data to enable semantic interoperability and data integration. Few works have been published describing how to drive proprietary data stored in EHRs into standard openEHR repositories. This tutorial provides an overview of the key concepts, tools and techniques necessary to implement an openEHR-based Data Warehouse (DW) environment to reuse clinical data. We aim to provide insights into data extraction from proprietary sources, transformation into openEHR-compliant instances to populate a standard repository, and access to it using standard query languages and services.
ICU Patient Deterioration Prediction: A Data-Mining Approach - csandit
A huge amount of medical data is generated every day, which presents a challenge in analysing these data. The obvious solution to this challenge is to reduce the amount of data without information loss. Dimension reduction is considered the most popular approach for reducing data size and also to reduce noise and redundancies in data. In this paper, we investigate the effect of feature selection in improving the prediction of patient deterioration in ICUs. We consider lab tests as features. Thus, choosing a subset of features would mean choosing the most important lab tests to perform. If the number of tests can be reduced by identifying the most important tests, then we could also identify the redundant tests. By omitting the redundant tests, observation time could be reduced and early treatment could be provided to avoid the risk. Additionally, unnecessary monetary cost would be avoided. Our approach uses state-of-the-art feature selection for predicting ICU patient deterioration using the medical lab results. We apply our technique on the publicly available MIMIC-II database and show the effectiveness of the feature selection. We also provide a detailed analysis of the best features identified by our approach.
ICU PATIENT DETERIORATION PREDICTION: A DATA-MINING APPROACH - cscpconf
A huge amount of medical data is generated every day, which presents a challenge in analysing these data. The obvious solution to this challenge is to reduce the amount of data without information loss. Dimension reduction is considered the most popular approach for reducing data size and also to reduce noise and redundancies in data. In this paper, we investigate the effect of feature selection in improving the prediction of patient deterioration in ICUs. We consider lab tests as features. Thus, choosing a subset of features would mean choosing the most important lab tests to perform. If the number of tests can be reduced by identifying the most important tests, then we could also identify the redundant tests. By omitting the redundant tests, observation time could be reduced and early treatment could be provided to avoid the risk. Additionally, unnecessary monetary cost would be avoided. Our approach uses state-of-the-art feature selection for predicting ICU patient deterioration using the medical lab results. We apply our technique on the publicly available MIMIC-II database and show the effectiveness of the feature selection. We also provide a detailed analysis of the best features identified by our approach.
Advanced Process Simulation Methodology To Plan Facility Renovation - Alexander Kolker
This document summarizes a case study on using simulation modeling to plan for a surgical suite renovation at Children's Hospital of Wisconsin. The hospital needed to increase surgical capacity to meet growing demand. A project team used simulation to evaluate options for allocating operating rooms and beds across services. Their model found that separating gastroenterology and pulmonary services into their own area with 2-3 procedure rooms and 8-11 beds would best meet goals of minimizing wait times while staying within budget. The renovation is projected to increase patient satisfaction and yield a positive return on investment within 15 years. Ongoing simulation will evaluate the new process over time.
The document provides an overview of a workshop on applying system dynamics methods to understand complex adaptive systems in health. The workshop objectives are to introduce the complex adaptive systems framework, provide hands-on experience with system dynamics software, and discuss how system dynamics can be applied to research. The workshop outline includes introductions to complex adaptive systems and system dynamics, having participants build their own models, and a discussion session.
Usability evaluation of a discrete event based visual hospital management sim... - hiij
Hospital Management is a complex and dynamic organisational challenge. Hospital managers (HMs) are responsible for the effective use of valuable resources and assets, which is a significant issue in healthcare. Due to factors such as the increase in health care costs and political pressure, HMs have been compelled to examine new ways to improve efficiency and reduce healthcare delivery costs whilst improving patient satisfaction. Healthcare managers require tools that will allow them to review the current system, identify areas of improvement and quantify the possible changes.
This paper covers an evaluation of a hospital simulator developed by the authors. A usability test of the simulator was carried out with hospital managers to provide real-world feedback. This has provided lessons to be applied in the development and use of such a tool: for instance, the use of traffic-light colours to assist management of hospital areas, and sensitivity analysis to support multiple or more complex scenarios.
The document discusses a project to analyze and predict sepsis early using clinical data. It aims to predict sepsis 6 hours before clinical diagnosis to allow for earlier treatment. The author handles missing data and class imbalance in a large dataset. Features are engineered and selected. Decision trees and XGBoost models are used for prediction, achieving partial success. Further research is needed on time-series modeling, feature importance, and model performance with a domain expert.
The document discusses expert systems in artificial intelligence. It describes what an expert system is and its key components, including the knowledge base, inference engine, and user interface. The document provides examples of various expert systems such as MYCIN, DENDRAL, and Watson. It also discusses probability-based expert systems and provides an example of a medical diagnosis expert system.
Model guided therapy and the role of DICOM in surgery - Klaus19
1. Model-guided therapy uses patient-specific models to complement image-guided therapy, bringing treatment closer to precise diagnosis, accurate prognosis assessment, and individualized planning and validation of therapy.
2. TIMMS is an IT system that facilitates model-guided therapy through interoperability of data, images, models, and tools to support the therapeutic intervention.
3. Patient-specific models in TIMMS must represent multidimensional and multiscale patient data, interface various system components, and link model components meaningfully while maintaining model accuracy over time.
Strategic Partnership of Healthcare and SE v.2.5.1 - Gary Smith
Systems engineering approaches can help address challenges in the complex and fragmented US healthcare system. The document outlines problems in the current healthcare system such as high costs, lack of access and integration. It argues that systems engineering principles of managing complexity, systems thinking, modeling and simulation can help improve efficiency, quality and outcomes. Examples are given of how systems engineering has been applied in healthcare settings through techniques like Lean, data analysis and risk management. The document promotes further collaboration between healthcare professionals and systems engineers to problem solve issues in the field.
Ensuring the feasibility of a $31 million OR expansion project: Capacity plan... - SIMUL8 Corporation
Ensuring the feasibility of a $31 million OR expansion project: Capacity planning, system design, and patient flow
Presenter: Todd Roberts, Memorial Health System
The second workshop in our series will look at a recent project at Memorial Health System (MHS) in Illinois.
Todd Roberts, System Director of Operations Improvement at MHS, will discuss and demonstrate the use of discrete simulation modeling to analyze: (1) floor design and throughput for a new Rapid Clinical Examination provider model at a 70,000-annual-visit, Level I trauma center emergency department within a 507-bed, tertiary, urban, academic medical center; and (2) patient flow for all aspects of the architectural design proposal for a $31 million operating room expansion project, including pre-op admission, transport to the OR, OR time, and post-anesthesia care units (PACU) for admitted and outpatient surgery.
Through the use of discrete simulation modeling, Memorial has reduced length of stay for non-admitted patients in the emergency department by 27%, reduced the percentage of patients leaving without treatment by 50%, and reduced admit hold time by 37%, while improving patient satisfaction from the 57th to the 99th percentile (Press Ganey).
In addition, Memorial has used simulation to determine the appropriate facilities layout for its new OR expansion project, determining that optimizing the flow of traffic will lead to a reduction of 30 minutes per case in wasted movement and waiting.
Presented a cutting-edge paper on using a deep learning approach to learn from EHR (electronic health record) data, in a clinical informatics journal club at UAB, October 2018.
Low Complexity System Designs for Medical Cyber Physical Human Systems - MDPnP_UIUC
Prepare and inject drugs, assist with medical procedures
Head nurse: Record diagnosis, treatments and patient conditions
Physician in charge: Diagnose patient condition and order treatments
Medical devices: Monitor and display patient conditions
Code sheet: Record diagnosis, treatments and patient conditions
This model helps understand the workflow and information flow.
HEALTH PREDICTION ANALYSIS USING DATA MININGAshish Salve
The healthcare industry relies heavily on hypotheses that must be tested and verified through various examinations, with patients depending on their doctor's knowledge of the topic. We therefore built a system that uses data mining techniques to predict a person's health based on various medical test results. The system is currently designed only for heart conditions; for training it uses the Statlog (Heart) Data Set from the UCI Machine Learning Repository, which includes attributes such as age, sex, chest pain type, cholesterol, sugar, and outcomes. Only a few general inputs are needed to generate a prediction. The prediction results from all algorithms are merged by calculating their mean value, which gives the final outcome of the prediction process, all of which runs in the background.
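The merging step described above, combining per-algorithm predictions by their mean, can be sketched in a few lines. This is a minimal illustration, not the project's actual code; the probability values and the 0.5 decision threshold are assumptions for the example.

```python
def ensemble_risk_score(probabilities):
    """Merge the predicted probabilities from several algorithms by
    taking their mean, then apply a decision threshold of 0.5.

    Returns (mean probability, boolean at-risk flag).
    """
    mean_p = sum(probabilities) / len(probabilities)
    return mean_p, mean_p >= 0.5

# Hypothetical outputs of three classifiers for one patient
score, at_risk = ensemble_risk_score([0.62, 0.48, 0.71])
```

Averaging smooths out the disagreement between individual models: here one classifier votes below the threshold, but the ensemble mean still flags the patient.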
This document describes a patient management system project for a university. The system aims to automate a hospital's manual patient record keeping system. It will computerize patient, doctor, and hospital details to make record keeping more efficient. The system will allow scheduling appointments, tracking medical bills and patient rooms. It will generate reports on patient information and utilize databases to store records. Diagrams including data flow diagrams and entity-relationship diagrams are provided to illustrate the system's design and data structure.
A New Real Time Clinical Decision Support System Using Machine Learning for C...IRJET Journal
This document presents a new real-time clinical decision support system using machine learning for critical care units. The system aims to predict mean arterial pressure (MAP) status in real-time at the bedside without requiring an offline training phase. It uses a two-stage machine learning framework: Stage I uses hierarchical temporal memory and online learning to process data streams and make unsupervised predictions in real-time. Stage II is a long short-term memory classifier that predicts future MAP status based on Stage I predictions. The system is evaluated and found to outperform logistic regression models in terms of accuracy, recall, precision and area under the receiver operating characteristic curve.
Similar to Effect Of Interdependency On Hospital Wide Patient Flow (20)
The purpose of this presentation is providing an overview of the main approaches in using big data: data focus vs. business analytics focus. The following topics will be covered:
- Why getting data should not be a starting point in business analytics, and why more data does not always result in more accurate predictions
- The simulation analytics methodology in comparison to machine learning and data science approach
- Examples of two business cases:
(i) Healthcare: Pediatric Triage in a Severe Pandemic-Maximizing Population Survival by Establishing Admission Thresholds
(ii) Banking & Finance: Analysis of the staffing and utilization of a team of mutual fund analysts for timely production of ‘buy-sell’ reports
Many resources discuss machine learning and data analytics from a technology deployment perspective. From the business standpoint, however, the real value of analytics is in the methodology for solving some systemic holistic problems, rather than a specific technology or platform.
In this presentation, the focus is shifted from the technology deployment to the analytics methodology for solving some holistic business problems. Two examples will be covered in detail:
(i) Analysis of the performance and the optimal staffing of a team of doctors, nurses, and technicians for a large local hospital unit using discrete event simulation, with a live demonstration. This simulation methodology is not included in most Machine Learning algorithm libraries.
(ii) Identifying a few factors (or variables) that contribute most to the financial outcome of a local hospital using principal component decomposition (PCD) of the large observational dataset of population demographic and disease prevalence.
DEA is a technique that measures the efficiency of decision-making units (DMUs) that use multiple inputs to produce multiple outputs. It defines an efficiency score for each DMU as a weighted sum of outputs divided by a weighted sum of inputs, with all scores restricted to a range of 0 to 1. DEA calculates efficiency scores by choosing input/output weights that maximize each DMU's score, presenting it in the best possible light relative to its peers. Strengths of DEA include its ability to handle multiple inputs/outputs without assuming a functional form and directly compare DMUs against peers or combinations of peers.
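The DEA idea above, each DMU's efficiency as a weighted-output-to-weighted-input ratio normalized against its peers, is easiest to see in the degenerate single-input, single-output case, where the CCR score reduces to each unit's output/input ratio divided by the best ratio observed. The sketch below uses that simplification (the full model solves one linear program per DMU); the clinic names and figures are hypothetical.

```python
def dea_single_ratio_efficiency(units):
    """Efficiency scores for DMUs with one input and one output.

    With a single input and output, the CCR efficiency score reduces to
    each unit's output/input ratio divided by the best observed ratio,
    so every score lies in (0, 1] and the best unit scores exactly 1.
    """
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical clinics: (nursing hours as input, patient visits as output)
clinics = {"A": (100, 500), "B": (80, 320), "C": (120, 720)}
scores = dea_single_ratio_efficiency(clinics)
```

Clinic C produces 6 visits per nursing hour, the best ratio, so it scores 1; A and B are scored relative to that frontier, which mirrors DEA's "best possible light relative to peers" interpretation.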
This document describes a study conducted at Froedtert Hospital to develop a predictive model of emergency department operations and the effect of patient length of stay on ED diversion. The study analyzed patient length of stay data, developed an ED simulation model, and used the model to test scenarios with different upper limits on length of stay. The model predicted that ED diversion could be reduced to around 0.5% by limiting discharged patients' length of stay to 5 hours and admitted patients' length of stay to 6 hours.
This document describes using process modeling simulation to analyze the effect of daily leveling of elective surgeries on ICU diversion rates at a hospital. The simulation models the patient flow through different units like the ICU, OR, and ED. Currently, elective surgeries are scheduled without considering ICU capacity, leading to periods of high utilization and ICU diversion. The simulation analyzes scenarios where elective case limits are set each day, smoothing out utilization across days and reducing ICU diversion times. Initial results show imposing daily caps of 5 cases for one unit and 4 for another reduces scheduling variability by around 20-28% compared to the current practice.
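The load-leveling mechanism described above, capping the number of elective cases scheduled each day and deferring the overflow, can be sketched as a simple carry-over queue. This is an illustrative toy, not the hospital's actual scheduling model; the request stream and the cap of 5 cases are assumed values.

```python
import statistics

def level_elective_load(daily_requests, cap):
    """Apply a daily cap to elective-surgery requests, deferring overflow.

    Cases above the cap are carried over to the next day's queue, which
    smooths the day-to-day scheduled load without losing any cases.
    """
    scheduled, backlog = [], 0
    for requested in daily_requests:
        queue = requested + backlog
        todays = min(queue, cap)
        backlog = queue - todays
        scheduled.append(todays)
    return scheduled

requests = [7, 2, 8, 1, 6, 3, 9, 2, 5, 4]   # hypothetical daily case requests
smoothed = level_elective_load(requests, cap=5)
before = statistics.pstdev(requests)
after = statistics.pstdev(smoothed)
```

The total number of cases is preserved while the day-to-day standard deviation drops sharply, which is the variability reduction that drives the lower ICU diversion in the study.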
This document provides an outline and overview of a course on healthcare administration and delivery systems. It discusses the following key points:
- The course will introduce quantitative decision-making methods in healthcare management and apply techniques like forecasting, optimization, and simulation to address challenges in the healthcare system.
- Traditional management has relied on intuition but incorporating quantitative methods can help address problems in a systematic way.
- The roles and responsibilities of healthcare managers have become more visible and important given issues around costs, access, and quality in the system.
- A background in both healthcare and business administration is valuable for medical and health services managers.
This document provides details about a graduate course on healthcare administration and delivery systems, including its objectives, topics, assignments, and evaluation criteria. The course uses lectures, discussions, and exercises to teach students how to apply quantitative techniques like forecasting, optimization, simulation, and analytics to decision-making in healthcare. The goal is to help students develop skills in using data-driven methods for planning, managing, and evaluating healthcare programs and organizations. The course meets weekly and includes a midterm and final exam that evaluate students' problem-solving abilities and understanding of operational challenges in healthcare settings.
This document discusses various frameworks for optimizing healthcare staffing levels with variable patient demand. It begins by outlining different approaches including the newsvendor framework, linear optimization, and discrete event simulation. The newsvendor framework is then explained in more detail, showing how to calculate optimal staffing levels by balancing the costs of over- and under-staffing based on historical demand data. Key points are that the optimal level may be higher or lower than the average depending on costs, and it provides a trade-off between having too many or too few nurses on staff at a given time.
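The newsvendor trade-off described above has a standard closed form: the optimal staffing level is the smallest level s such that P(demand <= s) reaches the critical ratio Cu / (Cu + Co), where Cu is the unit cost of under-staffing and Co of over-staffing. A minimal sketch against an empirical demand distribution follows; the census numbers and cost values are hypothetical.

```python
def newsvendor_staffing(census_history, cost_under, cost_over):
    """Return the staffing level balancing under- and over-staffing costs.

    The optimal level is the smallest observed demand s such that the
    empirical probability P(demand <= s) reaches the critical ratio
    cost_under / (cost_under + cost_over).
    """
    critical_ratio = cost_under / (cost_under + cost_over)
    demands = sorted(census_history)
    n = len(demands)
    for i, d in enumerate(demands, start=1):
        if i / n >= critical_ratio:
            return d
    return demands[-1]

# Hypothetical nightly census counts for a nursing unit
census = [18, 20, 22, 19, 21, 24, 23, 20, 22, 25]
# Under-staffing (agency/overtime) assumed 3x costlier than over-staffing
level = newsvendor_staffing(census, cost_under=3.0, cost_over=1.0)
```

With under-staffing three times costlier, the critical ratio is 0.75 and the answer is the 75th-percentile census, above the average, illustrating the document's point that the optimal level depends on the cost asymmetry, not just mean demand.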
The document discusses using discrete event simulation (DES) to analyze capacity and plan renovations for a hospital's surgical suite. It provides an example where DES was used to simulate different scenarios for renovating the Children's Hospital of Wisconsin's surgical facilities. The simulation analyzed patient wait times and resource needs under each scenario. The output recommended scenario 3 and reallocating beds to meet performance criteria for wait times.
The document discusses data science, data analytics, and their application in hospital operations management. It states that data science and analytics strive to transform raw data into actionable business decisions using quantitative methods. Various types of analytics are described like descriptive, predictive, and prescriptive analytics. Examples of applying different analytical methods to common business problems in healthcare are provided, such as using simulation for capacity planning and optimization for resource allocation. The key is integrating analytics into decision-making processes to create value for customers.
Primary care clinics: managing physician patient panels - Alexander Kolker
OUTLINE
• Traditional scheduling and the advanced access at a primary care clinic
• Uncertainties that should be considered when patients are scheduled
• Decisions that need to be made for designing an appointment system
• Practice on using the panel size calculator
• Emerging Trends in Primary Care:
Staffing with variable demand in healthcare settings - Alexander Kolker
Outline
Main Concept and Some Definitions.
The “newsvendor” framework approach.
Staffing a nursing unit with variable census (demand)
Linear optimization framework approach.
Minimizing staffing cost subject to variable constraints
Discrete event simulation framework approach.
Staffing a unit with cross-trained staff
Key Points and Conclusions
Staffing Decision-Making Using Simulation Modeling - Alexander Kolker
The use of Management Engineering methodology for staffing decision-making.
• Part 1 - Quality and Cost: Outpatient Flu Clinic.
• Part 2 - Quality and Cost: Optimal PACU Nursing Staffing.
• Summary of Fundamental Management Engineering
1) The Child Protection Center (CPC) evaluated children who may have been abused and aimed to reduce patient wait times which were perceived to be due to staff shortages.
2) A discrete event simulation model was developed to analyze current patient flow and identify bottlenecks. It found the sexual abuse exam room and medical assistants were causing most delays.
3) The best scenario found was adding 0.6 full-time equivalent medical assistant in the afternoon and changing the exam room configuration to one exam room and two sexual abuse exam rooms. This significantly reduced total patient wait times.
SHS_ASQ 2010 Conference: Poster The Use of Simulation for Surgical Expansion ... - Alexander Kolker
Children's Hospital of Wisconsin is planning a major expansion and renovation of its surgical suite to increase capacity. Computer simulation models were developed to analyze three expansion scenarios and determine the optimal design. Model 3 was selected as the best option, as it would separate gastroenterology and pulmonary services into their own area with 2-3 procedure rooms and 8-11 pre/postoperative beds, while meeting all performance criteria for patient wait times and OR utilization through 2013. The simulations accounted for patient volume flow, limited system capacity, and the balance needed between these factors for efficient patient throughput.
Here is a high-level layout of the PACU simulation model:
- Inputs:
- Historical daily OR schedule with planned start/end times of surgeries
- Distributions of surgery durations
- Distributions of PACU length of stay for different surgery types
- Process:
- Simulate surgeries based on schedule and duration distributions
- Patients enter PACU after surgery based on OR schedule
- Patients spend time in PACU based on PACU length of stay distributions
- Patients discharge from PACU over time
- Outputs:
- PACU census (number of patients) tracked over time
- Staffing requirements calculated to maintain target nurse-to-patient ratios
The model simulates patient flows
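The inputs-process-outputs layout above maps directly onto an event-driven simulation: generate a PACU arrival per scheduled surgery, sample a PACU length of stay, and sweep the arrival/discharge events in time order to track census. The sketch below is a simplified stand-in, not the authors' model: it assumes exponential surgery and PACU durations and a 2:1 patient-to-nurse ratio, and the one-day schedule is invented for illustration.

```python
import heapq
import random

def simulate_pacu_census(schedule, rng, patients_per_nurse=2):
    """Event-driven sketch of the PACU layout above.

    `schedule` is a list of (start_minute, mean_surgery_min, mean_pacu_min)
    tuples. Each surgery ends after a random (exponential) duration; the
    patient then occupies the PACU for a random (exponential) length of
    stay. Returns (peak census, peak nurse requirement).
    """
    events = []  # (time, +1 = PACU arrival, -1 = PACU discharge)
    for start, mean_surg, mean_pacu in schedule:
        surgery_end = start + rng.expovariate(1.0 / mean_surg)
        pacu_exit = surgery_end + rng.expovariate(1.0 / mean_pacu)
        heapq.heappush(events, (surgery_end, +1))
        heapq.heappush(events, (pacu_exit, -1))
    census = peak = 0
    while events:  # sweep events in time order, tracking census
        _, delta = heapq.heappop(events)
        census += delta
        peak = max(peak, census)
    nurses_needed = -(-peak // patients_per_nurse)  # ceiling division
    return peak, nurses_needed

# Hypothetical one-day OR schedule: (start min, mean surgery min, mean PACU min)
day = [(0, 90, 60), (30, 120, 75), (60, 90, 60), (90, 150, 90)]
peak_census, peak_nurses = simulate_pacu_census(day, random.Random(42))
```

In practice the duration draws would come from the fitted historical distributions named in the inputs list, and the run would be replicated many times to estimate the census profile over the day rather than a single peak.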
Effect Of Interdependency On Hospital Wide Patient Flow
1. Effect of Interdependency of ED, ICU, OR and Nursing Units on Hospital-Wide System Patient Flow
Mayo Clinic Conference on Systems Engineering & Operations Research in Health Care
Mayo School of Continuous Professional Development
August 19, 2010
Alexander Kolker, PhD
Operations Analysis Project Manager
Outcomes Department
Children’s Hospital and Health System
Milwaukee, Wisconsin
2. Objectives
• To demonstrate the power of modern management engineering, and of its foundation, operations research, for the quantitative analysis of complex healthcare systems.
• To quantitatively illustrate the critical effect of subsystems’ interaction on the entire system outcome.
• To summarize fundamental Management Engineering principles and their use for managerial decision-making without a full-scale detailed simulation analysis.
3. Outline
• Main concept and some definitions.
• Typical hospital system as a set of interdependent subsystems:
• Subsystem 1: Emergency Department (ED).
• Subsystem 2: Intensive Care Unit (ICU).
• Subsystem 3: Operating Rooms (OR) - Surgical Department.
• Subsystem 4: Medical/Surgical Nursing Units (Floor_NU).
• Interdependency of subsystems.
• Main take-away.
• Summary of fundamental management engineering principles.
4. This presentation is adapted from
the following System Engineering Publications
Kolker, A, Queuing Theory and Discreet Events Simulation for Healthcare: from Basic
Processes to Complex Systems with Interdependencies. Chapter 20. In: Handbook of
Research on Discrete Event Simulation: Technologies and Applications, 2009, pp. 443
- 483. IGI Global Publishing, Hershey, PA.
Kolker, A, Process Modeling of Emergency Department Patient Flow: Effect of Patient
Length of Stay on ED Diversion. Journal of Medical Systems, 2008, v. 32, N 5, pp. 389 -
401.
Kolker, A, Process Modeling of ICU Patient Flow: Effect of Daily Load Leveling of Elective
Surgeries on ICU Diversion. Journal of Medical Systems, 2009, v. 33, N 1, pp. 27 - 40.
Kolker, A, Norell, B., O’Connor, M., Hoffman, G., Oldham, K., The Use of Predictive
Simulation Modeling for Surgical Capacity Expansion Analysis. Presented at the 2010
SHS/ASQ Joint Conference, Atlanta, GA, February 26, 2010 (poster session).
Kolker, A, Efficient Managerial Decision Making in Healthcare Settings: Examples and
Fundamental Principles. Chapter 1. In: Management Engineering for Effective Healthcare
Delivery: Principles and Applications. Ed. A. Kolker, P. Story. IGI-Global Publishing,
2011.
5. Main Concept
• Modern medicine has achieved great progress in treating individual
patients. This progress is based mainly on hard science: molecular
genetics, biophysics, biochemistry, design and development of
medical devices, imaging, drugs.
• However, relatively few resources have been devoted to the proper
functioning of overall healthcare delivery as an integrated system,
in which access to efficient care should be delivered to many
thousands of patients in an economically sustainable way (joint report
of the National Academy of Engineering and the Institute of Medicine, 2005).
A real impact on the efficiency and sustainability of the healthcare
system can be achieved only by using healthcare delivery
engineering, which is based on hard science: probability theory,
forecasting, calculus, stochastic optimization, and computer
simulation.
6. Some Definitions
What is Management?
Management is controlling and leveraging available resources (material,
financial and human) aimed at achieving the performance objectives.
Traditional (Intuitive) Management is based on
• Past experience.
• Intuition or educated guess.
• Static pictures or simple linear projections.
Linear projection assumes that the output is directly proportional to the
input, i.e. the more resources (material and human) thrown in, the more
output produced (and vice versa).
[Chart: linear projection of system output vs. resource input.]
7. What is Management Engineering?
• Management Engineering (ME) is the discipline of
building and using validated mathematical models of
real systems to study their behavior aimed at making
justified business decisions.
• This field is also known as operations research.
Thus, Management Engineering is the application of
mathematical methods to system analysis and
decision-making.
8. Scientific Management is Based On
• A goal that is clearly stated and measurable, so the decision-maker
(manager) always knows if the goal is closer or farther away.
• Identification of available resources that can be leveraged (allocated) in
different ways.
• Development of mathematical models or numeric computer algorithms
to quantitatively test different decisions for the use of resources and
consequences of these decisions (especially unintended
consequences) before finalizing the decisions.
The Underlying Premise of ME is
• Decisions should be made that best lead to reaching the goal.
• Valid mathematical models lead to better justified decisions than an
educated guess, past experience, and linear extrapolations (traditional
decision-making).
9. Main Steps for System Engineering Analysis
Step 1
• Large systems are deconstructed into smaller subsystems
using natural breaks in the system.
• Subsystems are modeled, analyzed, and studied separately.
Step 2
• Subsystems are then reconnected in a way that recaptures
the interdependency between them.
• The entire system is re-analyzed using the output of one
subsystem as the input for another subsystem.
10. High-Level Layout of a Typical Hospital System
Key:
ED – Emergency Department
ICU – Intensive Care Unit
OR – Operating Rooms
Floor NU – Med/Surg Units
WR – Waiting Room
11. Step 1
• Deconstruction of the entire hospital system into
Main Subsystems.
• Simulation and Analysis of the Main Subsystems:
Subsystem 1: Emergency Department (ED).
Subsystem 2: Intensive Care Unit (ICU).
Subsystem 3: Operating Rooms (OR).
Subsystem 4: Floor Nursing Units (NU).
12. Subsystem 1: Typical Emergency Department (ED)
[Diagram: the high-level layout of the entire hospital system, showing the ED structure and in-patient units.]
13. Typical ED Challenges
ED Performance Issues
• ED ambulance diversion is unacceptably high (the sample ED is closed
to new patients about 23% of the time).
• Among the many factors that affect ED diversion, patient Length of
Stay (LOS) in the ED is one of the most significant.
High Level ED Analysis Goal
• Quantitatively predict the relationship between patient LOS
and ED diversion.
• Identify the upper LOS limit (ULOS) that will result in
significant reduction or elimination of ED diversion.
14. Typical ED Simulation Model Layout
[Model layout diagram: simulation digital clock; ED pre-filled at the simulation start; arrival pattern by week, day of week, and time of day; mode of transportation; disposition.]
15. Modeling Approach
• ED diversion (closure) is declared when ED patient census
reaches ED bed capacity.
• ED stays in diversion until some beds become available after
patients are moved out of ED (discharged home, expired, or
admitted as in-patients).
• Upper LOS limits (simulation parameters) are imposed on the
baseline original LOS distributions: A LOS higher than the
limiting value is not allowed in the simulation run.
Take Away
Baseline LOS distributions should be recalculated as
functions of the upper LOS limits.
16. Modeling Approach – continued
Given the original distribution density f(T) and an imposed upper limit LOS on the random variable T, what is the conditional distribution of the restricted random variable T?

The original unbounded distribution of LOS (hours), fitted with a 3-parameter Gamma, is truncated at the imposed LOS limit (6 hrs in the example) and renormalized:

f(T, LOS)_new = f(T)_original / ∫₀^LOS f(T)_original dT, for T ≤ LOS

f(T)_new = 0, if T > LOS

[Charts: frequency histograms of the original unbounded LOS distribution and the re-calculated distribution bounded at the 6-hr limit, T from 0 to 12 hrs.]
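The re-calculated bounded distribution can be sketched numerically. This is a minimal illustration, not the study's fitted model: the Gamma parameters below are made up, and the normalizing integral is approximated with a trapezoidal sum.

```python
import math

def gamma_pdf(t, shape, scale, loc=0.0):
    """Density of a 3-parameter (shifted) Gamma distribution; zero at or below the shift."""
    if t <= loc:
        return 0.0
    x = (t - loc) / scale
    return x ** (shape - 1) * math.exp(-x) / (math.gamma(shape) * scale)

def truncated_pdf(pdf, t, upper, steps=2000):
    """f_new(t) = f(t) / integral of f over [0, upper], for t <= upper; zero beyond the limit."""
    if t > upper:
        return 0.0
    h = upper / steps
    area = sum(0.5 * (pdf(i * h) + pdf((i + 1) * h)) * h for i in range(steps))
    return pdf(t) / area

# Hypothetical LOS density with a 6-hr imposed upper limit
los = lambda t: gamma_pdf(t, shape=2.0, scale=1.5, loc=0.5)
```

Because the normalizing integral over [0, LOS] is less than one, the truncated density is uniformly higher than the original below the limit, exactly as the re-calculated histogram shows.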
17. Simulation Summary and Model Validation
| Scenario/option | LOS for discharged home NOT more than | LOS for admitted NOT more than | Predicted ED diversion, % | Note |
|---|---|---|---|---|
| Current, 07 (Baseline) | 24 hrs | 24 hrs | 23.7% | Actual ED diversion was 21.5% |
| 1 | 5 hrs (currently 17% of patients have LOS more than 5 hrs) | 6 hrs (currently 24% have LOS more than 6 hrs) | ~0.5% | Practically NO diversion |
| 2 | 6 hrs | 6 hrs | ~2% | Low single-digit diversion |
| 3 | 5 hrs | 24 hrs | ~4% | Low single-digit diversion |

Take Away
• ED diversion could be negligible (~0.5%) if patients discharged home stay not more
than five hours and admitted patients stay not more than six hours.
• Relaxing these LOS limits results in a low single-digit percent diversion that could
still be acceptable.
18. Simulation Summary – continued
What other combinations of upper LOS limits give a low single-digit percent ED
diversion?
Perform a full factorial DOE with two factors (ULOS_home and ULOS_adm) at six levels
each (5, 6, 8, 10, 12, and 24 hrs), using simulated percent diversion as the response
function.

[Chart: mean predicted diversion % as a function of ULOS_adm (5 to 24 hrs) for each ULOS_home level; the low single-digit diversion region corresponds to the lowest combinations of both limits.]
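The 6 x 6 factorial grid behind this chart can be enumerated directly; each cell is one simulated scenario. The level lists are taken from the slide, while the simulator that produces the response (percent diversion) is not shown.

```python
from itertools import product

# Six levels per factor (hours), as plotted on the chart
ULOS_HOME_LEVELS = [5, 6, 8, 10, 12, 24]
ULOS_ADM_LEVELS = [5, 6, 8, 10, 12, 24]

# Full factorial design: one (ULOS_home, ULOS_adm) pair per simulation run
design = list(product(ULOS_HOME_LEVELS, ULOS_ADM_LEVELS))
```

Running the simulation once per cell and recording percent diversion gives the response surface plotted above (36 runs, before any replication).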
19. Conclusions for Subsystem 1:
Emergency Department
• ED diversion can be negligible (less than 1%) if hospital-
admitted patients stay in ED not more than six hours.
• Currently 24% of hospital-admitted patients in study
hospital stay longer than this limit, up to 20 hours.
• This long LOS for a large percentage of patients results in
ED closure/diversion.
20. Subsystem 2: Typical Intensive Care Unit (ICU)
Patients move between the units:
• If no beds in CIC, move to SIC.
• If no beds in MIC, move to CIC, else SIC, else NIC.
• If no beds in SIC, move to CIC.
• If no beds in NIC, move to CIC, else SIC.
21. Typical ICU Challenges
ICU Performance Issues
• Elective surgeries are usually scheduled for Operating Room block times
without taking into account the competing demand from emergency and
add-on surgeries for ICU resources.
• This practice results in:
Increased ICU diversion due to ‘no ICU beds’.
Increased rate of medical and quality issues due to staff overload and capacity
constraints.
Decreased patient throughput and hospital revenue.
High Level ICU Analysis Goal
• Establish a relationship between the daily elective surgery schedule,
emergency and add-on cases, and ICU diversion.
• Given the number of the daily scheduled elective surgeries and the number
of unscheduled emergency and add-on admissions, predict ICU diversion
due to lack of available beds.
22. Baseline – Existing Number of Elective Cases
ICU census: elective surgeries, current pattern (no daily cap).
Red zone: closed due to 'No ICU beds' 10.5% of the time (critical census limit exceeded).

[Chart: simulated ICU census (35 to 51 patients) over 17 weeks (0 to 3024 hrs); excursions into the red zone above the critical census limit mark the closure periods.]
23. Conclusions for Subsystem 2:
Intensive Care Unit
• There is a significant variation in the number of scheduled
elective cases between the same days of the different weeks
(Monday to Monday, Tuesday to Tuesday, and so on).
• Smoothing the number of elective cases over time (daily load
leveling) is a very significant factor which strongly affects ICU
closure time due to ‘no ICU beds.’
• Using simulation, it was demonstrated that daily load leveling of
elective cases to not more than 4 cases per day results in a
very significant reduction of closure time due to 'no ICU beds'
(from ~10.5% down to ~1%).
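A simple way to picture daily load leveling is a cap-and-defer rule: elective cases above the daily cap are pushed to the next day. This is an illustrative smoothing sketch, not the scheduling policy used in the study.

```python
def level_load(daily_requests, cap):
    """Smooth a stream of elective cases: cases above the daily cap are deferred to the next day."""
    scheduled, backlog = [], 0
    for requested in daily_requests:
        total = requested + backlog
        scheduled.append(min(total, cap))   # never schedule more than the cap
        backlog = max(0, total - cap)       # carry the excess forward
    return scheduled, backlog
```

With a cap of 4 cases per day, a spiky request pattern such as [7, 1, 2, 6, 0] levels to [4, 4, 2, 4, 2], removing the day-to-day variation that drives ICU census peaks.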
24. Subsystem 3: Operating Rooms (OR)
Typical Operational Challenges
• Is the number of general and specialized operating rooms and
pre/post operative beds adequate to meet the projected patient
flow and volume increases?
• If it is not, how many operating rooms and pre/post operative
beds would be needed?
• Is the Operating Room utilization adequate?
25. The following OR Operational performance
criteria were used
1. Patient delay to be admitted to a preoperative surgical bed should not
exceed 15 minutes.
2. Delay to enter operating room from a preoperative surgical bed should
not exceed:
General OR – 2 hours
Urgent OR – 3 hours
Cardiovascular OR – 5 hours
Neurosurgery OR – 3 hours
Orthopedic OR – 2 hours
Cardiac Cath Lab – 2 hours
3. Percent of patients waiting longer than the acceptable delay to enter
operating room from a preoperative surgical bed should not exceed
5%.
4. Delay to enter PACU beds from an operating room should not exceed
5 minutes.
5. Average annual utilization of operating rooms should be in the range
of 60% to 90%.
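Criterion 3 can be checked mechanically against simulated delays. The delay limits restate criterion 2 above; the sample delays used in testing are made up for illustration.

```python
# Acceptable delays (hours) to enter the OR from a preoperative bed, per criterion 2
ACCEPTABLE_OR_DELAY_HRS = {
    "General": 2, "Urgent": 3, "Cardiovascular": 5,
    "Neurosurgery": 3, "Orthopedic": 2, "Cardiac Cath Lab": 2,
}

def pct_over_limit(delays_hrs, limit_hrs):
    """Percent of patients waiting longer than the acceptable delay (criterion 3 requires <= 5%)."""
    over = sum(1 for d in delays_hrs if d > limit_hrs)
    return 100.0 * over / len(delays_hrs)
```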
26. The following simulation models
were developed and analyzed
Model 1: Baseline operations - all surgical services function as
currently specified between two floors. Construct two general operating
rooms on the upper-level floor to serve otolaryngology, gastroenterology
and pulmonary patient volume from the lower-level floor.
Model 2: Move gastroenterology and pulmonary patient volume from
upper level to a separate service area.
Model 3: Separate service area for gastroenterology and pulmonary
patient volume includes 2 to 3 special procedure rooms, 1 to 2 general
OR, and 8 to 11 pre/post beds and PACU beds.
Total annual patient volume included in the simulation models is in the range from
15,000 to 17,000.
Decision variables were: the number of pre-operative beds and PACU beds,
the number of Operating Rooms and special procedure rooms, and their
allocation to surgical services.
28. Conclusions for Subsystem 3:
Operating Rooms (OR)
• Model 3 is selected as the best. Twelve Operating Rooms
and four Special Procedure Rooms/OR will be adequate to
handle patient volume up to the year 2013.
• Cath Lab capacity could become an issue by 2013, with
more than 10% of patients waiting longer than the acceptable
2-hour limit.
• All other performance criteria will be met.
29. Subsystem 4: Medical/Surgical
Nursing Units (NU)
Total number of specialized nursing units: 24
Total number of licensed beds: 380
Patient Length of Stay (LOS) is in the range from 2 days to 10 days;
the most likely LOS is 5 days.
Census (i) (current period) = census (i-1) (previous period) +
[# admissions (i) – # discharges (i) ]; i = 1, 2, 3, …….
This is a dynamic balance of supply (beds) and demand (admissions).
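The census balance rolls forward one period at a time; a sketch with made-up admission and discharge counts shows how the same recursion also yields the fraction of periods the unit is full.

```python
def simulate_census(start, admissions, discharges, capacity):
    """Roll census(i) = census(i-1) + admissions(i) - discharges(i) forward; count full periods."""
    census, full_periods, trace = start, 0, []
    for adm, dis in zip(admissions, discharges):
        census += adm - dis
        trace.append(census)
        if census >= capacity:      # unit full: new admissions would be diverted
            full_periods += 1
    return trace, full_periods
```

Starting at 370 of 380 beds, admissions [10, 5, 0] against discharges [0, 5, 20] give a census trace of [380, 380, 360]: the unit is full in two of three periods.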
30. Census (i) (current period) = census (i-1) (previous period) +
[# admissions (i) – # discharges (i) ]; i = 1, 2, 3, …….
[Chart: simulated census over one week (Mon to Sun, hourly, 0 to 168 hrs), fluctuating between about 320 and the 380-bed capacity limit.]
Take Away: Percent of time Nursing Units are full (% diversion) is about 16%.
31. Step 2
• Subsystems are reconnected in a way that
recaptures the interdependency between them.
• The entire system is re-analyzed using the output of
one subsystem as the input for another subsystem.
32. Step 2 – continued
• All subsystems are reconnected to each other.
• The output of one subsystem is the input for another subsystem.
33. Hospital System Simulation Summary
| Performance Metrics | Current State (Baseline) | Too aggressive ED improvement: patients admitted within 6 hours | Downstream units: better or worse than current state? | Less aggressive ED improvement: patients admitted within 10 hours | Downstream units: better or worse than current state? |
|---|---|---|---|---|---|
| 95% CI of the number of patients waiting to get into ED (ED in) | 25 – 27 | 8 – 10 | Better | 17 – 19 | Better |
| 95% CI of the number of patients waiting for hospital admission (ED out) | 57 – 62 | 64 – 69 | Worse | 57 – 62 | Neutral |
| Number of patients left not seen (LNS) after waiting more than 2 hours | 23 – 32 | 0 | Better | 0 – 3 | Better |
| 95% CI for % ED diversion | 22% – 23% | 0.4% – 0.5% | Better | 6.8% – 7.3% | Better |
| 95% CI for % ICU diversion | 28% – 32% | 30% – 34% | Worse | 28% – 32% | Neutral |
| 95% CI for % OR diversion | 12% – 13% | 13% – 15% | Worse | 12% – 13% | Neutral |
| 95% CI for % floor NU diversion | 11% – 12% | 11% – 12% | Neutral | 11% – 12% | Neutral |
34. Take-Away from Hospital System
Simulation Summary
Take Away
• Too aggressive ED improvement results in worsening
three out of seven hospital system performance metrics.
• Less aggressive ED improvement is more aligned with
the ability of downstream subsystems to handle
increased patient volume.
• This illustrates important Management System
Engineering Principles:
35. Important System Engineering Principles
• Improvement in the separate subsystems (local
optimization or local improvement) should not be
confused with the improvement of the entire system.
• A system of local improvements is not the best system;
it could be a very inefficient system.
• Analysis of an entire complex system is usually
incomplete and can be misleading without taking into
account subsystems’ interdependency.
36. Main Take-Away
Management Engineering helps to address the following typical
pressing hospital issues:
• How many beds are needed for each unit.
• How many procedure rooms are needed for each service.
• How many nurses/physicians should each unit schedule for the particular
day and night.
• How to reduce patient wait time and increase access to care.
• How to develop an efficient outpatient clinic schedule.
And so on, and so on…
And the Ultimate Goal:
How to manage hospital operations to increase profitability (reduce
costs, increase revenue) while keeping high quality, safety and
outcomes standards for patients.
37. Summary of Some Fundamental Management
Engineering Principles
• Systems behave differently than the sum of their independent
components.
• All other factors being equal, combined resources are more efficient
than specialized (dedicated) resources with the same total
capacity/workload.
• Scheduling appointments (jobs) in order of increasing duration
variability (from lower to higher variability) results in lower overall
cycle time and waiting time.
• Size matters. Large units with the same arrival rate (relative to their
size) always have a significantly lower waiting time. Large units can
also function at a much higher utilization level than small units
with about the same patient waiting time.
• Work load leveling (smoothing) is an effective strategy to reduce
waiting time and improve patient flow.
38. Summary of Some Fundamental Management
Engineering Principles – continued
• Because of the variability of patient arrivals and service time, a
reserved capacity (sometimes up to 30%) is usually needed to
avoid regular operational problems due to unavailable beds.
• Generally, the higher the utilization level of a resource (good for the
organization), the longer the waiting time to get that resource
(bad for the patient). Utilization levels higher than 80% to 85% result
in a significant increase in waiting time for random patient
arrivals and random service times.
• In a series of dependent activities only a bottleneck defines the
throughput of the entire system. A bottleneck is a resource (or
activity) whose capacity is less than or equal to demand placed
on it.
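The utilization/waiting trade-off above is the classic single-server queueing result: for random (Poisson) arrivals and exponential service, the M/M/1 mean queue wait is Wq = rho / (mu - lambda), which grows steeply as utilization rho approaches 1. The rates below are illustrative, not from the study.

```python
def mm1_mean_wait(arrival_rate, service_rate):
    """Mean time waiting in queue for an M/M/1 server: Wq = rho / (mu - lambda)."""
    rho = arrival_rate / service_rate       # utilization
    if rho >= 1.0:
        return float("inf")                 # the queue grows without bound
    return rho / (service_rate - arrival_rate)

# Pushing utilization from 80% to 90% more than doubles the mean wait
wait_80 = mm1_mean_wait(0.8, 1.0)   # ~4.0 time units
wait_90 = mm1_mean_wait(0.9, 1.0)   # ~9.0 time units
```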
39. Summary of Some Fundamental Management
Engineering Principles – continued
• An appointment backlog can remain stable even if the
average appointment demand is less than appointment
capacity.
• The time of peak congestion usually lags the time of the
peak arrival rate because it takes time to serve patients
from the previous time periods (service inertia).
• Reduction of process variability is the key to patient flow
improvement, increasing throughput and reducing delays.
40. Quiz
Q1. Improvement in the separate subsystems of the hospital system (local
improvement) can:
1) Make the entire system more efficient
2) Make no difference
3) Make the entire system less efficient
4) Both (2) and (3)
Q2. Improvement in ED patient throughput and capacity:
1) Is always a first priority
2) Can result in worsening in some other hospital operational metrics
3) Should be aligned with the ability of downstream subsystems to handle
increased patient volume
4) Both (2) and (3)
41. Answers
Q1. Improvement in the separate subsystems of the hospital system (local
improvement) can:
1) Make the entire system more efficient
2) Make no difference
3) Make the entire system less efficient
4) Both (2) and (3) – Correct answer
Q2. Improvement in ED patient throughput and capacity:
1) Is always a first priority
2) Can result in worsening in some other hospital operational metrics
3) Should be aligned with the ability of downstream subsystems to handle
increased patient volume
4) Both (2) and (3) – Correct answer
43. What is a Simulation Model?
A Simulation Model is a computer model that mimics the behavior of a
real complex system as it evolves over time, in order to visualize and
quantitatively analyze its performance in terms of:
• Cycle times.
• Wait times.
• Value added time.
• Throughput capacity.
• Resource utilization.
• Activity utilization.
• Any other custom collected process information.
• The Simulation Model is a tool to perform ‘what-if’ analysis and play
different scenarios of the model behavior as conditions and process
parameters change.
• This allows one to build various experiments on the computer model
and test the effectiveness of various solutions (changes) before
implementing the change.
44. How Does a Typical Simulation Model Work?
A simulation model tracks the movement of entities through the system at distinct points
in time (thus, discrete events). A detailed record is kept of all processing
times and waiting times. In the end, the system's statistics for entities and
activities are gathered.
Example of Manual Simulation (step by step)
Let’s consider a very simple system that consists of:
• a single patient arrival line.
• a single server.
Suppose that patient inter-arrival time is uniformly (equally likely) distributed between
1 min and 3 min. Service time is exponentially distributed with the average 2.5 min.
(Of course, any statistical distributions or non-random patterns can be used instead).
A few random numbers sampled from these two distributions are, for example:
| Inter-arrival time, min | Service time, min |
|---|---|
| 2.6 | 1.4 |
| 2.2 | 8.8 |
| 1.4 | 9.1 |
| 2.4 | 1.8 |
| … | … |
45. We will be tracking any change (or event) that happened in the
system. A summary of what is happening in the system looks
like this:
| Event # | Time | Event that happened in the system |
|---|---|---|
| 1 | 2.6 | First patient arrives. Service starts; it should end at time = 4.0. |
| 2 | 4.0 | Service ends. Server waits for a patient. |
| 3 | 4.8 | Second patient arrives. Service starts; it should end at time = 13.6. Server was idle 0.8 minutes. |
| 4 | 6.2 | Third patient arrives. Joins the queue waiting for service. |
| 5 | 8.6 | Fourth patient arrives. Joins the queue waiting for service. |
| 6 | 13.6 | Second patient (from event 3) service ends. Third patient at the head of the queue (first in, first out) starts service; it should end at time 22.7. |
| 7 | 22.7 | Patient #4 starts service… and so on. |
In this particular example, we were tracking events at discrete points in time
t = 2.6, 4.0, 4.8, 6.2, 8.6, 13.6, 22.7
DES models are capable of tracking hundreds of individual entities, each with its own unique set of
attributes, enabling one to simulate the most complex systems with interacting events and component
interdependencies.
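The manual walk-through replays in a few lines as a single-server FIFO simulation; the computed trace reproduces the event times on this slide.

```python
def single_server_trace(inter_arrivals, service_times):
    """One arrival stream, one FIFO server: return (arrival, service start, service end) per patient."""
    trace, clock, server_free_at = [], 0.0, 0.0
    for gap, svc in zip(inter_arrivals, service_times):
        clock += gap                          # arrival event
        start = max(clock, server_free_at)    # wait in queue if the server is busy
        end = start + svc
        trace.append((clock, start, end))
        server_free_at = end
    return trace

# The sampled inter-arrival and service times from the previous slide
trace = single_server_trace([2.6, 2.2, 1.4, 2.4], [1.4, 8.8, 9.1, 1.8])
# e.g. the third patient arrives at 6.2 but starts service only at 13.6
```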
46. Basic Elements of a Simulation Model
• Flow chart of the process: Diagram that depicts logical flow of a process
from its inception to its completion.
• Entities: Items to be processed (e.g. patients, documents, customers).
• Activities: Tasks performed on entities (e.g. medical procedures, document
approval, customer checkout).
• Resources: Agents used to perform activities and move entities (e.g. service
personnel, operators, equipment, nurses, physicians).
Connections:
• Entity arrivals: They define process entry points, time and quantities of
the entities that enter the system to begin processing.
• Entity routings: They define directions and logical conditions of flow for
entities (e.g. percent routing, conditional routing, routing on demand).
47. Typical Data Inputs Required to Feed the Model
• Entities, their quantities and arrival times
Periodic, random, scheduled, daily pattern, etc.
• Time the entities spend in the activities
This is usually not a fixed time but a statistical distribution. The wider
the time distribution, the higher the variability of the system behavior.
• The capacity of each activity
The maximum number of entities that can be processed concurrently in
the activity.
• The size of input and output queues for the activities (if needed).
• The routing type or the logical conditions for a specific routing.
• Resource Assignments
The number of resources, their availability, and/or resources shift
schedule.