This document summarizes a presentation on analyzing airport terminal effectiveness using multiple criteria and simulation optimization. It discusses modeling the terminal facilities planning problem (TFPP) as a discrete event simulation and multi-objective optimization problem. Specifically, it formulates the TFPP as a bi-criteria problem (2TFPP) to optimize configuration cost and average waiting time. It then describes solving the 2TFPP using a pre-computing phase to derive Pareto optimal solutions, followed by a decision-making phase where the decision maker selects the preferred solution based on their preferences.
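The two-phase scheme described above can be sketched in a few lines: enumerate candidate configurations, evaluate each one to get a (configuration cost, average waiting time) pair, keep only the Pareto-optimal outcomes in a pre-computing phase, and let the decision maker choose in a decision-making phase. The `evaluate` function below is a made-up analytical stand-in for a simulation run, and all numbers are hypothetical.

```python
from itertools import product

def evaluate(config):
    """Hypothetical stand-in for one simulation run: returns
    (configuration cost, average waiting time) for a configuration
    (check-in desks, security desks, boarding desks)."""
    cost = 2 * config[0] + 2 * config[1] + config[2]
    wait = 60.0 / (config[0] + 1) + 40.0 / (config[1] + 1) + 20.0 / (config[2] + 1)
    return cost, wait

# Pre-computing phase: evaluate every configuration, keep the Pareto-optimal ones
# (no other configuration is at least as good on both criteria and better on one).
candidates = {c: evaluate(c) for c in product(range(1, 7), repeat=3)}
pareto = {c: o for c, o in candidates.items()
          if not any(o2[0] <= o[0] and o2[1] <= o[1] and o2 != o
                     for o2 in candidates.values())}

# Decision-making phase: the decision maker applies a preference, e.g.
# the cheapest configuration with an average wait under 20 minutes.
choice = min((c for c, (cost, w) in pareto.items() if w < 20.0),
             key=lambda c: pareto[c][0], default=None)
```

In the actual 2TFPP the evaluation comes from a discrete-event simulation rather than a formula, but the structure of the two phases is the same.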
Improving Manufacturing by Simulation: Processes, Microstructure & Tooling (Wilde Analysis Ltd.)
This presentation, made at the inaugural Virtual Engineering Centre Workshop on 25-26th October 2011, provides an overview of the application of simulation to optimise manufacturing processes and determine mechanical properties that can affect in-service performance. These properties can be imported into structural FEA programs such as ANSYS for subsequent analysis of the final product.
Wilde Analysis believes that simulation techniques can play an important part in ensuring that parts are produced to a required standard and in an efficient way. Many of us are aware of simulation techniques such as finite element analysis (FEA) and computational fluid dynamics (CFD) being applied to product design.
These techniques are now used extensively in product development for applications such as checks on structural integrity or pressure drops in fluid applications. However, fewer people are aware of the application of these and similar techniques to design and optimise manufacturing processes, and of the benefits they can deliver in areas such as metal forging, machining, heat treatment and injection moulding.
The simulation of any one of these processes is technically demanding, but is now used extensively by many manufacturers, some of whom will not commit to making tools to produce a new part without first ‘proving’ the process using simulation. These simulations require advanced techniques including the modelling of non-linear materials, large displacements, evolving contact surfaces and material removal in a multi-physics environment.
Having mastered the modelling of a single process, the technology is now being applied to multi-stage modelling to simulate multiple operations and predict final properties that can affect in-service performance. This presents many new challenges and for some applications it’s still at the research stage. Nevertheless, current technologies are now being used to optimise manufacturing processes.
Combining Phase Identification and Statistic Modeling for Automated Parallel ... (Mingliang Liu)
Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPrime, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPrime benchmarks: they retain the original applications' performance characteristics, in particular their relative performance across platforms. Also, the resulting benchmarks, already released online, are much more compact and easier to port than the original applications.
http://dl.acm.org/citation.cfm?id=2745876
Ernest: Efficient Performance Prediction for Advanced Analytics on Apache Spa... (Spark Summit)
Recent workload trends indicate rapid growth in the deployment of machine learning, genomics and scientific workloads using Apache Spark. However, efficiently running these applications on cloud computing infrastructure like Amazon EC2 is challenging, and we find that choosing the right hardware configuration can significantly improve performance and cost. The key to addressing this challenge is the ability to predict the performance of applications under various resource configurations, so that we can automatically choose the optimal configuration. We present Ernest, a performance prediction framework for large-scale analytics. Ernest builds performance models based on the behavior of the job on small samples of data and then predicts its performance on larger datasets and cluster sizes. Our evaluation on Amazon EC2 using several workloads shows that our prediction error is low while having a training overhead of less than 5% for long-running jobs.
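A minimal sketch of Ernest's idea, under simplifying assumptions: the actual framework fits several feature terms (a constant, a serial 1/machines term, a logarithmic term, and a linear term) with non-negative least squares, while the toy model below fits only runtime ≈ a + b/m by ordinary least squares on hypothetical small-scale runs, then extrapolates to a cluster size never measured.

```python
# Hypothetical training runs at small scale: (machines, runtime in seconds)
runs = [(2, 21.0), (4, 12.0), (8, 7.5), (16, 5.2)]

# Simplified Ernest-style model for a fixed input size:
#   runtime ~= a + b/m   (fixed serial overhead plus a perfectly parallel part)
# Fit a and b by ordinary least squares on the single feature x = 1/m.
xs = [1.0 / m for m, _ in runs]
ys = [t for _, t in runs]
n = len(runs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def predict(machines):
    # Extrapolate the fitted model to an unmeasured cluster size.
    return a + b / machines

larger = predict(64)  # e.g. predict runtime on 64 machines
```

The serial term `a` is what prevents the naive "double the machines, halve the time" estimate from holding at scale, which is why Ernest's richer feature set matters for real jobs.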
This paper explores the effectiveness of the recently developed surrogate modeling method, the Adaptive Hybrid Functions (AHF), through its application to complex engineered systems design. The AHF is a hybrid surrogate modeling method that seeks to exploit the advantages of each component surrogate. In this paper, the AHF integrates three component surrogate models: (i) the Radial Basis Functions (RBF), (ii) the Extended Radial Basis Functions (E-RBF), and (iii) the Kriging model, by characterizing and evaluating the local measure of accuracy of each model. The AHF is applied to model complex engineering systems and an economic system, namely: (i) wind farm design; (ii) product family design (for universal electric motors); (iii) three-pane window design; and (iv) onshore wind farm cost estimation. We use three differing sampling techniques to investigate their influence on the quality of the resulting surrogates. These sampling techniques are (i) Latin Hypercube Sampling (LHS), (ii) Sobol's quasirandom sequence, and (iii) Hammersley Sequence Sampling (HSS). Cross-validation is used to evaluate the accuracy of the resulting surrogate models. As expected, the accuracy of the surrogate model was found to improve with increase in the sample size. We also observed that the Sobol's and the LHS sampling techniques performed better in the case of high-dimensional problems, whereas the HSS sampling technique performed better in the case of low-dimensional problems. Overall, the AHF method was observed to provide acceptable-to-high accuracy in representing complex design systems.
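The AHF itself is involved, but its key ingredient, a local measure of surrogate accuracy, can be illustrated with the cross-validation the paper uses for evaluation. The sketch below runs leave-one-out cross-validation of a simple inverse-distance-weighting surrogate on a hypothetical 1-D test function; both the surrogate and the function are stand-ins, not the paper's RBF/E-RBF/Kriging models.

```python
import math

def f(x):
    # Hypothetical stand-in for an expensive simulation or design response
    return math.sin(3 * x) + 0.5 * x

samples = [i / 9 for i in range(10)]   # simple uniform design of 10 points
values = [f(x) for x in samples]

def idw(x, xs, ys, p=2):
    """Inverse-distance-weighting surrogate: a cheap RBF-like interpolant."""
    num = den = 0.0
    for xi, yi in zip(xs, ys):
        d = abs(x - xi)
        if d < 1e-12:
            return yi                  # exactly at a sample point
        w = 1.0 / d ** p
        num += w * yi
        den += w
    return num / den

# Leave-one-out cross-validation: rebuild the surrogate without each point
# and measure the prediction error there.
errs = []
for i in range(len(samples)):
    xs = samples[:i] + samples[i + 1:]
    ys = values[:i] + values[i + 1:]
    errs.append(abs(idw(samples[i], xs, ys) - values[i]))
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
```

In the AHF, errors of this kind are computed per component surrogate and per region, and the components are then weighted by their local accuracy when the hybrid prediction is formed.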
Applying Linear Optimization Using GLPK (Jeremy Chen)
A brief introduction to linear optimization with a focus on applying it with the high-quality open-source solver GLPK.
Originally prepared for an intra-department sharing session.
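As a taste of what such a solver does, here is a tiny product-mix LP with hypothetical numbers, solved by brute-force vertex enumeration. It exploits the fact that an LP optimum lies at a vertex of the feasible region, the same property the simplex method inside GLPK relies on (far more efficiently).

```python
from itertools import combinations

# Hypothetical LP:  maximize 3x + 5y
#   subject to  x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0
# Each constraint stored as (a, b, c) meaning a*x + b*y <= c.
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None if parallel."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

# Candidate vertices: pairwise boundary intersections that satisfy all constraints.
feasible = [p for p in (intersect(c1, c2) for c1, c2 in combinations(cons, 2))
            if p and all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)]

# The optimum is attained at one of the feasible vertices.
best = max(feasible, key=lambda p: 3 * p[0] + 5 * p[1])
```

With GLPK itself, the same model would be written in its MathProg modeling language and solved with the `glpsol` command-line tool, which scales to problems with far more variables than vertex enumeration could handle.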
Machine Learning (ML) models are often composed as pipelines of operators, from "classical" ML operators to pre-processing and featurization operators. Current systems deploy pipelines as "black boxes", where the same implementation used for training is run for inference. This solution is convenient but leaves large room to improve performance and resource usage. This talk presents Pretzel, a framework for the deployment of ML pipelines that is inspired by database systems: Pretzel inspects and optimizes pipelines end-to-end, much like queries, and manages resources common to multiple pipelines, such as operators' state. Pretzel is joint work with the University of Seoul and Microsoft Research and was recently presented at OSDI '18. After the overview, this talk also shows experimental results of Pretzel against state-of-the-art ML solutions and discusses limitations and extensions.
Flowex - Railway Flow-Based Programming with Elixir GenStage (Anton Mishchuk)
Flowex is a set of abstractions built on top of Elixir GenStage that allows writing programs in the Flow-Based Programming (FBP) paradigm.
I would say it is a mix of FBP and the so-called Railway Oriented Programming (ROP) approach.
The Flowex DSL allows you to easily create "pipelines" of Elixir GenStages.
Flink Forward San Francisco 2019: Streaming your Lyft Ride Prices - Thomas We... (Flink Forward)
At Lyft we dynamically price our rides with a combination of various data sources, machine learning models, and streaming infrastructure for low latency, reliability and scalability. Dynamic pricing allows us to quickly adapt to real-world changes and be fair to drivers (by, say, raising rates when there's a lot of demand) and fair to passengers (by, say, offering a return 10 minutes later at a cheaper rate). To accomplish this, our system consumes a massive amount of events from different sources.
The streaming platform powers pricing by bringing together the best of two worlds using Apache Beam: ML algorithms in Python/TensorFlow and Apache Flink as the streaming engine. Enabling data science tools for machine learning, and a process that allows for faster deployment, is of growing importance for the business. Topics covered in this talk include:
* Examples of dynamic pricing based on real-time event streams (driver location, ride requests, user session events) and machine learning models
* Comparison of the legacy system and the new streaming platform for dynamic pricing
* Processing live events in real time to generate features for machine learning models
* Overview of the streaming platform architecture and technology stack
* The Apache Beam portability framework as a bridge to distributed execution on a JVM-based streaming engine without code rewrites
* Lessons learned
Streaming your Lyft Ride Prices - Flink Forward SF 2019 (Thomas Weise)
At Lyft we dynamically price our rides with a combination of various data sources, machine learning models, and streaming infrastructure for low latency, reliability and scalability. Dynamic pricing allows us to quickly adapt to real-world changes and be fair to drivers (by, say, raising rates when there's a lot of demand) and fair to passengers (by, say, offering a return 10 minutes later at a cheaper rate). The streaming platform powers pricing by bringing together the best of two worlds using Apache Beam: ML algorithms in Python and Apache Flink as the streaming engine.
https://sf-2019.flink-forward.org/conference-program#streaming-your-lyft-ride-prices
WRENCH enables novel avenues for scientific workflow use, research, development, and education. WRENCH capitalizes on recent and critical advances in the state of the art of distributed platform/application simulation. WRENCH builds on top of the open-source SimGrid simulation framework. SimGrid enables the simulation of large-scale distributed applications in a way that is accurate (via validated simulation models), scalable (low ratio of simulation time to simulated time, ability to run large simulations on a single computer with low compute, memory, and energy footprints), and expressive (ability to simulate arbitrary platform, application, and execution scenarios). WRENCH provides directly usable high-level simulation abstractions using SimGrid as a foundation. More information at https://wrench-project.org
In a nutshell, WRENCH makes it possible to:
- Prototype implementations of Workflow Management System (WMS) components and underlying algorithms;
- Quickly, scalably, and accurately simulate arbitrary workflow and platform scenarios for a simulated WMS implementation; and
- Run extensive experimental campaigns to conclusively compare workflow executions, platform architectures, and WMS algorithms and designs.
BKK16-308 The tool called Auto-Tuned Optimization System (ATOS), Linaro
ATOS is an Auto-Tuned Optimization System that can automatically find the best performance/size trade-off for a build system and a training application. The inputs to the ATOS tools are a build command and a run command. From the build command, ATOS infers an internal build configuration that it runs with different sets of compiler options. These build configurations are executed with the run command, from which code size and performance are extracted.
From the set of build configurations that ATOS explores, one can extract the preferred trade-off between code size and performance. The extracted build configuration can be archived and replayed later to generate the optimized executable without any modification to the initial build system.
A nice property of ATOS is that NO modification of the sources or the makefiles is needed. ATOS can work on any large/deep project, as long as the compiler used is GCC or LLVM under Linux.
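The selection step that ATOS automates can be sketched as a Pareto filter over measured (code size, runtime) pairs. The flag sets and numbers below are made up for illustration; ATOS obtains the real values by rebuilding with each flag set and running the training application.

```python
# Hypothetical measurements: (compiler flags, code size in KB, runtime in s).
builds = [
    ("-O0",       120, 9.0),
    ("-O2",       150, 4.0),
    ("-O3",       185, 3.6),
    ("-Os",       110, 5.5),
    ("-O3 -flto", 170, 3.4),
]

def pareto(points):
    """Keep configurations not dominated in both size and runtime."""
    front = []
    for flags, size, time in points:
        dominated = any(s <= size and t <= time and (s, t) != (size, time)
                        for _, s, t in points)
        if not dominated:
            front.append((flags, size, time))
    return front

best_tradeoffs = pareto(builds)
```

From such a front one would pick, say, the smallest build meeting a runtime budget; ATOS then archives that configuration so it can be replayed against the untouched build system.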
The magic behind your Lyft ride prices: A case study on machine learning and ... (Karthik Murugesan)
Rakesh Kumar and Thomas Weise explore how Lyft dynamically prices its rides with a combination of various data sources, ML models, and streaming infrastructure for low latency, reliability, and scalability—allowing the pricing system to be more adaptable to real-world changes.
1. Multiple Criteria Analysis of the Airport Terminal Effectiveness by Multi-objective Optimization and Simulation
ICMSDM '2016
Janusz Miroforidis, Ph.D.
Systems Research Institute,
Polish Academy of Sciences,
Warsaw, Poland
2. Presentation plan
Terminal Facilities Planning Problem (TFPP)
Discrete-event simulation model for TFPP
Multi-objective methodology
Bi-criteria formulation of TFPP (2TFPP)
Solving 2TFPP
Conclusions
3. Terminal Facilities Planning Problem (TFPP)
Departure Terminal — a complex system
• Passenger–terminal facilities interaction (check-in desks, security control desks, stairs, etc.)
• Passenger behaviour
• Passenger flow
Source: http://www.businesstraveller.com/files/News-images/Gatwick-airport/
4. TFPP (cont.)
The most general formulation: find the best configuration of the airport terminal facilities, taking into account the passenger arrival pattern connected to the flight schedule, the passenger moving pattern inside the terminal, and the passenger service level.
• How to describe configurations and the terminal operation?
• How to evaluate a configuration in a real-life scenario?
• What does "the best configuration" really mean?
• Is it worth considering a multiple criteria formulation of TFPP? (Yes, it is!)
5. Discrete-event simulation model for TFPP
Departure terminal — a network of service nodes with waiting queues.
A configuration, e.g. (4, 2, 2).
6. Discrete-event simulation model for TFPP (cont.)
The network of service nodes with waiting queues (may be a complex graph).
Input → Model → Output:
• Avg. queue waiting time
• Avg. queue length
• Prob. of an event
• Other indicators
Output — in general, hard to obtain by analytical formulas!
7. Discrete-event simulation model for TFPP (cont.)
The discrete-event simulation model of a departure terminal.
Input + Model → JaamSim Simulation Engine → Output:
• Avg. queue waiting time
• Avg. queue length
• Prob. of an event
• Other indicators
Output — relatively easy to obtain by simulation runs!
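The idea on this slide can be illustrated with a minimal discrete-event simulation of a single service node, a stand-in for one stage of the terminal rather than the JaamSim model itself: Poisson arrivals, exponential service times, and a configurable number of desks, with the average queue waiting time as the output indicator. All rates below are hypothetical.

```python
import heapq
import random

def simulate_queue(num_desks, arrival_rate, service_rate, n_pax, seed=1):
    """Simulate one service node (e.g. check-in) with num_desks parallel
    desks, Poisson arrivals and exponential service times.
    Returns the average waiting time in the queue."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * num_desks        # time each desk next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_pax):
        t += rng.expovariate(arrival_rate)   # next passenger arrival
        desk_free = heapq.heappop(free_at)   # earliest available desk
        start = max(t, desk_free)            # wait only if all desks are busy
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return total_wait / n_pax

# Opening more desks cuts the average wait: the cost vs. waiting-time trade-off.
w4 = simulate_queue(4, arrival_rate=3.0, service_rate=1.0, n_pax=20000)
w6 = simulate_queue(6, arrival_rate=3.0, service_rate=1.0, n_pax=20000)
```

Chaining several such nodes into a graph, as the slides describe, yields exactly the kind of terminal model whose outputs are easy to obtain by simulation but hard to derive analytically.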
16. Solving 2TFPP (cont.)
Decision-making phase — all steps.
The solution to 2TFPP: configuration (5, 3, 2) and its outcome (configuration cost: 15 units, avg. waiting time: 13.086 minutes).
Hypothetical decision-making phase!
17. Conclusions
• An accurate discrete-event simulation model of a departure terminal is required (it can be costly!)
• All objective functions should precisely reflect reality
• More than two criteria?
• Continuous decision variables? (the presented method can be used after a discretization of such variables)
• Deriving efficient configurations during the decision-making phase may be a better solution (no pre-computing phase)
• Solving multiple criteria TFPP in a real-life scenario using the presented decision-making framework