Presented in this document is a short discussion on using IMPL's SLPQPE algorithm to solve process optimization problems in either off-line or on-line environments, the latter also known as real-time optimization (RTO). Process optimization is somewhat different from production optimization in the sense that there are more "constitutive relations" involving only intensive variables. Both types of optimization involve "conservation laws" and "correlative equations", which usually involve a mix of extensive and intensive variables (Kelly, 2004). Whereas production optimization deals more with material, meta-material (nonlinear), logic and logistics (discrete) balances (Zyngier and Kelly, 2009; Kelly and Zyngier, 2015), process optimization is inherently more detailed and includes energy, exergy, momentum, hydraulics, equilibrium, diffusion, kinetics and other types of transport phenomena, which involve nonlinear and perhaps discontinuous functions (Pantelides and Renfro, 2012).
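IMPL's SLPQPE engine itself is not shown in this document, but the successive-linearization idea behind SLP-type solvers can be sketched in a few lines. The toy objective, starting point and trust-region logic below are purely illustrative assumptions, not IMPL's actual implementation:

```python
import numpy as np

# Toy nonlinear "process" objective: minimize f(x) = (x0-1)^2 + (x1-2)^2
def f(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

# Successive linear programming with a shrinking box trust region: the
# linearized objective over a box is minimized at a corner, i.e. a step
# of +/- delta along the negative gradient sign in each coordinate.
x = np.array([5.0, -3.0])
delta = 1.0
for _ in range(50):
    g = grad_f(x)
    step = -delta * np.sign(g)     # LP solution over the box |dx| <= delta
    if f(x + step) < f(x):
        x = x + step               # accept the improving step
    else:
        delta *= 0.5               # reject and shrink the trust region
```

For this convex toy the recursion walks corner-by-corner to the unconstrained minimum (1, 2); real SLP/QP engines add linearized process constraints and a QP step for curvature.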
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization - eArtius, Inc.
A Hybrid Multi-Gradient Explorer (HMGE) algorithm for the global multi-objective optimization of objective functions over a multi-dimensional domain is presented. The proposed hybrid algorithm relies on genetic variation operators for creating new solutions, but in addition to a standard random mutation operator, HMGE uses a gradient mutation operator, which improves convergence. Thus, random mutation helps find the global Pareto frontier, and gradient mutation improves convergence to that frontier. In this way, the HMGE algorithm combines the advantages of both gradient-based and GA-based optimization techniques: it is as fast as the pure gradient-based MGE algorithm, and it is able to find the global Pareto frontier, similarly to genetic algorithms (GAs). HMGE employs the Dynamically Dimensioned Response Surface Method (DDRSM) for calculating gradients. DDRSM dynamically recognizes the most significant design variables and builds local approximations based only on those variables. This allows gradients to be estimated at the cost of 4-5 model evaluations without significant loss of accuracy. As a result, HMGE efficiently optimizes highly non-linear models with dozens or hundreds of design variables and with multiple Pareto fronts. HMGE's efficiency is 2-10 times higher than that of the most advanced commercial GAs.
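The interplay of the two mutation operators can be sketched on a single-objective toy problem. Everything below is a hypothetical illustration: the objective, step sizes and the "most significant variables" filter (which loosely echoes DDRSM's variable screening, here done with full finite differences rather than DDRSM's cheap 4-5 evaluation estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                        # toy objective to minimize
    return float(np.sum(x**2))

def random_mutation(x, scale=0.5):
    """Standard GA-style mutation: perturb one random gene."""
    y = x.copy()
    i = rng.integers(len(x))
    y[i] += rng.normal(0.0, scale)
    return y

def gradient_mutation(x, f, step=0.1, h=1e-6, k=3):
    """Gradient mutation: estimate a finite-difference gradient, keep only
    the k most significant variables, and step downhill along them."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x)) / h
    keep = np.argsort(-np.abs(g))[:k]     # most significant variables only
    y = x.copy()
    y[keep] -= step * g[keep]
    return y

x = rng.normal(size=8)
x0 = x.copy()                             # remember the starting point
for _ in range(200):
    if rng.random() < 0.5:
        cand = gradient_mutation(x, sphere)   # exploitation
    else:
        cand = random_mutation(x)             # exploration
    if sphere(cand) < sphere(x):
        x = cand
```

Random mutation keeps the search global while gradient mutation does the fast local descent, which is the division of labor the abstract describes.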
ROBUST OPTIMIZATION FOR RCPSP UNDER UNCERTAINTY - ijseajournal
The aim of the present article is to optimize the robustness objective for the Resource-Constrained Project Scheduling Problem (RCPSP) under activity-duration uncertainty. The studied robustness consists of minimizing the worst-case performance, referred to as the min-max robustness objective, over a set of initial scenarios. We propose an enhanced GRASP approach as a solution to the given scenario-based robust model. This approach is based on different priority rules in the construction phase and a forward-backward heuristic in the improvement phase. We investigate two different benchmark data sets, the Patterson set and the PSPLIB J30 set. Experiments show that the proposed enhanced GRASP outperforms the basic procedure, as well as an evolutionary-based algorithm, in robustness optimization.
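The min-max robustness objective is simple to state in code: evaluate each candidate schedule under every duration scenario and pick the schedule whose worst case is best. The makespan matrix below is entirely hypothetical:

```python
import numpy as np

# Hypothetical performance matrix: makespans[i, s] is the makespan of
# candidate schedule i evaluated under duration scenario s.
makespans = np.array([
    [42.0, 55.0, 47.0],
    [44.0, 48.0, 46.0],
    [40.0, 60.0, 45.0],
])

worst_case = makespans.max(axis=1)      # worst makespan per schedule
best = int(np.argmin(worst_case))       # min-max robust choice
```

Note that schedule 2 has the best nominal makespan (40) but the worst robustness (60); the min-max criterion instead selects schedule 1, whose worst case is 48.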
A Hybrid Pareto Based Multi Objective Evolutionary Algorithm for a Partial Fl... - IOSRJM
The partial flexible open-shop scheduling problem (FOSP) is a combinatorial optimization problem. This work, with the objective of optimizing the makespan of an FOSP, uses a hybrid Pareto-based optimization (HPBO) approach. The problems are tested on Taillard's benchmark problems. The Nawaz, Enscore and Ham (NEH) meta-heuristic is introduced into the HPBO to direct the search into a quality space. Variable neighbourhood search (VNS) is employed to overcome the early convergence of the HPBO and helps in the global search. The results are compared with the standalone HPBO, traditional meta-heuristics and Taillard's upper bounds. Five problem sets are taken from Taillard's benchmark problems and are solved for various problem sizes, giving a total of 35 problems. The experimental results show that the solution quality of the FOSP can be improved if the search is directed into a quality space by the proposed LHPBO approach (LHPBO-NEH-VNS).
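The NEH heuristic referenced above is classically a constructive heuristic for permutation flow shops: order jobs by decreasing total work, then insert each job at the position that minimizes the partial makespan. A minimal sketch with made-up processing times (not Taillard's data):

```python
def makespan(seq, p):
    """Permutation flow-shop makespan; p[job][machine] = processing time."""
    m = len(p[0])
    comp = [0.0] * m                      # completion time per machine
    for j in seq:
        comp[0] += p[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + p[j][k]
    return comp[-1]

def neh(p):
    """NEH: sort jobs by decreasing total work, then best-position insert."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        best = min((makespan(seq[:i] + [j] + seq[i:], p), i)
                   for i in range(len(seq) + 1))
        seq.insert(best[1], j)
    return seq

p = [[3, 4, 6], [5, 2, 3], [1, 6, 2], [4, 4, 4]]   # 4 jobs x 3 machines
order = neh(p)
```

On this tiny instance NEH finds the order [0, 2, 1, 3] with makespan 23, beating the natural order [0, 1, 2, 3], whose makespan is 24.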
arvato supports companies in their European expansion - arvato France
Entering a new foreign market is always a challenge for companies. Even when the product is convincing, many obstacles must be overcome for a successful international expansion. To help companies overcome these difficulties, arvato has just published the GlobeX white paper in collaboration with the consulting firm SVG Partners. The guide is aimed in particular at American technology companies planning an expansion into European markets.
Congregation P'nai Tikvah honored Rabbi Yocheved Mintz with this special slide presentation on June 18, 2016 at the 100 Blessings: Havdallah Celebration.
Advanced Production Accounting of an Olefins Plant Industrial Modeling Framew... - Alkis Vazacopoulos
Presented in this short document is a description of what we call "Advanced" Production Accounting (APA) applied to a small Olefins Plant found in Sanchez and Romagnoli (1996). APA is the term given to the technique of vetting, screening or cleaning the past production data using statistical data reconciliation and regression (DRR) when continuous-processes are assumed to be at steady-state (Kelly and Hedengren, 2013), i.e., there is no significant material accumulation. For this case, the model and data define a simultaneous mass or volume linear DRR problem. Figure 1a shows the Olefins Plant using simple number indices for both the nodes and streams, while Figure 1b depicts the same problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012).
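For the steady-state linear case, DRR reduces to weighted least squares subject to the mass balances, which has a closed form. The single-node flowsheet, measurements and standard deviations below are hypothetical, not the Sanchez and Romagnoli plant:

```python
import numpy as np

# Hypothetical splitter node: stream 1 in, streams 2 and 3 out.
# Mass balance: x1 - x2 - x3 = 0.
A = np.array([[1.0, -1.0, -1.0]])
m = np.array([100.0, 61.0, 42.0])       # raw (inconsistent) measurements
sd = np.array([2.0, 1.0, 1.0])          # measurement standard deviations

# Weighted least-squares reconciliation:
#   min (x - m)^T W (x - m)  s.t.  A x = 0,  with W = diag(1/sd^2)
V = np.diag(sd**2)                       # measurement covariance
K = V @ A.T @ np.linalg.inv(A @ V @ A.T) # reconciliation gain
x = m - K @ (A @ m)                      # reconciled flows
```

The raw data violate the balance by 3 units; the adjustment is spread in proportion to each meter's variance (the least-trusted inlet meter absorbs most of it), and the reconciled flows close the balance exactly.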
Presented in this short document is a description of what we call "Partitioning" and "Positioning". Partitioning is the notion of decomposing the problem into smaller sub-problems along its "hierarchical" (Kelly and Zyngier, 2008), "structural" (Kelly and Mann, 2004), "operational" (Kelly, 2006), "temporal" (Kelly, 2002) and now "phenomenological" (Kelly, 2003; Kelly and Mann, 2003; Kelly and Zyngier, 2014; Menezes, 2014) dimensions. Positioning is the ability to configure the lower and upper hard bounds and target soft bounds for any time-period over the future time-horizon within the problem or sub-problem and is especially useful to fix variables (i.e., their lower and upper bounds are set equal), which will ultimately remove or exclude these variables from the solver's model or matrix.
Advanced Parameter Estimation (APE) for Motor Gasoline Blending (MGB) Indust... - Alkis Vazacopoulos
Presented in this short document is a description of how to model and solve advanced parameter estimation (APE) problems in IMPL. APE is the term given to the application of estimating, fitting or calibrating parameters in models involving a network, topology, superstructure or flowsheet. When estimating parameters with multiple linear regression (MLR), ordinary least squares (OLS), ridge regression (RR), principal component regression (PCR) and partial least squares (PLS) there is no explicit model but simply an X-block and Y-block of data. Hence, these methods are referred to as “non-parametric” or “data-based” methods as opposed to the “parametric” or “model-based” method used here. To solve these types of problems we use what is commonly referred to as “error-in-variables” (EIV) regression which is conveniently implemented as nonlinear data reconciliation and regression (NDRR) using the technology found in Kelly (1998a; 1998b; 1999) and Kelly and Zyngier (2008a). The primary benefit of using EIV (NDRR) over the other regression methods is that we can easily handle the inclusion of conservation laws and constitutive relations, explicitly, a must for any industrial estimation problem (IEP).
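IMPL's NDRR handles general nonlinear, networked models, but the core error-in-variables idea can be shown on the simplest case: a straight-line fit where both x and y are measured with error. Total least squares via SVD is the classical EIV solution for that case; the data below are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# True linear relation y = 2x + 1, with noise on BOTH x and y
n = 500
x_true = np.linspace(0.0, 10.0, n)
y_true = 2.0 * x_true + 1.0
x_obs = x_true + rng.normal(0.0, 0.3, n)
y_obs = y_true + rng.normal(0.0, 0.3, n)

# Error-in-variables fit via total least squares: the line minimizing
# orthogonal (not just vertical) distances is given by the smallest
# right singular vector of the centered data matrix.
xm, ym = x_obs.mean(), y_obs.mean()
M = np.column_stack([x_obs - xm, y_obs - ym])
_, _, Vt = np.linalg.svd(M, full_matrices=False)
a, b = Vt[-1]                       # normal vector of the best-fit line
slope = -a / b
intercept = ym - slope * xm
```

Ordinary least squares on the same data would shrink the slope toward zero (attenuation bias) because it attributes all the noise to y; the EIV fit does not.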
Presented in this short document is a description of modeling and solving partial differential equations (PDEs) in both the temporal and spatial dimensions using IMPL. The sample PDE problem is taken from Cutlip and Shacham (1999 and 2014) and models the process of unsteady-state heat transfer or conduction in a one-dimensional (1D) slab with one face insulated and constant thermal conductivity as discussed by Geankoplis (1993).
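The slab problem can be reproduced in a few lines of explicit finite differences (a hedged stand-in for IMPL's discretization; the grid, diffusivity and boundary values below are illustrative, not the exact Cutlip and Shacham data):

```python
import numpy as np

# Unsteady 1-D conduction in a slab: dT/dt = alpha * d2T/dx2.
# Left face held at 100, right face insulated (dT/dx = 0), initial T = 0.
alpha, L, nx = 1e-5, 0.1, 21
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                  # stable: alpha*dt/dx^2 <= 0.5
T = np.zeros(nx)
T[0] = 100.0

for _ in range(2000):
    Tn = T.copy()
    # FTCS update of the interior nodes
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
    T[-1] = T[-2]                          # insulated face: zero gradient
    T[0] = 100.0                           # fixed-temperature face
```

After 2000 steps the Fourier number is about 2, so the profile is essentially at steady state: nearly uniform at the boundary temperature, decreasing monotonically toward the insulated face.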
Presented in this short document is a description of what we call "Advanced" Property Tracking or Tracing (APT). APT is the term given to the technique of predicting, simulating, calculating or estimating the properties (i.e., densities, compositions, conditions, qualities, etc.) in a network or superstructure with significant inventory using statistical data reconciliation and regression (DRR).
Presented in this short document is a description of what we call "Phasing" and "Planuling". Phasing is a variation of the sequence-dependent changeover problem (Kelly and Zyngier, 2007; Balas et al., 2008) except that the sequencing, cycling or phasing is fixed as opposed to being variable or free. Planuling is a portmanteau of planning and scheduling where we "schedule" slow processes and "plan" fast processes together inside the same time-horizon; it can also be considered "hybrid" planning and scheduling.
Presented in this short document is a description of what is called a "Pipeline Scheduling Optimization Problem", first described in Rejowski and Pinto (2003), where they modeled the first-in-first-out (FIFO) and multi-product nature of the segregated pipeline using both discretized space (multi-batches, packs or pipes) and time (multi-intervals, slots or periods). The same MILP model can also be found in Zyngier and Kelly (2009) along with other related production/process objects.
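The FIFO displacement mechanics behind the MILP's space discretization can be illustrated with a plain queue: pumping one pack of product into the inlet pushes the oldest pack out of the outlet. The linefill and pumping schedule below are made-up:

```python
from collections import deque

# A segregated multi-product pipeline discretized into 5 equal "packs";
# the left end of the deque is the outlet, the right end is the inlet.
pipeline = deque(["A", "A", "B", "B", "B"])     # initial linefill

delivered = []
schedule = ["C", "C", "C", "A", "A"]            # product pumped per period
for prod in schedule:
    pipeline.append(prod)                       # inject a pack at the inlet
    delivered.append(pipeline.popleft())        # displace a pack at the outlet
```

After the five periods the original linefill has been delivered in FIFO order and the pipeline holds the newly pumped sequence; the MILP layers product-changeover and demand constraints on top of exactly this displacement logic.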
Smooth-and-Dive Accelerator: A Pre-MILP Primal Heuristic applied to Scheduling - Alkis Vazacopoulos
This article describes an effective and simple primal heuristic to greedily encourage a reduction in the number of binary or 0-1 logic variables before an implicit enumerative-type search heuristic is deployed to find integer-feasible solutions to "hard" production scheduling problems. The basis of the technique is to apply the well-known smoothing functions used to solve complementarity problems to the local optimization problem of minimizing the weighted sum, over all binary variables, of each variable multiplied by its complement. The basic algorithm of the "smooth-and-dive accelerator" (SDA) is to solve successive linear programming (LP) relaxations with the smoothing functions added to the existing problem's objective function and to use, if required, a sequence of binary-variable fixings known as "diving". If the smoothing-function term is not driven to zero as part of the recursion, then a branch-and-bound or branch-and-cut search heuristic is called to close the procedure and find at least integer-feasible solutions. The heuristic's effectiveness is illustrated by its application to an oil refinery's crude-oil blendshop scheduling problem, which has commonality with many other production scheduling problems in the continuous and semi-continuous (CSC) process domains.
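The smoothing idea can be demonstrated on a toy relaxation whose LP is solvable by inspection. Everything here is an assumption-laden sketch: the penalty sum x*(1-x) stands in for the paper's smoothing functions, its successive linearization stands in for the LP recursion, and the diving and branch-and-bound fallback steps are omitted:

```python
import numpy as np

# Toy relaxation: min c.x  s.t.  sum(x) = k,  0 <= x <= 1.
# Smooth-and-dive idea: append mu * sum(x_i*(1 - x_i)) to the objective
# and solve successive LP linearizations; the concave penalty pushes
# each relaxed binary toward 0 or 1.
rng = np.random.default_rng(2)
n, k, mu = 12, 4, 5.0
c = rng.normal(size=n)

def solve_lp(d, k):
    """LP min d.x, sum(x)=k, x in [0,1]: set the k cheapest coords to 1."""
    x = np.zeros(len(d))
    x[np.argsort(d)[:k]] = 1.0
    return x

x = np.full(n, k / n)                     # fractional starting point
for _ in range(20):
    d = c + mu * (1.0 - 2.0 * x)          # linearized smoothing penalty
    x = solve_lp(d, k)

frac = float(np.sum(x * (1.0 - x)))       # 0 => all variables are 0/1
```

The gradient of the penalty, mu*(1 - 2x), rewards moving each variable further toward whichever side of 0.5 it already sits on, so the recursion settles on a binary vertex and the penalty term is driven to zero.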
Presented in this short document is a description of what is called Advanced Process Monitoring (APM) as described by Hedengren (2013). APM is the term given to the technique of estimating unmeasured but observable variables or "states" using statistical data reconciliation and regression (DRR) in an off-line or real-time environment and is also referred to as Moving Horizon Estimation (MHE) (Robertson et al., 1996). Essentially, the model and data define a simultaneous nonlinear and dynamic DRR problem where the model is either engineering-based (first-principles, fundamental, mechanistic, causal, rigorous) or empirical-based (correlation, statistical data-based, observational, regressed) or some combination of both (hybrid).
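For a linear model the horizon estimation problem is just a stacked least squares: trade off fitting the measurements against honoring the model dynamics over the window. The scalar system, noise levels and weights below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar process x[t+1] = a*x[t] + w[t] with noisy measurements y[t].
a, n = 0.95, 40
x = np.empty(n); x[0] = 5.0
for t in range(n - 1):
    x[t + 1] = a * x[t] + rng.normal(0.0, 0.02)
y = x + rng.normal(0.0, 0.5, n)            # measurements

# Horizon estimate: stack measurement residuals (z[t] - y[t])/sr and
# model residuals (z[t+1] - a*z[t])/sq into one least-squares system.
sr, sq = 0.5, 0.02                         # meas. / model error std devs
A = np.zeros((2 * n - 1, n))
b = np.zeros(2 * n - 1)
A[:n] = np.eye(n) / sr                     # measurement rows
b[:n] = y / sr
for t in range(n - 1):                     # model rows
    A[n + t, t] = -a / sq
    A[n + t, t + 1] = 1.0 / sq
z = np.linalg.lstsq(A, b, rcond=None)[0]   # reconciled state trajectory
```

Because the model is trusted more than the meters (sq << sr), the estimated trajectory z tracks the true states far better than the raw measurements do; a real MHE re-solves this window each sample and adds an arrival-cost term.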
Recent Advances in Flower Pollination Algorithm - Editor IJCATR
The Flower Pollination Algorithm (FPA) is a nature-inspired algorithm based on the pollination process of plants. Recently, FPA has become a popular algorithm in the evolutionary computation field due to its superiority to many other algorithms. Consequently, in this paper, FPA, its improvements, its hybridizations and its applications in many fields, such as operations research, engineering and computer science, are discussed and analyzed. Based on its applications in the field of optimization, this algorithm appears to have a better convergence speed than other algorithms. The survey investigates the differences between FPA versions as well as their applications. In addition, several future improvements are suggested.
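For readers unfamiliar with FPA, the basic Yang-style algorithm alternates global pollination (a Lévy-flight step toward the current best flower) with local pollination (mixing two random flowers), switched by a probability p. This is a hedged sketch with assumed parameter values, not a reference implementation:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(4)

def levy(size, beta=1.5):
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v)**(1 / beta)

def fpa(f, dim=5, n_flowers=20, p=0.8, iters=300, lo=-5.0, hi=5.0):
    """Basic Flower Pollination Algorithm sketch (minimization)."""
    X = rng.uniform(lo, hi, (n_flowers, dim))
    fit = np.array([f(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p:                  # global pollination
                cand = X[i] + levy(dim) * (best - X[i])
            else:                                 # local pollination
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = X[i] + rng.random() * (X[j] - X[k])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                       # greedy replacement
                X[i], fit[i] = cand, fc
                if fc < f(best):
                    best = cand.copy()
    return best, float(f(best))

best, val = fpa(lambda x: float(np.sum(x**2)))    # sphere test function
```

The heavy-tailed Lévy steps give occasional long jumps (global search) while most steps are short (local refinement), which is the balance the survey credits for FPA's convergence speed.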
TEACHING AND LEARNING BASED OPTIMISATION - Uday Wankar
Teaching-Learning-Based Optimization (TLBO) seems to be a rising star among a number of metaheuristics with relatively competitive performance. It is reported to outperform some well-known metaheuristics on constrained benchmark functions, constrained mechanical design, and continuous non-linear numerical optimization problems. Such a breakthrough has steered us towards investigating the secrets of TLBO's dominance. This report presents findings on TLBO qualitatively and quantitatively through code reviews and experiments, respectively.
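TLBO's two phases are simple enough to sketch directly: a teacher phase that pulls each learner toward the best solution relative to the class mean, and a learner phase where pairs of learners move toward whichever of them is better. The parameter values and test function below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def tlbo(f, dim=5, n=20, iters=200, lo=-5.0, hi=5.0):
    """Teaching-Learning-Based Optimization sketch (minimization)."""
    X = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(n):
            # Teacher phase: move toward the teacher, away from the mean
            tf = rng.integers(1, 3)               # teaching factor 1 or 2
            cand = np.clip(X[i] + rng.random(dim) * (teacher - tf * mean),
                           lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
            # Learner phase: learn from a random peer
            j = rng.integers(n)
            if fit[j] < fit[i]:
                cand = X[i] + rng.random(dim) * (X[j] - X[i])
            else:
                cand = X[i] + rng.random(dim) * (X[i] - X[j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
    b = int(np.argmin(fit))
    return X[b], float(fit[b])

xbest, val = tlbo(lambda x: float(np.sum(x**2)))  # sphere test function
```

One often-cited attraction, visible in the sketch, is that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count.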
Stock Decomposition Heuristic for Scheduling: A Priority Dispatch Rule Approach - Alkis Vazacopoulos
Highlighted in this article is a closed-shop scheduling heuristic which makes use of the traditional priority dispatch rule approach found in open-shop scheduling such as job-shop scheduling. Instead of prioritizing and scheduling one job or project (or stock-order) at a time, we schedule one stock or stock-group at a time, where a stock-group is a collection of individual stocks and their one or more stock-orders. These stocks can be feed-stocks, intermediate-stocks or product-stocks, of which we focus on product-stocks given that most production is demand-driven. A key feature of this heuristic is our ability to compress the production network or superstructure so that only those unit-operations necessary to produce the stocks in question are included in the model, thus reducing the size of the problem considerably at each iteration of the heuristic. The stock-specific network compression technique uses what we call a unit-capacity transshipment linear program to successively determine which unit-operations are redundant when making a particular stock. This heuristic is also particularly useful for those process industries that can potentially produce many product-stocks but only a fraction of them within the scheduling horizon: the model is significantly reduced at solve time to include only those stocks that are demanded, with redundant unit-operations removed. An illustrative example is provided with recycle loops (i.e., stock flow-reversals) and shared units or equipment (i.e., unit flow-reversals) that demonstrates the effectiveness and efficiency of the technique.
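The compression step can be approximated with simple backward reachability on the unit-to-stock topology: keep only the units that can lie on a production path to the demanded stock. Note this graph walk is only a stand-in for the article's unit-capacity transshipment LP (which also accounts for capacities); the flowsheet names below are invented:

```python
# Hypothetical superstructure: unit -> (stocks consumed, stocks produced)
arcs = {
    "U1": (["crude"], ["naphtha", "resid"]),
    "U2": (["naphtha"], ["gasoline"]),
    "U3": (["resid"], ["fuel_oil"]),
    "U4": (["naphtha"], ["jet"]),
}

def relevant_units(product):
    """Walk backwards from the demanded stock, collecting every unit that
    produces a needed stock and adding that unit's feeds to the need set."""
    needed, units = {product}, set()
    changed = True
    while changed:
        changed = False
        for u, (ins, outs) in arcs.items():
            if u not in units and needed & set(outs):
                units.add(u)
                needed |= set(ins)
                changed = True
    return units
```

Scheduling only gasoline keeps units U1 and U2 and drops U3 and U4 from the model, which is exactly the kind of per-stock shrinkage the heuristic exploits at each iteration.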
We tested ODH|CPLEX 4.24 on the Miplib Open-v7 models, a public collection of 286 models for which an optimal solution has not been proven. 257 of these are known to have a feasible solution.
ODH|CPLEX proved optimality on 6 models and, within 2 hours, found better solutions for 40% of the models with 12 threads and 35% with 8 threads. ODH|CPLEX matched the best known solutions on 21% of the models.
CPLEX Optimization Studio solves large-scale optimization problems and enables better business decisions and resulting financial benefits in areas such as supply chain management, operations, healthcare, retail, transportation, logistics and asset management. It has been applied in sectors as diverse as manufacturing, processing, distribution, retailing, transport, finance and investment. CPLEX Optimization Studio is an analytical decision support toolkit for rapid development and deployment of optimization models using mathematical and constraint programming. It combines an integrated development environment (IDE) with the powerful Optimization Programming Language (OPL) and high-performance ILOG CPLEX optimizer solvers. CPLEX Optimization Studio enables clients to: optimize business decisions with high-performance optimization engines; develop and deploy optimization models quickly by using flexible interfaces and prebuilt deployment scenarios; and create real-world applications that can significantly improve business outcomes. Optimization Direct has partnered with and entered into a technology licensing and distribution agreement with IBM. Combining the founders' industry and software experience with IBM's CPLEX Optimization Studio product and IBM's arsenal of optimization modeling and solving tools provides customers with some of the most powerful capabilities in the industry.
Missing-Value Handling in Dynamic Model Estimation using IMPL - Alkis Vazacopoulos
Presented in this short document is a description of how IMPL handles missing-values or missing-data when estimating dynamic models, which inherently involve time-lagged or time-shifted input and output variables. Missing-values in a data set imply that for some reason the data is not available, most likely due to a malfunctioning instrument or even a lack of proper accounting. Missing-data handling is relatively well-studied, especially for time-series or dynamic data, given that it is not as easy as removing, ignoring or deleting bad sections of data as when static or steady-state models are calibrated (Honaker and King, 2010; Smits and Baggelaar, 2010; Fisher and Waclawski, 2015). Unfortunately, all of their methods involve what is known as "imputation", i.e., replacing or substituting missing-data with some reasonably assumed value, which at the very least is a biased estimate. When regression techniques such as PLS and PCR are used (Nelson et al., 2006), missing-data can be handled without imputation by computing the input-output covariance matrices excluding the contribution from the missing-values, given the temporal and structural redundancy in the system. However, it is shown in Dayal (1996) that using PLS and other types of regression techniques, such as Canonical Correlation Regression (CCR) and Reduced Rank Regression (RRR), to fit non-parsimonious and non-parametric finite impulse/step response models (FIR/FSR) is not as reliable as fitting lower-order transfer functions, especially considering the robust stability of the resulting model predictive controller if that is its intended use.
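The imputation-free idea, computing covariances only from the samples where both variables are present (pairwise deletion), is easy to demonstrate. The simulated series below are an assumption, not IMPL's internals:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two correlated series with values missing at random (marked NaN):
n = 2000
u = rng.normal(size=n)
x = u + 0.1 * rng.normal(size=n)
y = 2.0 * u + 0.1 * rng.normal(size=n)
x[rng.random(n) < 0.2] = np.nan            # 20% of x missing
y[rng.random(n) < 0.2] = np.nan            # 20% of y missing

# Pairwise-complete covariance: use only samples where BOTH are present,
# rather than imputing the gaps with assumed values.
ok = ~np.isnan(x) & ~np.isnan(y)
cov_xy = np.mean((x[ok] - x[ok].mean()) * (y[ok] - y[ok].mean()))
slope = cov_xy / np.var(x[ok])             # regression-style estimate
```

With data missing at random, the pairwise-complete estimate recovers the true relationship (slope about 2) without the bias that substituting an assumed value would introduce.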
Finite Impulse Response Estimation of Gas Furnace Data in IMPL Industrial Mod... - Alkis Vazacopoulos
Presented in this short document is a description of how to estimate deterministic and stochastic non-parametric finite impulse response (FIR) models in IMPL, applied to industrial gas furnace data identical to that found in TSE-GFD-IMF using parametric transfer functions. The methodology of time-series analysis or system identification involves essentially three (3) stages (Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and (3) model checking and diagnostics. We do not address (1), which requires stationarity and seasonality assessment/adjustment, auto-, cross- and partial-correlation, etc. to establish the parametric transfer-function polynomial degrees, especially since we are using non-parametric FIR estimation. Instead we focus only on the parameter estimation and diagnostics. These types of parameter estimation problems involve the dynamic and nonlinear relationships shown below, and we solve them using IMPL's Sequential Equality-Constrained QP Engine (SECQPE) and Supplemental Observability, Redundancy and Variability Estimator (SORVE). Another type of non-parametric identification, known as subspace identification (Qin, 2006), can be used to estimate state-space models.
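The deterministic part of FIR estimation is, at heart, ordinary least squares on lagged inputs. A self-contained sketch with simulated data (not the Box-Jenkins gas furnace series, and not IMPL's SECQPE formulation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a SISO process with a known impulse response h, then recover
# h from input/output data by least squares on the lagged inputs.
h_true = np.array([0.0, 0.5, 0.3, 0.15, 0.05])   # true FIR coefficients
n, m = 1000, len(h_true)
u = rng.normal(size=n)                            # exciting input signal
y = np.convolve(u, h_true)[:n] + rng.normal(0.0, 0.01, n)

# Regressor matrix of lagged inputs: X[t, k] = u[t - k] (zeros before t=0)
X = np.column_stack(
    [np.concatenate([np.zeros(k), u[:n - k]]) for k in range(m)])
h_est = np.linalg.lstsq(X, y, rcond=None)[0]
```

Because the FIR model is non-parsimonious (one coefficient per lag), it needs no structure identification, which is exactly the trade-off the document contrasts with lower-order transfer functions.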
Our Industrial Modeling Service (IMS) involves several important (but rarely implemented) methods to significantly improve and advance your existing models and data. Since it is well-known that good decision-making requires good models and data, IMS is ideally suited to support this continuous-improvement endeavour. IMS is specifically designed to either co-exist with your existing design, planning, scheduling, etc. applications or these same models and data can be used seamlessly into our Industrial Modeling and Programming Language (IMPL) to create new value-added applications. The following techniques form the basis of our IMS offering.
This short note describes a relatively simple methodology, procedure or approach to increase the performance of already installed industrial models used for optimization, control, simulation and/or monitoring purposes. The method is called Excess or X-Model Regression (XMR) where the concept of “excess modeling” or an X-model is taken from the field of thermodynamics to describe the departure or residual behaviour of real (non-ideal) gases and liquids from their ideal state (Kyle, 1999; Poling et. al., 2001; Smith et. al., 2001). It has also been applied to model the non-ideal or nonlinear behaviour of blending motor gasoline octanes with its synergistic and antagonistic interactional effects (Muller, 1992).
The fundamental idea of XMR is to calibrate, train, fit or estimate, using actual data and multiple linear regression (MLR) or ordinary least squares (OLS), the deviations of the measured responses from the existing model responses. The existing model may be a glass, grey or black-box model (known or unknown, linear or nonlinear, implicit/open or explicit/closed) depending on the use of the model. That is, for optimization and control the model structure and parameters are available given that derivative information is required although for simulation and monitoring, the model may only be observed through the dependent output variables given the necessary independent input variables.
Presented in this short document is a description of how to model and solve multi-utility scheduling optimization (MUSO) problems in IMPL. Multi-utility systems (co/tri-generation) are typically found in petroleum refineries and petrochemical plants (multi-commodity systems) especially when fuel-gas (i.e., off-gases of methane and ethane) is a co- or by-product of the production from which multi-pressure heating-, motive- and process-steam are generated on-site. Other utilities include hydrogen, electricity, water, cooling media, air, nitrogen, chemicals, etc. where a multi-utility system is shown in Figure 1 with an intermediate or integrated utility (both produced and consumed) such as fuel-gas, steam or electricity. Itemized benefit areas just for better management of an integrated steam network can be found in Pelham (2013) where his sample multi-pressure steam utility flowsheet is found in Figure 2.
Presented in this short document is a description of what is well-known as Advanced Process Control (APC) applied to a small linear three (3) manipulated variable (MV) by two (2) controlled variable (CV) problem. These problems are also known as Model Predictive Control (MPC) (Grimm et. al., 1989) and Moving Horizon Control (MHC). Figure 1 shows the 3 x 2 APC problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012) as an Advanced Planning and Scheduling (APS) problem as opposed to a traditional APC problem.
Although there is a tremendous amount of stability, performance and robustness theory associated with APC which can be directly assumed to APS problems (Mastragostino et. al., 2014), our approach is to show that APC can equally be set into an APS framework except that APS has far less sensitivity technology due to its inherent discrete and nonlinear modeling complexities i.e., especially non-convexities. In order to eliminate the steady-state offset between the actual value and its target, it is well-known to apply bias-updating though other forms of “parameter-feedback” is possible. Typically, APS applications only employ “variable-feedback” i.e., opening or initial inventories, properties, etc. but this alone will not alleviate the steady-state offset as demonstrated by Kelly and Zyngier (2008).
IMPL-SLPQPE versus SQP for Process Optimization
(IMPL-SLPQPEvSQP)
i n d u s t r I A L g o r i t h m s LLC. (IAL)
www.industrialgorithms.com
May 2014
Introduction
Presented in this document is a short discussion on using IMPL's SLPQPE algorithm to solve
process optimization problems in either off-line or on-line environments, the latter also known
as real-time optimization (RTO). Process optimization is somewhat different from production
optimization in the sense that it has more "constitutive relations" involving only intensive
variables. Both types of optimization involve "conservation laws" and "correlative equations",
which usually involve a mix of extensive and intensive variables (Kelly, 2004). Whereas
production optimization deals more with material, meta-material (nonlinear), logic and logistics
(discrete) balances (Zyngier and Kelly, 2009; Kelly and Zyngier, 2015), process optimization is
inherently more detailed and includes energy, exergy, momentum, hydraulics, equilibrium,
diffusion, kinetics and other types of transport phenomena, which involve nonlinear and perhaps
discontinuous functions (Pantelides and Renfro, 2012).
The purpose of this short discussion is to highlight that IMPL's SLPQPE algorithm, a
relatively standard implementation of successive linear programming (SLP) as found in Palacios-
Gomez et al. (1982) and Zhang et al. (1985), can be used effectively and efficiently to solve
industrial-sized process industry optimization problems that are usually solved using successive
quadratic programming (SQP) algorithms such as IPOPT, CONOPT, filterSQP, KNITRO,
SNOPT, WORHP, etc.
Starting in the late 1980s, Shell Oil Company invested a tremendous amount of resources and
effort to develop and deploy real-time process optimization around the world, but especially in
the United States and Canada. At the time, SLP and a more primitive form referred to as the
Method of Approximate Programming (MAP) (and later Distributive Recursion (DR)) had been
used successfully since the 1960s for nonlinear planning (Griffith and Stewart, 1961) in
various types of applications such as feedstock selection, refinery process target-setting
(preliminary scheduling or operations programming) and product portfolio optimization, and
SLP was also being considered as the solver. To help determine which solver to use, Dr. Mike
Morshedi (previously with DMC Corporation, Dot Products Inc., PAS and Honeywell) devised a
small test problem, described below, to show that SQP was superior to SLP for process
optimization, given that process models usually involve many terms with a variable times itself,
which leads to diagonal elements in the Hessian, i.e., the second-order derivatives matrix of the
Lagrange function (Renfro, 2010). During this period, even though SLP was popular for nonlinear
planning, it was considered to be inferior to SQP given recent academic studies at the time.
The well-known disadvantage of SLP is that many more SLP major iterations are required to
converge on super-basic variables, i.e., degrees-of-freedom at the solution (variables that are
neither basic nor non-basic). This is absolutely true if we try to solve a QP problem with an SLP
algorithm, but our approach with SLPQPE is to call a QP sub-problem when quadratic variables
exist in the objective function, which drastically reduces the number of major iterations. This is
especially effective for solving nonlinear data reconciliation and parameter estimation problems
or other hierarchical types of optimization problems (Kelly and Zyngier, 2008a) where we need
to minimize deviations from targets or setpoints in a 2-norm sense, such as advanced nonlinear
process control strategies and decomposed, layered, tiered or coordinated optimization
approaches.
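The slow convergence of SLP on super-basic variables can be seen even in one dimension. The
pure-Python sketch below (our own toy illustration, not IMPL code) applies SLP with a halving
step bound to minimize (x - 2)^2 on [0, 4]: because the optimum is interior (x is super-basic),
each linearized sub-problem drives x to the edge of the step bound and the iterate zigzags
around x = 2, so many major iterations and step-bound reductions are needed, whereas a QP
sub-problem would land on the interior optimum directly.

```python
# Toy SLP (successive linear programming) on: min f(x) = (x - 2)^2, 0 <= x <= 4.
# The optimum x* = 2 is interior (super-basic), so each LP sub-problem pushes x
# to the edge of the step bound and the iterate zigzags around x*.

def f(x):
    return (x - 2.0) ** 2

def slp_1d(x, lo=0.0, hi=4.0, delta=1.0, max_major=200, tol=1e-8):
    """Minimize f by SLP: linearize at x, solve the (here trivial) LP over the
    step bound [-delta, +delta], and halve delta whenever a step is rejected."""
    majors = 0
    for _ in range(max_major):
        majors += 1
        g = 2.0 * (x - 2.0)          # gradient of the linearization at x
        if g == 0.0 or delta < tol:  # converged, or step bound exhausted
            break
        # LP solution: move as far as the step bound allows, downhill.
        trial = min(hi, max(lo, x - delta * (1.0 if g > 0.0 else -1.0)))
        if f(trial) < f(x):
            x = trial                # accept the step
        else:
            delta *= 0.5             # reject the step and shrink the bound
    return x, majors

x_star, n_major = slp_1d(0.3)
print(x_star, n_major)  # converges near x = 2 only after dozens of major iterations
```

A QP sub-solver applied to the same quadratic objective would recover x = 2 in a single
sub-problem, which is precisely the motivation for SLPQPE's QP sub-problem option.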
However, a well-known advantage of SLP is that if the Hessian of the Lagrange function is
indefinite, i.e., has both positive and negative eigenvalues (defined by its inertia), then this by
itself encourages vertex or LP types of solutions and as such requires fewer SLP major
iterations, given the smaller number of super-basic variables at the solution. This explains to
some degree why SQPs have not been successfully applied to nonlinear planning problems with
multiple time-periods, although this was attempted, for example, by Imperial Oil Ltd.
(ExxonMobil Canada) in the mid-1990s and later abandoned for a traditional SLP approach.
Another less-known advantage of SLP is the fact that it can exploit the exceptional power of
commercial LP sub-solvers at each SLP major iteration, with their powerful pre-processing
capability, advanced crash basis techniques, highly efficient linear algebra exploiting hyper-
sparsity, etc. SQPs, on the other hand, use somewhat less efficient basis factorization packages
and unfortunately do not (and cannot) take advantage of the recent advances in LP technology.
Small Test Problem – See Appendices B and C
This small test problem was modified from a linear programming test problem by simply
multiplying each variable by itself, as follows:
Maximize:   F3 = -1*x1*x1 - 6*x2*x2 + 7*x3*x3 - x4*x4 - 5*x5*x5
Subject to: F1 = 5*x1*x1 - 4*x2*x2 + 13*x3*x3 - 2*x4*x4 + x5*x5 - 20 = 0
            F2 = x1*x1 - x2*x2 + 5*x3*x3 - x4*x4 + x5*x5 - 8 = 0
            0 <= x1, x2, x3, x4, x5 <= 10
This is a non-convex problem because it is nonlinear (bilinear) and has at least one constraint
with plus and minus nonlinear terms; it has exhibited more than one local optimum (-28.0 and
8.0), with the largest or global maximum equal to 8.57143 at the following solution:
x1 = 0.0
x2 = 0.75593
x3 = 1.30931
x4 = 0.0
x5 = 0.0
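The reported global maximum of 8.57143 can be verified by hand: substituting y_i = x_i*x_i
(with y_i >= 0) makes the whole problem linear in y, and since only x2 and x3 are non-zero at
the global solution, F1 = 0 and F2 = 0 reduce to a 2-by-2 linear system in y2 and y3. A minimal
pure-Python check of this reasoning (our own verification, not part of the original document):

```python
# Verify the reported global optimum via the substitution y_i = x_i**2.
# With y1 = y4 = y5 = 0, the equality constraints reduce to:
#   F1: -4*y2 + 13*y3 = 20
#   F2:   -y2 +  5*y3 =  8
import math

# Solve the 2x2 linear system by Cramer's rule.
a11, a12, b1 = -4.0, 13.0, 20.0
a21, a22, b2 = -1.0, 5.0, 8.0
det = a11 * a22 - a12 * a21          # -20 + 13 = -7
y2 = (b1 * a22 - b2 * a12) / det     # (100 - 104) / -7 = 4/7
y3 = (a11 * b2 - a21 * b1) / det     # (-32 + 20) / -7 = 12/7

x2, x3 = math.sqrt(y2), math.sqrt(y3)
f3 = -6.0 * y2 + 7.0 * y3            # objective with y1 = y4 = y5 = 0

print(round(x2, 5), round(x3, 5), round(f3, 5))  # 0.75593 1.30931 8.57143
```

The global maximum is therefore exactly 60/7, matching the 8.57143 and the x2, x3 values
quoted above.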
Supposedly this problem was solved with an early version of Shell's SQP called OPERA and
was compared to MINOS, where MINOS was assumed to be similar to SLP and failed to solve
this problem adequately (Renfro, 2010). This assumption seems reasonable on the surface and
was also made in Poku et al. (2004), but MINOS and SLP are structurally different.
Unfortunately, a commercially available SLP code did not appear until 2003 with Dash
Optimization's (now FICO) Xpress-SLP (with startup funding partially provided by Honeywell),
thus making a true comparison of SQP with SLP difficult.
If we revisit this small test problem using IPOPT as our SQP, it solves with an average of about
30 iterations from one hundred (100) randomized starting-points and finds the global optimum
100% of the time. Compared to this, SLPQPE using an LP sub-solver solves with an average of
10 major iterations but finds the global optimum only approximately 50% of the time. However,
if we modify the objective to be a minimization of an explicit quadratic function (see Appendix C),
then IPOPT still finds the global optimum 100% of the time but with an average of 90 iterations,
while SLPQPE using a QP sub-solver finds the global optimum 93% of the time with an average
of 11 major iterations.
Discussion
As presented above, SQP (IPOPT) is able to find the global optimum reliably for both
formulations of the small test problem, which confirms that SQP is appropriate for these
types of problems. SLPQPE using an LP sub-solver was only able to find the global optimum
50% of the time, but it was able to find the solution in a reasonable number of major iterations.
Using the explicit quadratic objective function formulation, IPOPT was again able to find the
global optimum 100% of the time but with an increase in its number of iterations, whereas
SLPQPE using a QP sub-solver significantly improved its chances of finding the global optimum
from 50% to 93% with only a slight increase in the number of major iterations.
Now a word with regard to the practicality of the small test problem as a representative
process engineering optimization type of problem. Although it has variable-times-itself terms,
which as mentioned can be found in both constitutive relations and correlative equations,
creating diagonal elements in the Hessian of the Lagrange function (which most likely yield
super-basic variables), it is not a true process engineering problem, given that it was
generated from a simple LP test problem by arbitrarily squaring each variable in each
expression. Other types of constraints such as conservation laws (usually extensive times
intensive variables, forming off-diagonal elements in the Hessian) are not properly represented
in this problem, which makes any generalization that SQP is better than SLP for process
optimization incomplete. In fact, when we modify the formulation to solve with a QP sub-solver
in SLPQPE, its performance is very strong compared to IPOPT, for example.
Therefore, we can conclude that generally accepting that SQP is “better” for process
optimization is perhaps not as justified as we once thought and that SLP with an LP or QP sub-
solver is a strong competitor to SQP even for process-oriented and not just production-oriented
types of problems.
With IMPL we also believe that having access to many types of solvers and sub-solvers is
paramount to obtaining robust, accurate and fast solutions to difficult process industry types of
problems, whether they are planning, scheduling, control, data reconciliation, etc., and that is why
IMPL has bindings to all types of third-party nonlinear (and mixed-integer) solvers. And, given
IMPL's small system-architecture footprint (SIIMPLE), it is very easy to implement a "poor man's
parallelism" by running IMPL with different solvers/sub-solvers on as many computer processors
as are available, further increasing the reliability, precision and speed of finding good solutions to
industrial optimization problems, with significant benefits (Kelly et al., 2014).
In addition, IMPL allows data reconciliation and regression problems to be solved using the
same model used to perform either economic and/or efficiency-oriented optimizations. Coupled
with measurement and parameter feedback (Kelly and Zyngier, 2008b), whereby the "coefficients"
in the correlative equations are fit or estimated using either off-line or on-line data, this affords
the use of simpler types of sub-models, which can be considered "hybrid" modeling, i.e.,
combining engineering and estimated empirical models together. An advantage of IMPL for this
type of parameter estimation is that it computes not only the values but also their variances for
any size and complexity of flowsheet, enabling gross-errors, defects, faults, outliers, anomalies,
etc. to be detected and isolated effectively.
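To make the data reconciliation idea concrete, the classic linear, weighted least-squares
reconciliation has a closed-form solution: given measurements xm with (diagonal) variance
matrix V and a linear conservation law a'x = 0, the reconciled values are
xhat = xm - V a (a' V a)^-1 (a' xm). The pure-Python sketch below applies this textbook formula
(a generic illustration, not IMPL's SECQPE/SORVE themselves) to a single mixing node with two
measured inflows and one measured outflow, and also computes the standardized balance
residual, the same kind of "nodal/constraint test" statistic mentioned for gross-error detection:

```python
# Linear data reconciliation around one node: x1 + x2 - x3 = 0.
# Closed-form weighted least-squares adjustment for a single balance:
#   xhat = xm - V a (a' V a)^-1 (a' xm),  with a = [1, 1, -1].

a = [1.0, 1.0, -1.0]      # balance coefficients: in1 + in2 - out = 0
xm = [10.1, 5.2, 15.0]    # raw flow measurements
var = [1.0, 1.0, 1.0]     # measurement variances (diagonal of V)

r = sum(ai * xi for ai, xi in zip(a, xm))        # balance residual, here 0.3
s = sum(ai * ai * vi for ai, vi in zip(a, var))  # a' V a, here 3.0
xhat = [xi - vi * ai * r / s for xi, ai, vi in zip(xm, a, var)]

# Standardized residual ("nodal test"); values well above ~1.96 would
# flag a likely gross error in one of the measurements.
node_test = abs(r) / (s ** 0.5)

print(xhat)       # ~[10.0, 5.1, 15.1]; the balance now closes (to round-off)
print(node_test)  # small here, so no gross error is indicated
```

IMPL's SECQPE/SORVE generalize this to nonlinear constraints, arbitrary flowsheet
redundancy and the full suite of observability and variability statistics.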
References
Griffith, R.E., Stewart, R.A., "A nonlinear programming technique for the optimization of
continuous processing systems", Management Science, 7, 379, (1961).
Palacios-Gomez, F., Lasdon, L., Engquist, M., “Nonlinear optimization by successive linear
programming”, Management Science, 28, 1106, (1982).
Zhang, J., Kim, N-H., Lasdon, L., “An improved successive linear programming algorithm”,
Management Science, 31, 1312-1331, (1985).
Poku, M.Y.B., Biegler, L.T., Kelly, J.D., “Nonlinear optimization with many degrees of freedom in
process engineering”, Industrial & Engineering Chemistry Research, 43, 6803-6812, (2004).
Kelly, J.D., "Formulating large-scale quantity-quality bilinear data reconciliation problems",
Computers & Chemical Engineering, 28, 357, (2004).
Kelly, J.D., Zyngier, D., "Hierarchical decomposition heuristic for scheduling: coordinated
reasoning for decentralized and distributed decision-making problems", Computers & Chemical
Engineering, 32, 2684, (2008a).
Kelly, J.D., Zyngier, D., "Continuously improve planning and scheduling models with parameter
feedback", FOCAPO 2008, July, (2008b).
Zyngier, D., Kelly, J.D., "Multi-product inventory logistics modeling in the process industries", In:
W. Chaovalitwonse, K.C. Furman and P.M. Pardalos, Eds., Optimization and Logistics
Challenges in the Enterprise", Springer, 61-95, (2009).
Renfro, J.G., personal communication, (2010).
Pantelides, C.C., Renfro, J.G., "The online use of first-principles in process operations: review,
current status & future trends", FOCAPO/CPC 2012, January, (2012).
Kelly, J.D., Menezes, B.C., Grossmann, I.E., “Distillation blending and cutpoint temperature
optimization using monotonic interpolation”, submitted to Industrial & Engineering Chemistry
Research, (2014).
Kelly, J.D., Zyngier, D., "Unit operation nonlinear modeling for planning and scheduling
applications", in: K.C. Furman et al. (eds.), Optimization and Analytics in the Oil & Gas
Industries, (2015) (accepted).
Appendix A - IMPL and SIIMPLE
To implement the mathematical formulation of this and other systems, IAL offers a unique
approach and is incorporated into our Industrial Modeling Programming Language we call IMPL.
IMPL has its own modeling language called IML (short for Industrial Modeling Language) which
is a flat or text-file interface as well as a set of API's which can be called from any computer
programming language such as C, C++, Fortran, C#, VBA, Java (SWIG), Python (CTYPES)
and/or Julia (CCALL) called IPL (short for Industrial Programming Language) to both build the
model and to view the solution. Models can be a mix of linear, mixed-integer and nonlinear
variables and constraints and are solved using a combination of LP, QP, MILP and NLP solvers
such as COINMP, GLPK, LPSOLVE, SCIP, CPLEX, GUROBI, LINDO, XPRESS, CONOPT,
IPOPT, KNITRO and WORHP as well as our own implementation of SLP called SLPQPE
(Successive Linear & Quadratic Programming Engine) which is a very competitive alternative to
the other nonlinear solvers and embeds all available LP and QP solvers.
In addition and specific to DRR problems, we also have a special solver called SECQPE
standing for Sequential Equality-Constrained QP Engine which computes the least-squares
solution and a post-solver called SORVE standing for Supplemental Observability, Redundancy
and Variability Estimator to estimate the usual DRR statistics. SECQPE also includes a
Levenberg-Marquardt regularization method for nonlinear data regression problems and can be
presolved using SLPQPE i.e., SLPQPE warm-starts SECQPE. SORVE is run after the
SECQPE solver and also computes the well-known "maximum-power" gross-error statistics
(measurement and nodal/constraint tests) to help locate outliers, defects and/or faults i.e., mal-
functions in the measurement system and mis-specifications in the logging system.
The underlying system architecture of IMPL is called SIIMPLE (we hope literally) which is short
for Server, Interfacer (IML), Interacter (IPL), Modeler, Presolver Libraries and Executable. The
Server, Presolver and Executable are primarily model or problem-independent whereas the
Interfacer, Interacter and Modeler are typically domain-specific i.e., model or problem-
dependent. Fortunately, for most industrial planning, scheduling, optimization, control and
monitoring problems found in the process industries, IMPL's standard Interfacer, Interacter and
Modeler are well-suited and comprehensive to model the most difficult of production and
process complexities allowing for the formulations of straightforward coefficient equations,
ubiquitous conservation laws, rigorous constitutive relations, empirical correlative expressions
and other necessary side constraints.
User, custom, adhoc or external constraints can be augmented or appended to IMPL when
necessary in several ways. For MILP or logistics problems we offer user-defined constraints
configurable from the IML file or the IPL code where the variables and constraints are
referenced using unit-operation-port-state names and the quantity-logic variable types. It is also
possible to import a foreign *.ILP file (row-based MPS file) which can be generated by any
algebraic modeling language or matrix generator. This file is read just prior to generating the
matrix and before exporting to the LP, QP or MILP solver. For NLP or quality problems we offer
user-defined formula configuration in the IML file and single-value and multi-value function
blocks writable in C, C++ or Fortran. The nonlinear formulas may include intrinsic functions
such as EXP, LN, LOG, SIN, COS, TAN, MIN, MAX, IF, NOT, EQ, NE, LE, LT, GE, GT and CIP,
LIP, SIP and KIP (constant, linear and monotonic spline interpolations) as well as user-written
extrinsic functions (XFCN). It is also possible to import another type of foreign file called the
*.INL file where both linear and nonlinear constraints can be added easily using new or existing
IMPL variables.
Appendix B – SQPtestRTO1.IML File
i M P l (c)
Copyright and Property of i n d u s t r I A L g o r i t h m s LLC.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Calculation Data (Parameters)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sCalc,@sValue
START,-1.0
BEGIN,0.0
END,1.0
PERIOD,1.0
&sCalc,@sValue
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Chronological Data (Periods)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
@rPastTHD,@rFutureTHD,@rTPD
START,END,PERIOD
@rPastTHD,@rFutureTHD,@rTPD
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Construction Data (Pointers)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sUnit,&sOperation,@sType,@sSubtype,@sUse
BLACKBOX,,processc,blackbox,
&sUnit,&sOperation,@sType,@sSubtype,@sUse
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Capacity Data (Prototypes)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sUnit,&sOperation,@rRate_Lower,@rRate_Upper
BLACKBOX,,0.0,0.0
&sUnit,&sOperation,@rRate_Lower,@rRate_Upper
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Condition Data (Properties)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sCondition
X1
X2
X3
X4
X5
F1
F2
F3
&sCondition
ConditionsUOCondition-&sUnit,&sOperation,&sCondition,@sType,@rValue,@sValue
BLACKBOX,,F1,?,3,5.0*X1*X1 - 4.0*X2*X2 + 13.0*X3*X3 - 2.0*X4*X4 + X5*X5 - 20.0
BLACKBOX,,F2,?,3,X1*X1 - X2*X2 + 5.0*X3*X3 - X4*X4 + X5*X5 - 8.0
BLACKBOX,,F3,?,3,-X1*X1 - 6.0* X2*X2 + 7.0*X3*X3 - X4*X4 - 5.0*X5*X5
ConditionsUOCondition-&sUnit,&sOperation,&sCondition,@sType,@rValue,@sValue
&sUnit,&sOperation,&sCondition,@rCondition_Lower,@rCondition_Upper,@rCondition_Target
BLACKBOX,,X1,0.0,10.0,
BLACKBOX,,X2,0.0,10.0,
BLACKBOX,,X3,0.0,10.0,
BLACKBOX,,X4,0.0,10.0,
BLACKBOX,,X5,0.0,10.0,
BLACKBOX,,F1,0.0,0.0,
BLACKBOX,,F2,0.0,0.0,
BLACKBOX,,F3,-100.0,100.0,
&sUnit,&sOperation,&sCondition,@rCondition_Lower,@rCondition_Upper,@rCondition_Target
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Cost Data (Pricing)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sUnit,&sOperation,&sCondition,@rConditionPro_Weight,@rConditionPer1_Weight,@rConditionPer2_Weight,@rConditionPen_Weight
BLACKBOX,,F3,1.0,,,
&sUnit,&sOperation,&sCondition,@rConditionPro_Weight,@rConditionPer1_Weight,@rConditionPer2_Weight,@rConditionPen_Weight
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Command Data (Future Provisos)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sUnit,&sOperation,@rSetup_Lower,@rSetup_Upper,@rBegin_Time,@rEnd_Time
BLACKBOX,,1,1,BEGIN,END
&sUnit,&sOperation,@rSetup_Lower,@rSetup_Upper,@rBegin_Time,@rEnd_Time
Appendix C – SQPtestRTO2.IML File
i M P l (c)
Copyright and Property of i n d u s t r I A L g o r i t h m s LLC.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Calculation Data (Parameters)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sCalc,@sValue
START,-1.0
BEGIN,0.0
END,1.0
PERIOD,1.0
&sCalc,@sValue