Advanced Production Accounting (APA) uses statistical data reconciliation and regression to clean past production data when processes are assumed to be at steady-state. It defines a simultaneous mass and volume reconciliation problem coupled through density, depicted on an oil-refinery flowsheet using the unit-operation-port-state superstructure (UOPSS). A key difference from prior work is that UOPSS uses "ports" to represent flows, which requires fewer quality measurements. An Industrial Modeling Framework (IMF) implements the mathematical formulations using the IMPRESS modeling language and solvers such as SECQPE and SORVE for data reconciliation problems; IMFs provide pre-configured models for industrial projects.
Advanced Production Accounting of an Olefins Plant Industrial Modeling Framework (Alkis Vazacopoulos)
Presented in this short document is a description of what we call "Advanced" Production Accounting (APA) applied to a small Olefins Plant found in Sanchez and Romagnoli (1996). APA is the term given to the technique of vetting, screening or cleaning the past production data using statistical data reconciliation and regression (DRR) when continuous-processes are assumed to be at steady-state (Kelly and Hedengren, 2013), i.e., there is no significant material accumulation. For this case, the model and data define a simultaneous mass or volume linear DRR problem. Figure 1a shows the Olefins Plant using simple number indices for both the nodes and streams, while Figure 1b depicts the same problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012).
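The steady-state, linear DRR problem described above can be sketched with a toy example (a hypothetical three-flow, two-node network, not the olefins flowsheet itself): reconcile noisy flow measurements against node mass balances by the standard weighted-least-squares closed form for linear data reconciliation.

```python
import numpy as np

# Toy steady-state mass balances: node 1: f1 - f2 = 0; node 2: f2 - f3 = 0
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
m = np.array([101.0, 98.0, 100.5])   # raw flow measurements
S = np.diag([1.0, 1.0, 1.0])         # measurement variances

# Weighted least squares: minimize (x-m)' S^-1 (x-m)  subject to  A x = 0.
# Closed form: x = m - S A' (A S A')^-1 A m
x = m - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ m)
print(x)   # reconciled flows now satisfy the balances exactly
```

With equal variances the three reconciled flows collapse to their common feasible value, which is the essence of "cleaning" the raw meter readings against the conservation laws.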
Presented in this short document is a description of what we call "Phasing" and "Planuling". Phasing is a variation of the sequence-dependent changeover problem (Kelly and Zyngier, 2007; Balas et al., 2008) except that the sequencing, cycling or phasing is fixed as opposed to being variable or free. Planuling is a portmanteau of planning and scheduling where we "schedule" slow processes and "plan" fast processes together inside the same time-horizon; it can also be considered "hybrid" planning and scheduling.
Generalized Capital Investment Planning w/ Sequence-Dependent Setups Industrial Modeling Framework (Alkis Vazacopoulos)
Presented in this short document is a description of what we call the “Generalized” Capital Investment Planning (GCIP) problem, where conventional capital investment planning (CIP), and specifically the “retrofit” problem, is discussed in Sahinidis and Grossmann (1989) and Liu and Sahinidis (1996). CIP is the optimization problem of expanding the capacity and/or extending the capability (conversion) of a plant, through either the “expansion” of an existing unit or the “installation” of a new unit (Jackson and Grossmann, 2002).
Figure 1 shows the three types of CIP problems as defined in Vazacopoulos et al. (2014) and Menezes (2014) with their capital-cost and time scales.
Presented in this short document is a description of what is called Advanced Process Monitoring (APM) as described by Hedengren (2013). APM is the term given to the technique of estimating unmeasured but observable variables or "states" using statistical data reconciliation and regression (DRR) in an off-line or real-time environment and is also referred to as Moving Horizon Estimation (MHE) (Robertson et al., 1996). Essentially, the model and data define a simultaneous nonlinear and dynamic DRR problem where the model is either engineering-based (first-principles, fundamental, mechanistic, causal, rigorous) or empirical-based (correlation, statistical data-based, observational, regressed) or some combination of both (hybrid).
Advanced Process Monitoring for Startups, Shutdowns & Switchovers Industrial Modeling Framework (Alkis Vazacopoulos)
Presented in this short document is a description of what is called “Advanced” Process Monitoring as described by Hedengren (2013) but related to Startups, Shutdowns and Switchovers-to-Others (APM-SUSDSO). APM is the term given to the technique of estimating or fitting unmeasured but observable variables or "states" using statistical data reconciliation and regression (DRR) in an off-line or real-time environment. It is also referred to as Moving Horizon Estimation (MHE) (Robertson et al., 1996) in Advanced Process Control (APC), which goes beyond simply updating a bias to implement some form of measurement or parameter feedback (Kelly and Zyngier, 2008b). Essentially, the model and data define a simultaneous nonlinear and dynamic DRR problem where the model is either engineering-based (first-principles, fundamental, mechanistic, causal, rigorous) or empirical-based (correlation, statistical data-based, observational, regressed) or some combination of both (hybrid) (Pantelides and Renfro, 2012).
R2RML-F: Towards Sharing and Executing Domain Logic in R2RML Mappings (Christophe Debruyne)
Christophe Debruyne and Declan O'Sullivan: R2RML-F: Towards Sharing and Executing Domain Logic in R2RML Mappings
Paper presented at Linked Data on the Web (LDOW2016, collocated with WWW2016)
http://events.linkeddata.org/ldow2016/papers/LDOW2016_paper_14.pdf
The International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. It publishes original research that contributes significantly to scientific knowledge in engineering and technology.
Design of Real-Time Operating System Using Keil µVision IDE (iosrjce)
The IOSR Journal of VLSI and Signal Processing (IOSRJVSP) is a double-blind, peer-reviewed international journal that publishes articles contributing new results in all areas of VLSI design and signal processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI design and signal-processing concepts and to establish new collaborations in these areas. Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chip and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels.
Focusing on Advanced Point Merge System (AnnieLiang17)
Research on Advanced PMS automation for busy airports with parallel runways
An idea for PMS to accommodate the A380 at Dubai airport
An idea for a Total PMS for Beijing Daxing airport with 4 runways
REVIEW ON MODELS FOR GENERALIZED PREDICTIVE CONTROLLER (cscpconf)
Keeping vehicles on track under non-linear dynamic conditions is important for unmanned navigation because it saves fuel and journey time. An efficient controller model that incorporates non-linear dynamics is therefore required. Researchers currently use models such as AR, MA, ARMA and ARMA with exogenous input (ARMAX) to improve tracking accuracy, but drawbacks remain because of the identical disturbance random sequence and excessive control effort. Hence the ARIMA model is used, which overcomes these disadvantages. This paper discusses the design details of the ARIMA model along with comparisons to other models used for ship tracking.
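As a rough illustration of why differencing matters (the "I" in ARIMA), the sketch below, using synthetic data rather than any ship-tracking data, first-differences a trending series to remove the trend and then fits a single AR term by ordinary least squares:

```python
import numpy as np

def fit_ar1_on_differences(y):
    """ARIMA(1,1,0)-style fit: difference once, then least-squares AR(1)."""
    dy = np.diff(y)                    # first difference removes the trend
    X, target = dy[:-1], dy[1:]
    return float(X @ target / (X @ X))  # OLS slope for dy[t] = phi*dy[t-1]

# Synthetic trending series whose increments are AR(1) with phi = 0.5
rng = np.random.default_rng(0)
dy = np.zeros(500)
for t in range(1, 500):
    dy[t] = 0.5 * dy[t - 1] + rng.normal()
y = np.cumsum(dy) + 10.0               # integrate to get a non-stationary series
phi = fit_ar1_on_differences(y)
print(round(phi, 2))                   # recovered phi, close to 0.5
```

Fitting the AR term directly on the undifferenced, non-stationary series would give a misleading coefficient near 1; differencing first makes the estimation well-posed.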
CGSL, one of the largest circuit-breaker manufacturers in China, has been recognized as Your Expert for Circuit Protection. This CGSL circuit-breakers catalogue, published in 2014, includes the following products: miniature circuit breakers (including DC, PV and hydraulic-magnetic circuit breakers), earth-leakage circuit breakers, molded-case circuit breakers (including motor-protection circuit breakers) and auxiliary products (circuit-breaker panels, lockouts and finders). Check it now to find what you need.
New technological solutions and local policies for the well-being of citizens (Margot Bezzi)
The European Commission's (DG CONNECT) strategy and vision on the sustainability of health and social-care services in Europe, in the context of demographic ageing.
Our Industrial Modeling Service (IMS) involves several important (but rarely implemented) methods to significantly improve and advance your existing models and data. Since it is well-known that good decision-making requires good models and data, IMS is ideally suited to support this continuous-improvement endeavour. IMS is specifically designed to either co-exist with your existing design, planning, scheduling, etc. applications, or these same models and data can be carried seamlessly into our Industrial Modeling and Programming Language (IMPL) to create new value-added applications. The following techniques form the basis of our IMS offering.
Article by Dohmen, Van Deurssen, Van de Rijt and Van Raaij: evaluation of the Achmea pilot; ... (Jeroen Van de Rijt)
In the September issue of Deal!, Peter Dohmen, Erik van Raaij and Jeroen van de Rijt described the background and set-up of the Best Value Procurement pilot within Achmea. Achmea is the first in the world to apply the Best Value philosophy to the procurement of healthcare. The selection phase has now been completed and 9 providers have been contracted. This article presents the results of the pilot and does so, in line with the Best Value approach, as much as possible with metrics.
Unit-Operation Nonlinear Modeling for Planning and Scheduling Applications (Alkis Vazacopoulos)
The focus of this chapter is to detail the quantity and quality modeling aspects of production flowsheets found in all process industries. Production flowsheets are typically at a higher-level than process flowsheets given that in many cases more direct business or economic related decisions are being made such as maximizing profit and performance for the overall plant and/or for several integrated plants together with shared resources. These decisions are usually planning and scheduling related, often referred to as production control, which require a larger spatial and temporal scope compared to more myopic process flowsheets which detail the steady or unsteady-state material, energy and momentum balances of a particular process unit-operation over a relatively short time horizon. This implies that simpler but still representative mathematical models of the individual processes are necessary in order to solve the multi time-period nonlinear system using nonlinear optimizers such as successive linear programming (SLP) and sequential quadratic programming (SQP). In this chapter we describe six types of unit-operation models which can be used as fundamental building blocks or objects to formulate large production flowsheets. In addition, we articulate the differences between continuous and batch processes while also discussing several other important implementation issues regarding the use of these unit-operation models within a decision-making system. It is useful to also note that the quantity and quality modeling system described in this chapter complements the quantity and logic modeling used to describe production and inventory systems outlined in Zyngier and Kelly (2009).
Presented in this short document is a description of what we call "Advanced" Property Tracking or Tracing (APT). APT is the term given to the technique of predicting, simulating, calculating or estimating the properties (i.e., densities, compositions, conditions, qualities, etc.) in a network or superstructure with significant inventory using statistical data reconciliation and regression (DRR).
Presented in this short document is a description of what is called the (classic) “Pooling Optimization Problem”, first described in Haverly (1978), where he modeled a small distillate blending problem with three component materials (A, B, C), one pool for mixing or blending of only two components, two products (P1, P2) and one property (sulfur, S), as well as only one time-period. The GAMS file of this exact same problem is found in Appendix A, which describes all of the sets, lists, parameters, variables and constraints required to represent this problem. Related types of NLP sub-models can also be found in Kelly and Zyngier (2015), where they formulate other sub-types of continuous-processes such as blenders, splitters, separators, reactors, fractionators and black-boxes for ad hoc or custom sub-models.
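The structure of the Haverly problem can be made concrete with a small feasibility-and-profit check using the commonly cited instance data (component costs/sulfurs and product prices/specs as usually quoted in the literature; verify against the original paper). The bilinear difficulty lives entirely in the pool-sulfur term `q`:

```python
# Commonly cited Haverly (1978) data:
#   components: A ($6, 3% S), B ($16, 1% S), C ($10, 2% S)
#   products:   P1 ($9, <= 2.5% S, <= 100), P2 ($15, <= 1.5% S, <= 200)
# a, b feed the pool; c1, c2 are direct C flows; p1, p2 are pool draws.
def profit_and_feasible(a, b, c1, c2, p1, p2):
    pool = a + b
    q = (3*a + 1*b) / pool if pool > 0 else 0.0   # pool sulfur (bilinear term)
    ok = (abs(pool - (p1 + p2)) < 1e-9                 # pool material balance
          and q*p1 + 2*c1 <= 2.5*(p1 + c1) + 1e-9     # P1 sulfur spec
          and q*p2 + 2*c2 <= 1.5*(p2 + c2) + 1e-9     # P2 sulfur spec
          and p1 + c1 <= 100 and p2 + c2 <= 200)      # demand limits
    profit = 9*(p1 + c1) + 15*(p2 + c2) - 6*a - 16*b - 10*(c1 + c2)
    return profit, ok

# Known best solution for this instance: pool pure B, blend 100 pool + 100 C into P2
print(profit_and_feasible(0, 100, 0, 100, 0, 100))   # (400.0, True)
```

Checking a candidate point like this is not optimization, but it shows why the problem is nonconvex: the spec constraints multiply the decision variable `q` by the flows, which is exactly what local NLP solvers can get trapped on.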
Advanced Parameter Estimation (APE) for Motor Gasoline Blending (MGB) Industrial Modeling Framework (Alkis Vazacopoulos)
Presented in this short document is a description of how to model and solve advanced parameter estimation (APE) problems in IMPL. APE is the term given to the application of estimating, fitting or calibrating parameters in models involving a network, topology, superstructure or flowsheet. When estimating parameters with multiple linear regression (MLR), ordinary least squares (OLS), ridge regression (RR), principal component regression (PCR) and partial least squares (PLS) there is no explicit model but simply an X-block and Y-block of data. Hence, these methods are referred to as “non-parametric” or “data-based” methods as opposed to the “parametric” or “model-based” method used here. To solve these types of problems we use what is commonly referred to as “error-in-variables” (EIV) regression which is conveniently implemented as nonlinear data reconciliation and regression (NDRR) using the technology found in Kelly (1998a; 1998b; 1999) and Kelly and Zyngier (2008a). The primary benefit of using EIV (NDRR) over the other regression methods is that we can easily handle the inclusion of conservation laws and constitutive relations, explicitly, a must for any industrial estimation problem (IEP).
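To give a feel for the error-in-variables idea with a deliberately simple stand-in (total least squares on a single slope via the SVD, not IMPL's NDRR formulation), note how ordinary least squares attenuates the slope when the input itself is noisy, while a fit that treats errors in both variables symmetrically does not:

```python
import numpy as np

# EIV toy: fit y = a*x when BOTH x and y carry measurement error.
rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 10.0, 200)
x = x_true + rng.normal(scale=2.0, size=x_true.size)   # noisy input
y = 2.0 * x_true + rng.normal(scale=2.0, size=x_true.size)

# OLS (through the origin) is biased toward zero by the input noise
a_ols = float(x @ y / (x @ x))

# Total least squares: the right singular vector of [x y] for the smallest
# singular value defines the direction [v0, v1] with v0*x + v1*y ~ 0
_, _, vt = np.linalg.svd(np.column_stack([x, y]))
a_tls = -vt[-1, 0] / vt[-1, 1]
print(round(a_ols, 2), round(a_tls, 2))   # OLS attenuated; TLS near 2.0
```

The document's EIV (NDRR) approach goes further than this sketch by also honoring conservation laws and constitutive relations as explicit constraints, which plain regression methods cannot do.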
Presented in this short document is a description of modeling and solving partial differential equations (PDEs) in both the temporal and spatial dimensions using IMPL. The sample PDE problem is taken from Cutlip and Shacham (1999 and 2014) and models the process of unsteady-state heat transfer or conduction in a one-dimensional (1D) slab with one face insulated and constant thermal conductivity, as discussed by Geankoplis (1993).
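A minimal explicit finite-difference sketch of the same physical situation (a 1-D slab with one face held at a fixed temperature and the other insulated) is shown below; the numbers are illustrative, not the worked values from Geankoplis:

```python
import numpy as np

# Unsteady 1-D conduction: T_t = alpha * T_xx, left face at 100, right face insulated.
alpha, L, n = 1e-5, 0.1, 21            # diffusivity [m^2/s], thickness [m], grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha             # explicit-scheme stability: dt <= 0.5*dx^2/alpha
T = np.zeros(n)
T[0] = 100.0                           # fixed-temperature face
for _ in range(2000):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = 100.0                      # hold the boundary temperature
    Tn[-1] = Tn[-2]                    # insulated face: zero gradient, dT/dx = 0
    T = Tn
print(T.round(1))                      # profile relaxing toward a uniform 100
```

Discretizing the spatial derivative this way (the method of lines) turns the PDE into a set of coupled ODEs in time, which is essentially how such problems are posed for an algebraic modeling system.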
The IML file is our user-readable import or input file to the IMPL modeling and solving platform. IMPL is an acronym for Industrial Modeling and Programming Language, provided by Industrial Algorithms LLC. The IML file allows the user to configure the necessary data to model and solve large-scale and complex industrial optimization problems (IOPs) such as planning, scheduling, control and data reconciliation and regression in either off- or on-line environments.
The data configurable in the IML file are broken down into several categories or classes, and these data categories are used as further sections in this basic reference manual. This reference manual is specific only to the quantity dimension of what we refer to as the Quantity-Logic-Quality Phenomena (QLQP). The QLQP provides a useful phenomenological break-down of the problem complexity: the quantity dimension details quantities such as flows, rates, holdups and yields, where the quantities can be related to any stock or signal including time. The other two dimensions are not the focus of this documentation but, for completeness of the description, logic data have setups, startups, switchovers-to-itself, shutdowns and switchovers-to-others (sequence-dependent transitions), and quality data have densities, components, properties and conditions.

In addition to the QLQP, we also have what we call the Unit-Operation-Port-State Superstructure (UOPSS). This provides the flowsheet or topology of the IOP in terms of the various shapes, constructs or objects necessary to configure it. The UOPSS is more than a single network given that it is comprised of two networks we call the "physical" network and the "procedural" network. The physical network involves the units and ports (equipment, structural) and the procedural network involves the operations and states (activities, functional). The combination or cross-product of the two derives the "projectional" superstructure, and it is to these superstructure constructs or UOPSS keys that we apply, attach or associate specific QLQP attributes, where projections are also known as hypothetical, logical or virtual constructs.

Ultimately, when we augment the superstructure with the time or temporal dimension as well as including multiple sites or echelons, i.e., sub-superstructures, we essentially are configuring what is known as a "hyperstructure".
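The cross-product idea behind the projectional superstructure can be rendered as a small, hypothetical data structure (the names `Unit`, `Operation` and `projection_keys` are illustrative only, not IMPL's actual schema): units carry ports, operations carry states, and their pairing produces the keys to which QLQP attributes attach.

```python
from dataclasses import dataclass

# Hypothetical, minimal rendering of the UOPSS idea: a physical network of
# units with in/out ports and a procedural network of operations with
# in/out states; pairing them yields "projectional" (unit, operation,
# port, state) keys.
@dataclass(frozen=True)
class Unit:
    name: str
    in_ports: tuple = ()
    out_ports: tuple = ()

@dataclass(frozen=True)
class Operation:
    name: str
    in_states: tuple = ()
    out_states: tuple = ()

def projection_keys(unit, operation):
    """All (unit, operation, port, state) keys for one unit-operation pairing."""
    keys = [(unit.name, operation.name, p, s)
            for p in unit.in_ports for s in operation.in_states]
    keys += [(unit.name, operation.name, p, s)
             for p in unit.out_ports for s in operation.out_states]
    return keys

tank = Unit("tank1", in_ports=("i1",), out_ports=("o1",))
store = Operation("store", in_states=("crude",), out_states=("crude",))
print(projection_keys(tank, store))
```

Attributes such as flow bounds (quantity), setup logic or property values would then be dictionaries indexed by these keys, which is what makes the superstructure "projectional" rather than a single flat network.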
Time Series Estimation of Gas Furnace Data in IMPL and CPLEX Industrial Modeling Framework (Alkis Vazacopoulos)
Presented in this short document is a description of how to estimate deterministic and stochastic time-series transfer-function models in IMPL using IBM’s CPLEX, applied to industrial gas furnace data. The methodology of time-series analysis involves essentially three (3) stages (Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and (3) model checking and diagnostics. We do not address (1), which requires stationarity and seasonality assessment, auto-, cross- and partial-correlation, etc. to establish the transfer-function polynomial degrees. Instead we focus only on the parameter estimation and diagnostics. These types of parameter estimation problems involve dynamic and nonlinear relationships shown below, and we solve these using IMPL’s nonlinear programming algorithm SLPQPE, which uses CPLEX 12.6 as the QP sub-solver.
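The parameter-estimation stage can be illustrated with a deliberately simple stand-in (a first-order ARX model fitted by linear least squares on synthetic data, not the gas-furnace series and not IMPL's SLPQPE formulation):

```python
import numpy as np

# Least-squares fit of y[t] = a*y[t-1] + b*u[t-1]: the lagged output makes
# this a (low-order) transfer-function-style model rather than a pure FIR.
rng = np.random.default_rng(2)
n = 400
u = rng.normal(size=n)                     # input (e.g., a fuel-rate signal)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t-1] + 0.5 * u[t-1] + 0.05 * rng.normal()

X = np.column_stack([y[:-1], u[:-1]])      # lagged regressors
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(round(a_hat, 2), round(b_hat, 2))    # recovered (a, b), near (0.7, 0.5)
```

Once moving-average (noise) polynomials enter the model, the residuals depend nonlinearly on the parameters, which is why the document's full problem needs a nonlinear solver rather than one linear least-squares solve.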
Presented in this short document is a description of what we call "Partitioning" and "Positioning". Partitioning is the notion of decomposing the problem into smaller sub-problems along its “hierarchical” (Kelly and Zyngier, 2008), “structural” (Kelly and Mann, 2004), “operational” (Kelly, 2006), “temporal” (Kelly, 2002) and now “phenomenological” (Kelly, 2003, Kelly and Mann, 2003, Kelly and Zyngier, 2014 and Menezes, 2014) dimensions. Positioning is the ability to configure the lower and upper hard bounds and target soft bounds for any time-period over the future time-horizon within the problem or sub-problem and is especially useful to fix variables (i.e., its lower and upper bounds are set equal) which will ultimately remove or exclude these variables from the solver’s model or matrix.
Presented in this short document is a description of what is called a “Pipeline Scheduling Optimization Problem” and was first described in Rejowski and Pinto (2003) where they modeled the first-in-first-out (FIFO) and multi-product nature of the segregated pipeline using both discretized space (multi-batches, packs or pipes) and time (multi-intervals, slots or periods). The same MILP model can also be found in Zyngier and Kelly (2009) along with other related production/process objects.
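The first-in-first-out behaviour of the discretized segregated pipeline can be sketched with a tiny toy model (a queue of equal "packs", which is an illustration of the FIFO idea only, not the MILP formulation itself):

```python
from collections import deque

# Space is discretized into equal packs; each injection at the inlet pushes
# the oldest pack out at the outlet: strict first-in-first-out.
def pump(pipeline, product):
    pipeline.appendleft(product)       # inject one pack at the inlet (left end)
    return pipeline.pop()              # deliver the oldest pack at the outlet

# Leftmost = inlet (newest packs), rightmost = outlet (oldest packs)
line = deque(["diesel", "diesel", "gasoline", "gasoline"])
delivered = [pump(line, "jet") for _ in range(3)]
print(delivered)                       # ['gasoline', 'gasoline', 'diesel']
```

In the MILP version, the same displacement logic is written as balance constraints linking pack positions across discrete time slots, with binary variables tracking which product occupies each pack.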
Finite Impulse Response Estimation of Gas Furnace Data in IMPL Industrial Modeling Framework (Alkis Vazacopoulos)
Presented in this short document is a description of how to estimate deterministic and stochastic non-parametric finite impulse response (FIR) models in IMPL applied to industrial gas furnace data identical to that found in TSE-GFD-IMF using parametric transfer-functions. The methodology of time-series analysis or system identification involves essentially three (3) stages (Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and (3) model checking and diagnostics. We do not address (1), which requires stationarity and seasonality assessment/adjustment, auto-, cross- and partial-correlation, etc. to establish the parametric transfer-function polynomial degrees, especially since we are using non-parametric FIR estimation. Instead we focus only on the parameter estimation and diagnostics. These types of parameter estimation problems involve dynamic and nonlinear relationships shown below, and we solve these using IMPL’s Sequential Equality-Constrained QP Engine (SECQPE) and Supplemental Observability, Redundancy and Variability Estimator (SORVE). Another type of non-parametric identification, known as Subspace Identification (Qin, 2006), can be used to estimate state-space models.
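A bare-bones illustration of non-parametric FIR estimation (synthetic data, plain least squares; not the gas-furnace data or the SECQPE/SORVE machinery): unlike the transfer-function case, the regressors are lagged inputs only, so the impulse-response coefficients come out of one linear solve.

```python
import numpy as np

# FIR model: y[t] = sum_k h[k]*u[t-k], estimated by least squares on a
# matrix of lagged inputs (no output lags, unlike an ARX/transfer function).
rng = np.random.default_rng(3)
n, nlags = 500, 5
h_true = np.array([0.0, 0.8, 0.4, 0.2, 0.1])     # true impulse response
u = rng.normal(size=n)
y = np.convolve(u, h_true)[:n] + 0.05 * rng.normal(size=n)

# Column k holds u[t-k]; dropping the first nlags rows avoids np.roll wrap-around
X = np.column_stack([np.roll(u, k) for k in range(nlags)])[nlags:]
h_hat = np.linalg.lstsq(X, y[nlags:], rcond=None)[0]
print(h_hat.round(2))                             # recovered impulse response
```

The price of this convenience is the non-parsimony the text warns about: a long FIR needs many coefficients where a low-order transfer function needs only a few, which inflates variance on noisy data.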
Quick Development and Deployment of Industrial Applications using Excel/VBA, IMPL and CPLEX (Alkis Vazacopoulos)
Presented in this document is a description of how to develop and deploy industrial applications in a timely fashion using Excel/VBA as the user-interface (UI) and systems-integration (SI) system, IMPL as the industrial modeller and CPLEX as the commercial solver. A small jobshop scheduling example is overviewed to help describe to some extent, the details of this advanced decision-making application where this type of problem can be found in both the manufacturing and process industries.
The purpose of developing and deploying quickly is to acquire feedback from the end-users, to assess the difficulty and tractability of the problem, to ascertain the expected costs and benefits of the application and to address any other issues and requirements regarding the project as a whole as soon as possible. For some projects, proof-of-concepts, prototypes and/or pilots are also useful and these should also be performed ASAP as well using the same approach highlighted here. Ultimately, once a business problem solution has been achieved and full or partial benefits have been captured, then a more robust and sophisticated end-user experience and system architecture can be implemented in the operating system and computer programming environment of choice which will hopefully enhance and maintain the solution over its expected life-cycle.
We tested ODH|CPLEX 4.24 on the MIPLIB Open-v7 models, a public collection of 286 models for which an optimal solution has not been proven. 257 of these are known to have a feasible solution.
ODH|CPLEX proved optimality on 6 models and, within 2 hours, found better solutions for 40% of the models with 12 threads and 35% with 8 threads. ODH|CPLEX matched the best known solution on 21% of the models.
CPLEX Optimization Studio solves large-scale optimization problems and enables better business decisions and resulting financial benefits in areas such as supply chain management, operations, healthcare, retail, transportation, logistics and asset management. It has been applied in sectors as diverse as manufacturing, processing, distribution, retailing, transport, finance and investment. CPLEX Optimization Studio is an analytical decision-support toolkit for rapid development and deployment of optimization models using mathematical and constraint programming. It combines an integrated development environment (IDE) with the powerful Optimization Programming Language (OPL) and high-performance ILOG CPLEX optimizer solvers. CPLEX Optimization Studio enables clients to: optimize business decisions with high-performance optimization engines; develop and deploy optimization models quickly by using flexible interfaces and prebuilt deployment scenarios; and create real-world applications that can significantly improve business outcomes. Optimization Direct has partnered with and entered into a technology licensing and distribution agreement with IBM. Combining the founders' industry and software experience with IBM's CPLEX Optimization Studio product and IBM's arsenal of optimization modeling and solving tools provides customers the most powerful capabilities in the industry.
Missing-Value Handling in Dynamic Model Estimation using IMPL (Alkis Vazacopoulos)
Presented in this short document is a description of how IMPL handles missing-values or missing-data when estimating dynamic models, which inherently involve time-lagged or time-shifted input and output variables. Missing-values in a data set imply that for some reason the data are not available, most likely due to a malfunctioning instrument or even a lack of proper accounting. Missing-data handling is relatively well-studied, especially for time-series or dynamic data, given that it is not as easy as removing, ignoring or deleting bad sections of data as it is when static or steady-state models are calibrated (Honaker and King, 2010; Smits and Baggelaar, 2010; Fisher and Waclawski, 2015). Unfortunately, all of their methods involve what is known as "imputation", i.e., replacing or substituting missing-data with some reasonably assumed value, which is at the very least a biased estimate. When regression techniques such as PLS and PCR are used (Nelson et al., 2006), missing-data can be handled without imputation by computing the input-output covariance matrices excluding the contribution from the missing-values, given the temporal and structural redundancy in the system. However, Dayal (1996) showed that using PLS and other regression techniques such as Canonical Correlation Regression (CCR) and Reduced Rank Regression (RRR) to fit non-parsimonious and non-parametric finite impulse/step response models (FIR/FSR) is not as reliable as fitting lower-ordered transfer functions, especially considering the robust stability of the resulting model predictive controller if that is its intended use.
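One simple, imputation-free tactic for lagged models can be sketched as follows (a generic illustration with synthetic data, not IMPL's actual mechanism): build the lagged regressor matrix first, then drop every row that touches a missing value, so one bad sample knocks out each lagged row it appears in but no assumed value is ever substituted.

```python
import numpy as np

# Fit y[t] = a*y[t-1] + b*u[t-1] while skipping (not imputing) NaN rows.
def lagged_fit_skip_nan(y, u):
    X = np.column_stack([y[:-1], u[:-1]])   # lagged regressors
    t = y[1:]                               # targets
    keep = ~(np.isnan(X).any(axis=1) | np.isnan(t))
    return np.linalg.lstsq(X[keep], t[keep], rcond=None)[0]

rng = np.random.default_rng(4)
n = 300
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.6 * y[k-1] + 0.4 * u[k-1] + 0.05 * rng.normal()
y[100] = np.nan                             # one lost measurement
a_hat, b_hat = lagged_fit_skip_nan(y, u)
print(round(a_hat, 2), round(b_hat, 2))     # still near (0.6, 0.4)
```

Note that the single NaN removes two rows (once as a target, once as a lagged regressor), which is the multiplying cost of missing data in dynamic estimation that the document alludes to.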
This short note describes a relatively simple methodology, procedure or approach to increase the performance of already installed industrial models used for optimization, control, simulation and/or monitoring purposes. The method is called Excess or X-Model Regression (XMR) where the concept of “excess modeling” or an X-model is taken from the field of thermodynamics to describe the departure or residual behaviour of real (non-ideal) gases and liquids from their ideal state (Kyle, 1999; Poling et. al., 2001; Smith et. al., 2001). It has also been applied to model the non-ideal or nonlinear behaviour of blending motor gasoline octanes with its synergistic and antagonistic interactional effects (Muller, 1992).
The fundamental idea of XMR is to calibrate, train, fit or estimate, using actual data and multiple linear regression (MLR) or ordinary least squares (OLS), the deviations of the measured responses from the existing model responses. The existing model may be a glass, grey or black-box model (known or unknown, linear or nonlinear, implicit/open or explicit/closed) depending on the use of the model. That is, for optimization and control the model structure and parameters are available given that derivative information is required although for simulation and monitoring, the model may only be observed through the dependent output variables given the necessary independent input variables.
Presented in this short document is a description of how to model and solve multi-utility scheduling optimization (MUSO) problems in IMPL. Multi-utility systems (co/tri-generation) are typically found in petroleum refineries and petrochemical plants (multi-commodity systems) especially when fuel-gas (i.e., off-gases of methane and ethane) is a co- or by-product of the production from which multi-pressure heating-, motive- and process-steam are generated on-site. Other utilities include hydrogen, electricity, water, cooling media, air, nitrogen, chemicals, etc. where a multi-utility system is shown in Figure 1 with an intermediate or integrated utility (both produced and consumed) such as fuel-gas, steam or electricity. Itemized benefit areas just for better management of an integrated steam network can be found in Pelham (2013) where his sample multi-pressure steam utility flowsheet is found in Figure 2.
Presented in this short document is a description of what is well-known as Advanced Process Control (APC) applied to a small linear three (3) manipulated variable (MV) by two (2) controlled variable (CV) problem. These problems are also known as Model Predictive Control (MPC) (Grimm et. al., 1989) and Moving Horizon Control (MHC). Figure 1 shows the 3 x 2 APC problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012) as an Advanced Planning and Scheduling (APS) problem as opposed to a traditional APC problem.
Although there is a tremendous amount of stability, performance and robustness theory associated with APC which can be directly assumed to APS problems (Mastragostino et. al., 2014), our approach is to show that APC can equally be set into an APS framework except that APS has far less sensitivity technology due to its inherent discrete and nonlinear modeling complexities i.e., especially non-convexities. In order to eliminate the steady-state offset between the actual value and its target, it is well-known to apply bias-updating though other forms of “parameter-feedback” is possible. Typically, APS applications only employ “variable-feedback” i.e., opening or initial inventories, properties, etc. but this alone will not alleviate the steady-state offset as demonstrated by Kelly and Zyngier (2008).
Presented in this short document is a description of our three separate techniques to analyze the data by checking, clustering and componentizing it before it is used by other IMPL’s routines especially in on-line/real-time decision-making applications. We also have other data consistency or analysis techniques which have been described in other IMPL documents and these relate to the application of data reconciliation and regression with diagnostics but require an explicit model (model-based) whereas the techniques below do not i.e., they are data-based techniques.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Advanced Production Accounting
Industrial Modeling Framework (APA-IMF)
Industrial Algorithms LLC. (IAL)
www.industrialgorithms.com
July 2013
Introduction to Advanced Production Accounting, UOPSS and QLQP
Presented in this short document is a description of what we call "Advanced" Production
Accounting (APA). APA is the term given to the technique of vetting, screening or cleaning the
past production data using statistical data reconciliation and regression (DRR) when
continuous-processes are assumed to be at steady-state (Kelly and Hedengren, 2013) i.e.,
there is no significant material accumulation. Essentially, the model and data define a
simultaneous mass and volume DRR problem with density. Figure 1 depicts a relatively small
production accounting flowsheet problem configured in our unit-operation-port-state
superstructure (UOPSS) (Kelly, 2004a, 2005, and Zyngier and Kelly, 2012).
Figure 1. Oil-Refinery Production Accounting Flowsheet (Kelly and Mann, 2005).
The diamond shapes or objects are the sources and sinks known as perimeters, the triangle
shapes are the pools or tanks and the rectangle shapes with the cross-hairs are continuous-process
units and, as mentioned, these units should have a steady-state detection algorithm
(SSD) installed to determine if the units are steady or stationary. The circle shapes with no
cross-hairs are in-ports which can accept one or more inlet flows and are considered to be
simple or uncontrolled mixers. The cross-haired circles are out-ports which can allow one or
more outlet flows and are considered to be simple or uncontrolled splitters. The lines, arcs or
edges in between the various shapes are known as internal and external streams and represent
in this context the flows of materials from one shape to another. This example and its data are
taken directly from Kelly and Mann (2005) but is mapped to our UOPSS modeling framework
which includes only one time-period typically defined for one business or calendar day. A
related technique using multiple time-periods can be found in Kelly et. al. (2005) to trace or track
production qualities throughout any process network and is useful for real-time or on-line
monitoring applications as it involves dynamic DRR.
In this example, we have a crude-oil distillation unit (CRD), vacuum distillation unit (VAC),
fluidized catalytic cracking unit (FCC) and a catalytic reformer (REF) as well as twenty-four (24)
tanks for crude-oil, intermediate and final product storage. The continuous-process units only
conserve mass whereas all of the tanks conserve both mass and volume using density as the
conversion from volume to mass. There are five (5) perimeter units which represent pipeline
deliveries and liftings as well as a fuel gas burner export to a cogeneration utilities plant. For
this data set, there is no finished product blending that occurred over this production accounting
time-period and hence no blending header units are shown.
The key difference between the modeling found in Kelly and Mann (2005) and our formulation is
that we use the concept of "ports" which allows for a less ambiguous and more parsimonious
representation of the quantity, logic and quality phenomenological (QLQP) data. For instance,
on the CRD at out-port JVAC there are two flows out (quantity) simultaneously which requires
only one density (quality) measurement given that JVAC is an implied splitter. However, in the
Kelly and Mann (2005) formulation which does not employ the concept of ports, they require two
density measurements i.e., one for each stream out which requires more pre-processing of the
data to manage the density of each individual stream. The efficiency of UOPSS and QLQP is
that only one density measurement at JVAC needs to be configured and through the topology of
the superstructure, the necessary propagation of the out-port qualities is properly and
automatically handled.
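The splitter behaviour of an out-port described above can be sketched as a small Python structure. This is an illustration only, not IMPRESS or IML; the CRD unit and its out-port JVAC come from the text, but the destination unit names and the density value are hypothetical:

```python
# Minimal sketch of the UOPSS port idea: an out-port is an implied splitter,
# so one quality (density) measurement at the port covers every outlet stream.
# Destination units (VAC feed and UNITX) and the density value are illustrative.

# External streams connect a (unit, out-port) to a (unit, in-port).
streams = [
    (("CRD", "JVAC"), ("VAC", "FEED")),     # first flow out of out-port JVAC
    (("CRD", "JVAC"), ("UNITX", "FEED")),   # second flow out (hypothetical sink)
]

port_density = {("CRD", "JVAC"): 0.95}      # one measured density at the port

# Propagate the single port quality to every outlet stream via the topology.
stream_density = {dst: port_density[src] for src, dst in streams}

print(len(stream_density))  # two outlet flows share the one measurement
```

In a stream-based formulation without ports, each of the two outlet streams would carry its own density measurement; here the topology alone propagates the single port quality.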
Industrial Modeling Framework (IMF), IMPRESS and SIIMPLE
To implement the mathematical formulation of this and other systems, IAL offers a unique
approach and is incorporated into our Industrial Modeling and Pre-Solving System we call
IMPRESS. IMPRESS has its own modeling language called IML (short for Industrial Modeling
Language) which is a flat or text-file interface as well as a set of API's which can be called from
any computer programming language such as C, C++, Fortran, Java (SWIG), C# or Python
(CTYPES) called IPL (short for Industrial Programming Language) to both build the model and
to view the solution. Models can be a mix of linear, mixed-integer and nonlinear variables and
constraints and are solved using a combination of LP, QP, MILP and NLP solvers such as
COINMP, GLPK, LPSOLVE, SCIP, CPLEX, GUROBI, LINDO, XPRESS, CONOPT, IPOPT and
KNITRO as well as our own implementation of SLP called SLPQPE (Successive Linear &
Quadratic Programming Engine) which is a very competitive alternative to the other nonlinear
solvers and embeds all available LP and QP solvers.
In addition and specific to DRR problems, we also have a special solver called SECQPE
standing for Sequential Equality-Constrained QP Engine which computes the least-squares
solution and a post-solver called SORVE standing for Supplemental Observability, Redundancy
and Variability Estimator to estimate the usual DRR statistics found in Kelly (1998 and 2004b)
and Kelly and Zyngier (2008a). SECQPE also includes a Levenberg-Marquardt regularization
method for nonlinear data regression problems and can be presolved using SLPQPE i.e.,
SLPQPE warm-starts SECQPE. SORVE is run after the SECQPE solver and also computes
the well-known "maximum-power" gross-error statistics to help locate outliers, defects and/or
faults i.e., mal-functions in the measurement system and mis-specifications in the logging
system.
The underlying system architecture of IMPRESS is called SIIMPLE (we hope literally) which is
short for Server, Interacter (IPL), Interfacer (IML), Modeler, Presolver Libraries and Executable.
The Server, Presolver and Executable are primarily model or problem-independent whereas the
Interacter, Interfacer and Modeler are typically domain-specific i.e., model or problem-
dependent. Fortunately, for most industrial planning, scheduling, optimization, control and
monitoring problems found in the process industries, IMPRESS's standard Interacter, Interfacer
and Modeler are well-suited and comprehensive to model the most difficult of production and
process complexities allowing for the formulations of straightforward coefficient equations,
ubiquitous conservation laws, rigorous constitutive relations, empirical correlative expressions
and other necessary side constraints.
User, custom, adhoc or external constraints can be augmented or appended to IMPRESS when
necessary in several ways. For MILP or logistics problems we offer user-defined constraints
configurable from the IML file or the IPL code where the variables and constraints are
referenced using unit-operation-port-state names and the quantity-logic variable types. It is also
possible to import a foreign LP file (row-based MPS file) which can be generated by any
algebraic modeling language or matrix generator. This file is read just prior to generating the
matrix and before exporting to the LP, QP or MILP solver. For NLP or quality problems we offer
user-defined formula configuration in the IML file and single-value and multi-value function
blocks writable in C, C++ or Fortran. The nonlinear formulas may include intrinsic functions
such as EXP, LN, LOG, SIN, COS, TAN, MIN, MAX, IF, NOT, EQ, NE, LE, LT, GE, GT and KIP,
LIP, SIP (constant, linear and monotonic spline interpolation) as well as user-written extrinsic
functions.
Industrial modeling frameworks or IMF's are intended to provide a jump-start to an industrial
project implementation i.e., a pre-project if you will, whereby pre-configured IML files and/or IPL
code are available specific to your problem at hand. The IML files and/or IPL code can be
easily enhanced, extended, customized, modified, etc. to meet the diverse needs of your project
and as it evolves over time and use. IMF's also provide graphical user interface prototypes for
drawing the flowsheet as in Figure 1 and typical Gantt charts and trend plots to view the solution
of quantity, logic and quality time-profiles. Current developments use Python 2.3 and 2.7
integrated with open-source Dia and Matplotlib modules respectively but other prototypes
embedded within Microsoft Excel/VBA for example can be created in a straightforward manner.
However, the primary purpose of the IMF's is to provide a timely, cost-effective, manageable
and maintainable deployment of IMPRESS to formulate and optimize complex industrial
manufacturing systems in either off-line or on-line environments. Using IMPRESS alone would
be somewhat similar to (though not as onerous as) learning the syntax and semantics of an AML as well as
having to code all of the necessary mathematical representations of the problem including the
details of digitizing your data into time-points and periods, demarcating past, present and future
time-horizons, defining sets, index-sets, compound-sets to traverse the network or topology,
calculating independent and dependent parameters to be used as coefficients and bounds and
finally creating all of the necessary variables and constraints to model the complex details of
logistics and quality industrial optimization problems. Instead, IMF's and IMPRESS provide, in
our opinion, a more elegant and structured approach to industrial modeling and solving so that
you can capture the benefits of advanced decision-making faster, better and cheaper.
"Advanced" Production Accounting Synopsis
At this point we explore further the purpose of "advanced" production accounting in terms of its
diagnostic capability of aiding in the detection, identification and elimination of "bad" production
data where "bad" really implies inconsistent data. The major advantage of DRR is its ability to
use redundant data which is sometimes referred to as over-determined or over-specified
problems. The redundancy primarily occurs because of the inclusion of a model i.e., equations
or equality constraints relating flow, holdup and density variables together as in laws of
conservation of matter, energy and momentum. Some of these variables are measured or
reconciled, some are unmeasured or regressed while others are fixed or rigid. Measured
variables have a raw and known (finite) variance, unmeasured variables have a large and
unknown (infinite) variance and fixed variables have no or zero variance. The DRR objective
function is to minimize the weighted sum of squares of the raw measurements minus their
reconciled estimates where the weights are simply determined as the inverse of the raw variances
(Kelly, 1998). At a converged DRR solution using SECQPE we have estimates of the
reconciled and unmeasured or regressed variables and after running SORVE we have new
variance estimates for the reconciled and unmeasured or regressed variables as well as
redundancy and observability estimates for each measured and unmeasured variable
respectively. Furthermore, using these variances we can compute individual gross-error
detection statistics for the measured variables and equality constraints as well as confidence
intervals for each unmeasured variable using the Student-t tables to determine statistical
threshold or critical values. In addition, we can also compute a global or overall Hotelling
statistic on the objective function value to detect if at least one gross-error exists.
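The DRR objective described above can be illustrated on a toy problem. The sketch below is not SECQPE; it is a minimal equality-constrained weighted least-squares reconciliation of three hypothetical flow measurements around a single node, solved directly via the KKT system:

```python
import numpy as np

# Toy data reconciliation (illustrative only): reconcile three flow
# measurements subject to the node balance f1 - f2 - f3 = 0, minimizing the
# variance-weighted sum of squared adjustments to the raw measurements.
y = np.array([100.0, 60.0, 45.0])   # raw measurements (note: 100 != 60 + 45)
var = np.array([4.0, 1.0, 1.0])     # raw measurement variances
W = np.diag(1.0 / var)              # weights = inverse of the raw variances
A = np.array([[1.0, -1.0, -1.0]])   # mass balance equality constraint

# KKT system for: min (x - y)' W (x - y)  subject to  A x = 0
n, m = len(y), A.shape[0]
K = np.block([[2.0 * W, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([2.0 * W @ y, np.zeros(m)])
x = np.linalg.solve(K, rhs)[:n]     # reconciled estimates

print(np.round(x, 3))               # balance now closes: A @ x ~ 0
```

The noisier measurement (largest variance, smallest weight) absorbs most of the adjustment, which is exactly the behaviour the inverse-variance weighting is meant to produce.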
If we apply these techniques to the data set found in Kelly and Mann (2005) where the
flowsheet has been slightly modified to transform it into UOPSS, and there are no injected
gross-errors into the system, we arrive at an objective function of 34.87 with a Hotelling critical
value of 43.2 indicating that there are no detectable gross-errors. However, if we add a
significant bias, drift or offset to the density of pool T300 storing LPG of 0.05 i.e., the density
changes from 0.600 to 0.650, the objective function inflates to 334.64 where the Hotelling
statistic does not change. This indicates that at least one of the measurements is in gross-error
and/or there is a leak or unexpected flow in or out of one of the nodes. Using the individual
maximum-power measurement statistics we have three significant ones for the densities on the
"FCC,lpg" and "REF,lpg" out-ports as well as on "T300,LPG" of 17.315, 17.353 and 17.313
respectively which are very similar to those found in Table 3 of Kelly and Mann (2005).
Although it does not pinpoint "T300,LPG" exactly as the location of the gross-error it is able to
isolate the area, section or region of the flowsheet accurately to where the possible outlier may
exist which is very useful for large flowsheets. An interesting property or artifact of the
maximum-power measurement statistics is that if the measurement is deleted or removed i.e., is
made unmeasured, then the reduction in the weighted least-squares objective function will
equal the square of the maximum-power statistic. For example, 17.313^2 = 299.74 and when
we subtract this amount from 334.64 we get 334.64 - 299.74 = 34.90 which is very close to our
original objective function with no detectable gross-error of 34.87. Note that the reason it is
called the maximum-power statistic is due to the fact that if there is only one gross-error in the
system then this statistic will have the maximum-power or "maximum-probability" to detect that it
is a true outlier.
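The arithmetic behind the objective-function reduction quoted above checks out directly:

```python
# Reproducing the numbers from the text: deleting the measurement whose
# maximum-power statistic is 17.313 should reduce the weighted least-squares
# objective by the square of that statistic.
objective_with_bias = 334.64
max_power_stat = 17.313

reduction = max_power_stat ** 2                     # approximately 299.74
objective_after_delete = objective_with_bias - reduction

print(round(objective_after_delete, 2))  # ~34.90, close to the original 34.87
```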
More generally, there are essentially two types of what-if scenarios used in APA to ultimately
"close" a production accounting period data set to within statistical control limits i.e., declaring
the production accounting period to be in statistical production control. The first is the one
mentioned above whereby a measured/reconciled variable is determined to be in gross-error by
switching it to an unmeasured/regressed variable and checking to see if the objective function
and other measurement and constraint statistics are below their statistical critical values. The
second is making a fixed or rigid variable into an unmeasured or regressed variable. In most
industrial plants found in the process industries (especially in pipe-less plants) there is flexibility
in how materials or resources can be routed, connected or streamed from one piece of
equipment to another (Kelly, 2000). The logging or recording of these movements can also be
erroneous even to the point where they are not logged at all. If the system knows of all of the
possible routes, lineups or external streams (out-port to in-port) then it is prudent to change a
suspect route from being fixed or rigid i.e., not open, active or setup with a tolerance or variance
of zero (0), to being unmeasured or regressed with an unknown value and a variance of infinity.
If a scenario with one of the routes changed from fixed to unmeasured results in a significant
reduction in the objective function, then this is potentially a mis-logged or mis-specified
connection and should be investigated further (Kelly, 1999).
In conclusion, the primary benefit of APA is to statistically scrutinize the production accounting
data on a regular and timely basis to quickly and accurately highlight anomalies in the flowsheet
where possible defects exist. When gross-errors are detected and identified it is then prudent to
eliminate these faults by re-calibrating instruments, improving the logging or recording of
manually entered transactional data such as temporary stream flows, updating or refreshing
auxiliary data sources more frequently, etc. (Kelly, 2000). If for example advanced planning and
scheduling (APS) decisions are made using bad or poor quality production data then of course
these decisions are unfortunately suspect and can significantly and negatively impact the
performance and profitability of your production-chain (Kelly and Zyngier, 2008b).
Finally, Appendix A and B show the APA-IMF.UPS and APA-IMF.IML files used to configure
both the model and the data of the APA problem. The UPS file contains the UOPSS constructs
or shapes and the IML file contains all of the static and dynamic QLQP capacity data referenced
by the UOPSS constructs. The UPS file can be automatically created using the open-source
drawing software called GNOME Dia and using the Python 2.3 programming language to
access Dia's object model to retrieve the UOPSS sheet shapes. The IML file is a simple text file
with several categories or classifications of both the model (master, static) data and the cycle
(transactional, dynamic) data. An interesting feature of the IML file is the use of "Calc"s
(values assigned to symbols) which can be used to manage dynamic data from the field such as
flow meter readings and laboratory analysis results. This means that interfacing or binding the
various data sources to the IML file is achieved by changing the value of a Calc and then using
this Calc in the rest of the data categories of the IML file. Another interesting feature is the use
of a "missing-value" or "missing-data" number we call a "non-naturally occurring number"
(NNON) typically set to -99999. This is useful to switch a measurement from being measured to
unmeasured i.e., if the value is NNON then it is to be regressed in the DRR, when performing
the gross-error detection and identification analysis similar to running multiple scenarios, cases
or situations to determine if the problem contains bad data before the production accounting
data is disseminated to other decision-making applications.
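The NNON convention described above amounts to a simple pre-processing rule. A minimal sketch, with hypothetical tag names and variances (only the -99999 sentinel value comes from the text):

```python
# Sketch of the NNON ("non-naturally occurring number") convention: any
# measurement equal to -99999 is treated as missing and switched from measured
# (finite variance) to unmeasured/regressed (infinite variance) before the DRR.
NNON = -99999.0

measurements = {"FCC,lpg density": 0.602, "T300,LPG density": NNON}
variances = {"FCC,lpg density": 1e-4, "T300,LPG density": 1e-4}

for tag, value in measurements.items():
    if value == NNON:
        variances[tag] = float("inf")  # to be regressed in the DRR

print(variances["T300,LPG density"])   # inf
```

Running the same DRR with a suspect measurement flagged as NNON is thus equivalent to the gross-error identification scenario of making that variable unmeasured.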
References
Kelly, J.D., "A regularization approach to the reconciliation of constrained data sets", Computers
& Chemical Engineering, 1771, (1998).
Kelly, J.D., "Practical issues in the mass reconciliation of large plant-wide flowsheets", AIChE
Spring Meeting, Houston, March, (1999).
Kelly, J.D., “The necessity of data reconciliation”, NPRA Computer Conference, Chicago,
November, (2000).
Kelly, J.D., "Production modeling for multimodal operations", Chemical Engineering Progress,
February, 44, (2004a).
Kelly, J.D., "Techniques for solving industrial nonlinear data reconciliation problems",
Computers & Chemical Engineering, 2837, (2004b).
Kelly, J.D., Mann, J.L., "Improve yield accounting by including density measurements explicitly",
Hydrocarbon Processing, January, (2005).
Kelly, J.D., Mann, J.L., Schulz, F.G., "Improve accuracy of tracing production qualities using
successive reconciliation", Hydrocarbon Processing, April, (2005).
Kelly, J.D., "The unit-operation-stock superstructure (UOSS) and the quantity-logic-quality
paradigm (QLQP) for production scheduling in the process industries", In: MISTA 2005
Conference Proceedings, 327, (2005).
Kelly, J.D., Zyngier, D., "A new and improved MILP formulation to optimize observability,
redundancy and precision for sensor network problems", American Institute of Chemical
Engineering Journal, 54, 1282, (2008a).
Kelly, J.D., Zyngier, D., "Continuously improve planning and scheduling models with parameter
feedback", FOCAPO 2008, July, (2008b).
Zyngier, D., Kelly, J.D., "UOPSS: a new paradigm for modeling production planning and
scheduling systems", ESCAPE 22, June, (2012).
Kelly, J.D., Hedengren, J.D., "A steady-state detection (SSD) algorithm to detect non-stationary
drifts in processes", Journal of Process Control, 23, 326, (2013).
Appendix A - APA-IMF.UPS (UOPSS) File
Appendix B - APA-IMF.IML File