Presented in this short document is a description of how to estimate deterministic and stochastic non-parametric finite impulse response (FIR) models in IMPL, applied to the same industrial gas furnace data found in TSE-GFD-IMF using parametric transfer functions. The methodology of time-series analysis or system identification involves essentially three (3) stages (Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and (3) model checking and diagnostics. We do not address (1), which requires stationarity and seasonality assessment/adjustment, auto-, cross- and partial-correlation, etc. to establish the parametric transfer function polynomial degrees, and which is less critical when using non-parametric FIR estimation. Instead we focus only on parameter estimation and diagnostics. These types of parameter estimation problems involve dynamic and nonlinear relationships and we solve them using IMPL’s Sequential Equality-Constrained QP Engine (SECQPE) and Supplemental Observability, Redundancy and Variability Estimator (SORVE). Another type of non-parametric identification, known as subspace identification (Qin, 2006), can be used to estimate state-space models.
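As a generic illustration of what non-parametric FIR estimation computes (not IMPL's SECQPE/SORVE machinery), the impulse-response weights can be fit by ordinary least squares on time-lagged inputs; the data below are synthetic rather than the gas furnace series.

```python
import numpy as np

def fit_fir(u, y, n_taps):
    """Fit a finite impulse response model y[t] ~ sum_k h[k]*u[t-k] by least squares."""
    rows = []
    for t in range(n_taps - 1, len(u)):
        rows.append(u[t - n_taps + 1:t + 1][::-1])  # u[t], u[t-1], ..., u[t-n+1]
    X = np.array(rows)
    Y = np.asarray(y[n_taps - 1:])
    h, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return h

# synthetic noise-free data with a known impulse response [0.5, 0.3, 0.1]
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
h_true = np.array([0.5, 0.3, 0.1])
y = np.convolve(u, h_true)[:len(u)]
h_est = fit_fir(u, y, 3)
```

With noise-free data the least-squares fit recovers the true weights exactly; with real plant data the estimates would carry variance, which is where diagnostics such as SORVE's variability estimates come in.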
Time Series Estimation of Gas Furnace Data in IMPL and CPLEX Industrial Model...Alkis Vazacopoulos
Presented in this short document is a description of how to estimate deterministic and stochastic time-series transfer function models in IMPL using IBM’s CPLEX, applied to industrial gas furnace data. The methodology of time-series analysis involves essentially three (3) stages (Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and (3) model checking and diagnostics. We do not address (1), which requires stationarity and seasonality assessment, auto-, cross- and partial-correlation, etc. to establish the transfer function polynomial degrees. Instead we focus only on parameter estimation and diagnostics. These types of parameter estimation problems involve dynamic and nonlinear relationships and we solve them using IMPL’s nonlinear programming algorithm SLPQPE, which uses CPLEX 12.6 as the QP sub-solver.
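For contrast with the non-parametric FIR case, a parametric transfer-function fit reduces to linear least squares when cast in ARX form; the sketch below uses synthetic data rather than the gas furnace series and is a generic illustration, not IMPL's SLPQPE algorithm.

```python
import numpy as np

def fit_arx1(u, y):
    """Least-squares fit of a first-order ARX model y[t] = a*y[t-1] + b*u[t-1]."""
    X = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta  # [a, b]

# simulate a first-order plant with known parameters a = 0.8, b = 0.5
rng = np.random.default_rng(1)
u = rng.standard_normal(300)
a_true, b_true = 0.8, 0.5
y = np.zeros(300)
for t in range(1, 300):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]
a_est, b_est = fit_arx1(u, y)
```

The two parameters (a, b) summarize what a long FIR weight sequence would need many coefficients to capture, which is the parsimony argument for transfer-function models.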
Presented in this short document is a description of what is called Advanced Process Monitoring (APM) as described by Hedengren (2013). APM is the term given to the technique of estimating unmeasured but observable variables or "states" using statistical data reconciliation and regression (DRR) in an off-line or real-time environment and is also referred to as Moving Horizon Estimation (MHE) (Robertson et al., 1996). Essentially, the model and data define a simultaneous nonlinear and dynamic DRR problem where the model is either engineering-based (first-principles, fundamental, mechanistic, causal, rigorous) or empirical-based (correlation, statistical data-based, observational, regressed) or some combination of both (hybrid).
Advanced Parameter Estimation (APE) for Motor Gasoline Blending (MGB) Indust...Alkis Vazacopoulos
Presented in this short document is a description of how to model and solve advanced parameter estimation (APE) problems in IMPL. APE is the term given to the application of estimating, fitting or calibrating parameters in models involving a network, topology, superstructure or flowsheet. When estimating parameters with multiple linear regression (MLR), ordinary least squares (OLS), ridge regression (RR), principal component regression (PCR) and partial least squares (PLS) there is no explicit model but simply an X-block and Y-block of data. Hence, these methods are referred to as “non-parametric” or “data-based” methods as opposed to the “parametric” or “model-based” method used here. To solve these types of problems we use what is commonly referred to as “error-in-variables” (EIV) regression which is conveniently implemented as nonlinear data reconciliation and regression (NDRR) using the technology found in Kelly (1998a; 1998b; 1999) and Kelly and Zyngier (2008a). The primary benefit of using EIV (NDRR) over the other regression methods is that we can easily handle the inclusion of conservation laws and constitutive relations, explicitly, a must for any industrial estimation problem (IEP).
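As a minimal illustration of the error-in-variables idea (not IMPL's NDRR technology), scipy's orthogonal distance regression accounts for noise in both the inputs and the outputs, unlike OLS which assumes the X-block is exact; the straight-line model and noise levels below are assumed purely for the example.

```python
import numpy as np
from scipy import odr

def linear(beta, x):
    # simple two-parameter model y = beta0*x + beta1
    return beta[0] * x + beta[1]

rng = np.random.default_rng(2)
x_true = np.linspace(0.0, 10.0, 50)
y_true = 2.0 * x_true + 1.0
x_obs = x_true + rng.normal(0.0, 0.3, 50)   # noise in the inputs too (EIV setting)
y_obs = y_true + rng.normal(0.0, 0.3, 50)

model = odr.Model(linear)
data = odr.RealData(x_obs, y_obs, sx=0.3, sy=0.3)  # stated measurement std-devs
result = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
slope, intercept = result.beta
```

In a full NDRR formulation the same error-in-all-variables objective would additionally be constrained by conservation laws and constitutive relations, which is the benefit the abstract highlights.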
Presented in this short document is a description of what we call "Advanced" Property Tracking or Tracing (APT). APT is the term given to the technique of predicting, simulating, calculating or estimating the properties (i.e., densities, compositions, conditions, qualities, etc.) in a network or superstructure with significant inventory using statistical data reconciliation and regression (DRR)
Advanced Production Accounting of an Olefins Plant Industrial Modeling Framew...Alkis Vazacopoulos
Presented in this short document is a description of what we call "Advanced" Production Accounting (APA) applied to a small Olefins Plant found in Sanchez and Romagnoli (1996). APA is the term given to the technique of vetting, screening or cleaning the past production data using statistical data reconciliation and regression (DRR) when continuous-processes are assumed to be at steady-state (Kelly and Hedengren, 2013) i.e., there is no significant material accumulation. For this case, the model and data define a simultaneous mass or volume linear DRR problem. Figure 1a shows the Olefins Plant using simple number indices for both the nodes and streams where Figure 1b depicts the same problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012).
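The linear mass-balance DRR described above can be sketched on a hypothetical three-stream splitter (not the Olefins flowsheet) using the closed-form weighted least-squares reconciliation of measurements subject to the balance constraints.

```python
import numpy as np

# toy balance: stream 1 splits into streams 2 and 3, so A @ x = 0 with A = [1, -1, -1]
A = np.array([[1.0, -1.0, -1.0]])
m = np.array([101.0, 62.0, 41.0])        # raw measurements (imbalance = -2)
Sigma = np.diag([1.0, 1.0, 1.0])         # measurement variances (weights = inverse)

# weighted least-squares reconciliation: x_hat = m - Sigma A' (A Sigma A')^-1 A m
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
x_hat = m - K @ (A @ m)
```

The reconciled flows satisfy the balance exactly while staying as close as possible (in the weighted sense) to the raw measurements, which is precisely the vetting or cleaning step APA performs at scale.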
Presented in this short document is a description of modeling and solving partial differential equations (PDE’s) in both the temporal and spatial dimensions using IMPL. The sample PDE problem is taken from Cutlip and Shacham (1999 and 2014) and models the process of unsteady-state heat transfer or conduction in a one dimensional (1D) slab with one face insulated and constant thermal conductivity as discussed by Geankoplis (1993).
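A minimal explicit finite-difference sketch of the 1D slab problem, with one face suddenly held at a fixed temperature and the other face insulated; the thermal diffusivity, grid and initial condition below are assumed for illustration and are not Cutlip and Shacham's exact values.

```python
import numpy as np

# unsteady-state 1D conduction dT/dt = alpha * d2T/dx2, explicit (FTCS) scheme
alpha, L, nx = 1e-5, 0.1, 21              # diffusivity (m^2/s), thickness (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                  # stability requires dt <= 0.5*dx^2/alpha
T = np.full(nx, 100.0)                    # initial slab temperature
T[0] = 0.0                                # exposed face suddenly held at 0
for _ in range(2000):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
    T[-1] = T[-2]                         # insulated face: zero temperature gradient
```

Discretizing the spatial dimension this way turns the PDE into a set of coupled ODEs in time, which is the same method-of-lines viewpoint a modeling platform uses when both dimensions are discretized.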
Presented in this short document is a description of what is called the (classic) “Pooling Optimization Problem”, first described in Haverly (1978), where he modeled a small distillate blending problem with three component materials (A, B, C), one pool for mixing or blending of only two components, two products (P1, P2) and one property (sulfur, S) as well as only one time-period. The GAMS file of this exact same problem is found in Appendix A, which describes all of the sets, lists, parameters, variables and constraints required to represent this problem. Related types of NLP sub-models can also be found in Kelly and Zyngier (2015), where they formulate other sub-types of continuous-processes such as blenders, splitters, separators, reactors, fractionators and black-boxes for ad hoc or custom sub-models.
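Haverly's pooling problem can be sketched as a small bilinear NLP; the sketch below uses scipy's local SLSQP solver started at the known global solution (a profit of 400) purely for illustration, since a local solver offers no global guarantee on this nonconvex problem.

```python
import numpy as np
from scipy.optimize import minimize

# variables: a, b (feeds to pool), p1, p2 (pool to products), c1, c2 (C direct), q (pool sulfur %)
def profit(x):
    a, b, p1, p2, c1, c2, q = x
    return 9*(p1 + c1) + 15*(p2 + c2) - 6*a - 16*b - 10*(c1 + c2)

cons = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - x[2] - x[3]},                 # pool balance
    {"type": "eq",   "fun": lambda x: x[6]*(x[0] + x[1]) - (3*x[0] + x[1])},      # pool sulfur (bilinear)
    {"type": "ineq", "fun": lambda x: 2.5*(x[2] + x[4]) - (x[6]*x[2] + 2*x[4])},  # P1 sulfur spec
    {"type": "ineq", "fun": lambda x: 1.5*(x[3] + x[5]) - (x[6]*x[3] + 2*x[5])},  # P2 sulfur spec
    {"type": "ineq", "fun": lambda x: 100 - (x[2] + x[4])},                       # P1 demand
    {"type": "ineq", "fun": lambda x: 200 - (x[3] + x[5])},                       # P2 demand
]
x0 = [0.0, 100.0, 0.0, 100.0, 0.0, 100.0, 1.0]   # the known global solution, used as the start
res = minimize(lambda x: -profit(x), x0, bounds=[(0, None)]*6 + [(1, 3)],
               constraints=cons, method="SLSQP")
```

The bilinear pool-sulfur constraint q*(a+b) = 3a + b is what makes the problem nonconvex; started elsewhere, a local solver can terminate at an inferior local optimum, which is why pooling problems are a standard global-optimization benchmark.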
The Haar-Recursive Transform and Its Consequence to the Walsh-Paley Spectrum ...IJERA Editor
The Walsh and Haar spectral transforms play a crucial part in the analysis, design, and testing of digital devices. They are most suitable for analysis and synthesis of switching or Boolean functions (BFs). It is well known that the connection between the two spectral domains is given in terms of the Walsh-Paley transform. This paper derives an alternative expression of the Walsh-Paley transform in terms of the Haar transform. The work demonstrates the possibility of obtaining both the Haar spectrum and the Walsh-Paley spectrum using only the Haar transform domain. The paper introduces a new Haar-based transform algorithm (Haar-Paley-Recursive Transform, HPRT) in the form of a recursive function along with its fast transform version. The new algorithm is then explored in its interpretation of the Walsh-Paley transform and its connection to the autocorrelation function (ACF) of a BF. The connection is given analogously in terms of the Haar-Paley power spectrum via the Wiener-Khintchine theorem. The paper then presents simulation results on the execution times of both derived algorithms in comparison to the existing Walsh benchmark. The work shows the advantages of using the Haar transform domain in computing the Walsh-Paley spectrum and, in effect, the ACF.
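For orientation on what a recursive (fast) Haar transform looks like, a minimal sketch with averaging normalization is given below; it is a generic textbook form, not the paper's HPRT algorithm.

```python
import numpy as np

def haar(v):
    """Recursive (fast) Haar transform of a vector whose length is a power of two."""
    v = np.asarray(v, dtype=float)
    if len(v) == 1:
        return v
    sums = (v[0::2] + v[1::2]) / 2.0    # pairwise averages carried up the recursion
    diffs = (v[0::2] - v[1::2]) / 2.0   # pairwise detail coefficients
    return np.concatenate([haar(sums), diffs])

def inverse_haar(c):
    """Exact inverse of haar() above."""
    c = np.asarray(c, dtype=float)
    if len(c) == 1:
        return c
    half = len(c) // 2
    low = inverse_haar(c[:half])
    out = np.empty(len(c))
    out[0::2] = low + c[half:]
    out[1::2] = low - c[half:]
    return out
```

Each recursion level halves the problem, giving the O(n) arithmetic count that makes Haar-domain computation attractive relative to the O(n log n) fast Walsh transform.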
The following resources come from the 2009/10 BEng in Digital Systems and Computer Engineering (course number 2ELE0065) from the University of Hertfordshire. All the mini projects are designed as level two modules of the undergraduate programmes.
The objectives of this module are to demonstrate, within an embedded development environment:
• Processor – to – processor communication
• Multiple processors to perform one computation task using parallel processing
This project requires the establishment of a communication protocol between two 68000-based microcomputer systems. Using ‘C’, students will write software to control all aspects of a complex data-transfer system, demonstrating knowledge of handshaking, transmission protocols, transmission overhead, bandwidth and memory addressing. Students will then demonstrate and analyse parallel processing of a mathematical problem using two processors. This project requires two students working as a team.
It defines what a Programmable Logic Array (PLA) is and explains it in easy wording with syntax and an example.
It also covers combinational and sequential logic circuits and the difference between the two. :)
An Introduction to the SOLID Principles...Attila Bertók
SOLID Principles are the most important principles of writing maintainable, easy-to-read, easy-to-write clean code. This presentation attempts to give a basic overview of these principles with some examples of violations and ways to correct them.
The installation of IMPL is a straightforward procedure and requires the following prerequisites: two redistributable components from Microsoft and Intel, two open-source applications called Dia and Matplotlib (with NumPy), as well as two versions of the freely usable and distributable Python programming language. It is important to install each component in the order in which it appears in this manual. Note that Dia, Matplotlib and Python are only required to create a model’s flowsheet graphically and to view a solution’s data in a Gantt chart with trend plots. If only IMPL is required, then only the Microsoft and Intel redistributable packages must be installed.
It is also possible to install the free Notepad++ to help configure IML files with syntax highlighting. In addition, the free Visual Studio 2010 C++ Express may also be installed to write C or C++ programs calling IMPL, similar to the IMPL console program, as well as to increase the stack-size of Microsoft Excel when calling IMPL from VBA.
After the prerequisites have been installed, the installation of IMPL itself is simply a matter of extracting the files from the IMPL.zip file into a directory such as C:\IMPL. To run IMPL from a DOS command window prompt or console, type the following inside the C:\IMPL directory:
impl -feed=IMLfile -filter=logistics|quality -fork=coinmp|glpk|lpsolve|ipopt|slpqpe_
where IMLfile is your *.iml filename without the IML extension; select either coinmp, glpk or lpsolve as the MILP solver with logistics, and ipopt, slpqpe_coinmp, slpqpe_glpk or slpqpe_lpsolve as the NLP solver with quality.
Advanced Process Monitoring for Startups, Shutdowns & Switchovers Industrial ...Alkis Vazacopoulos
Presented in this short document is a description of what is called “Advanced” Process Monitoring as described by Hedengren (2013) but related to Startups, Shutdowns and Switchovers-to-Others (APM-SUSDSO). APM is the term given to the technique of estimating or fitting unmeasured but observable variables or "states" using statistical data reconciliation and regression (DRR) in an off-line or real-time environment. It is also referred to as Moving Horizon Estimation (MHE) (Robertson et al., 1996) in Advanced Process Control (APC), which goes beyond simply updating a bias to implement some form of measurement or parameter feedback (Kelly and Zyngier, 2008b). Essentially, the model and data define a simultaneous nonlinear and dynamic DRR problem where the model is either engineering-based (first-principles, fundamental, mechanistic, causal, rigorous) or empirical-based (correlation, statistical data-based, observational, regressed) or some combination of both (hybrid) (Pantelides and Renfro, 2012).
Presented in this short document is a description of what we call "Phasing" and "Planuling". Phasing is a variation of the sequence-dependent changeover problem (Kelly and Zyngier, 2007; Balas et al., 2008) except that the sequencing, cycling or phasing is fixed as opposed to being variable or free. Planuling is a portmanteau of planning and scheduling where we "schedule" slow processes and "plan" fast processes together inside the same time-horizon; it can also be considered as "hybrid" planning and scheduling.
Presented in this short document is a description of what is well-known as Advanced Process Control (APC) applied to a small linear three (3) manipulated variable (MV) by two (2) controlled variable (CV) problem. These problems are also known as Model Predictive Control (MPC) (Grimm et al., 1989) and Moving Horizon Control (MHC). Figure 1 shows the 3 x 2 APC problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012) as an Advanced Planning and Scheduling (APS) problem as opposed to a traditional APC problem.
Although there is a tremendous amount of stability, performance and robustness theory associated with APC which can be directly applied to APS problems (Mastragostino et al., 2014), our approach is to show that APC can equally be set into an APS framework, except that APS has far less sensitivity technology due to its inherent discrete and nonlinear modeling complexities, i.e., especially non-convexities. In order to eliminate the steady-state offset between the actual value and its target, it is well-known to apply bias-updating, though other forms of “parameter-feedback” are possible. Typically, APS applications only employ “variable-feedback”, i.e., opening or initial inventories, properties, etc., but this alone will not alleviate the steady-state offset as demonstrated by Kelly and Zyngier (2008).
The IML file is our user readable import or input file to the IMPL modeling and solving platform. IMPL is an acronym for Industrial Modeling and Programming Language provided by Industrial Algorithms LLC. The IML file allows the user to configure the necessary data to model and solve large-scale and complex industrial optimization problems (IOP's) such as planning, scheduling, control and data reconciliation and regression in either off or on-line environments.
The data configurable in the IML file are broken down into several categories or classes, and these data categories are used as further sections in this basic reference manual. This reference manual is specific only to the quantity dimension of what we refer to as the Quantity-Logic-Quality Phenomena (QLQP). The QLQP provides a useful phenomenological break-down of the problem complexity, where the quantity dimension details quantities such as flows, rates, holdups and yields, and the quantities can be related to any stock or signal including time. The other two dimensions are not the focus of this documentation but, for completeness of the description, logic data have setups, startups, switchovers-to-itself, shutdowns and switchovers-to-others (sequence-dependent transitions) and quality data have densities, components, properties and conditions.

In addition to the QLQP, we also have what we call the Unit-Operation-Port-State Superstructure (UOPSS). This provides the flowsheet or topology of the IOP in terms of the various shapes, constructs or objects necessary to configure it. The UOPSS is more than a single network given that it is comprised of two networks we call the "physical" network and the "procedural" network. The physical network involves the units and ports (equipment, structural) and the procedural network involves the operations and states (activities, functional). The combination or cross-product of the two derives the "projectional" superstructure, and it is to these superstructure constructs or UOPSS keys that we apply, attach or associate specific QLQP attributes, where projections are also known as hypothetical, logical or virtual constructs. Ultimately, when we augment the superstructure with the time or temporal dimension as well as including multiple sites or echelons, i.e., sub-superstructures, we are essentially configuring what is known as a "hyperstructure".
A study of the Behavior of Floating-Point Errorsijpla
The dangers of programs performing floating-point computations are well known. This is due to numerical reliability issues resulting from rounding errors arising during the computations. In general, these round-off errors are neglected because they are small. However, they can be accumulated and propagated and lead to faulty execution and failures. Typically, in critical embedded-systems scenarios, these faults may cause dramatic damage (e.g., the failures of the Ariane 5 launch and the Patriot missile mission). The ufp (unit in the first place) and ulp (unit in the last place) functions are used to estimate the maximum value of round-off errors. In this paper, the idea consists of studying the behavior of round-off errors, checking their numerical stability using a set of constraints and ensuring that the computed round-off errors do not become larger when solving constraints on the ufp and ulp values.
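The ufp/ulp quantities mentioned above can be illustrated in a few lines of Python (assuming Python 3.9+ for math.ulp); the ufp helper below is a hypothetical name written for this sketch.

```python
import math

def ufp(x):
    """Unit in the first place: the weight of the most significant bit of |x|."""
    return 0.0 if x == 0 else 2.0 ** math.floor(math.log2(abs(x)))

# unit in the last place of 1.0 is the double-precision machine epsilon, 2**-52
eps = math.ulp(1.0)

# a classic accumulated round-off: summing 0.1 ten times is not exactly 1.0
s = sum([0.1] * 10)
err = abs(s - 1.0)
```

The accumulated error here stays below one ulp of the result, but in long-running computations such errors can propagate well beyond a single ulp, which is the behavior the paper's constraint analysis bounds.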
Please see our IML “(Basic) Reference Manual for Quantities” for a complete introduction on the basics of IML. This manual describes the configuration data necessary to model and solve IOP’s with logic and logistics (quantity and logic) variables and constraints i.e., setups, startups, shutdowns, switchovers, sequence-dependent switchovers, etc.
The symbol "&" denotes an address, index, pointer or key; "@" denotes an attribute, property, characteristic or value; and the prefix "s" stands for string, with two other prefixes, "r" and "i", for reals (double precision) and integers respectively. String addresses and attributes are case-sensitive and do not require any quotes; essentially any character is allowed, including spaces, except for ",". Each address string field may have no more than 64 characters for it to be considered unique and each attribute string field may have no more than 512 characters.
Presented in this short document is a description of what is called a “Pipeline Scheduling Optimization Problem” and was first described in Rejowski and Pinto (2003) where they modeled the first-in-first-out (FIFO) and multi-product nature of the segregated pipeline using both discretized space (multi-batches, packs or pipes) and time (multi-intervals, slots or periods). The same MILP model can also be found in Zyngier and Kelly (2009) along with other related production/process objects.
Presented in this short document is a description of our three separate techniques to analyze the data by checking, clustering and componentizing it before it is used by other IMPL’s routines especially in on-line/real-time decision-making applications. We also have other data consistency or analysis techniques which have been described in other IMPL documents and these relate to the application of data reconciliation and regression with diagnostics but require an explicit model (model-based) whereas the techniques below do not i.e., they are data-based techniques.
Gene's law, Common gate, kernel Principal Component Analysis, ASIC Physical Design Post-Layout Verification, TSMC180nm, 0.13um IBM CMOS technology, Cadence Virtuoso, FPAA, in Spanish, Bruun E,
The aim of this paper is to prove that a fuzzy logic algorithm is a suitable control technique for fast processes such as electrical machines. This approach has been tested on different kinds of electrical machines such as stepping motors, DC motors and induction machines (with 6 phases), and the experimental results show that the proposed fuzzy logic algorithm is the most suitable control technique for electrical machines since it is not time-consuming and is also robust to plant parameter variations.
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cutTELKOMNIKA JOURNAL
With qualified data or information, a better decision can be made. The interval width of a forecast is one of the data values that assists the selection and decision-making process with regard to crime prevention. However, in time-series forecasting, especially with the ARIMA model, the amount of historical data available can affect the forecasting result, including the interval width of the forecast value. This study proposes a combination technique in order to obtain a better interval width for crime forecasting. The proposed combination of the ARIMA model and the Fuzzy Alpha Cut is presented, using a variation of alpha values: 0.3, 0.5 and 0.7. The experimental results show that ARIMA-FAC with alpha=0.5 is appropriate. The overall results show that the interval width of crime forecasting with ARIMA-FAC is better than the interval width of crime forecasting with the 95% CI of the ARIMA model.
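The alpha-cut construction can be sketched on a triangular fuzzy number placed around a hypothetical point forecast; the paper's exact mapping from ARIMA output to the fuzzy number is not reproduced here.

```python
def alpha_cut(a, b, c, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c) with peak at b."""
    lo = a + alpha * (b - a)
    hi = c - alpha * (c - b)
    return lo, hi

# hypothetical point forecast of 50 crimes with a +/- 10 spread
for alpha in (0.3, 0.5, 0.7):
    lo, hi = alpha_cut(40.0, 50.0, 60.0, alpha)
    # alpha = 0.5 gives the interval (45.0, 55.0), half the base width
```

Raising alpha tightens the interval toward the point forecast, which is the trade-off the study tunes when it compares alpha = 0.3, 0.5 and 0.7 against the 95% CI.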
Speed Control of DC Motor using PID FUZZY Controller.Binod kafle
Speed control of a separately excited DC motor using a fuzzy PID controller (FLC). In this research, the speed of a separately excited DC motor is controlled at 1500 RPM using two approaches: a PSO-tuned PID and a fuzzy-logic-based PID controller. A mathematical model of the system is needed for the PSO PID, while knowledge-based rules obtained via experiment are required for the fuzzy PID controller. The conventional PID controller parameters are obtained using the PSO optimization technique. The simulation is performed using the built-in toolbox from MATLAB and the output responses are analyzed. The tuning of the fuzzy PID uses a simple approach based on the proposed rules and the membership functions of the fuzzy variables. The design of the fuzzy logic controller (FLC) requires fuzzification, a rule list and a defuzzification process. The FLC has two inputs and three outputs. The inputs are the speed error and the rate of change of the speed error; the corresponding outputs are Kp, Ki and Kd. There are 25 fuzzy rules. The FLC uses a Mamdani system, which employs fuzzy sets in the consequent part. The obtained results are compared on the basis of rise time, peak time, settling time, overshoot and steady-state error. The PSO PID controller has a fast response but slightly greater overshoot, whereas the fuzzy PID controller has a sluggish response but low overshoot. The selection can be made on the basis of system properties and working-environment conditions: the PSO PID can be used where a fast response is desired, as in robotics, whereas the fuzzy PID can be used where smooth operation is desired, as in industry.
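A generic discrete PID loop driving a toy first-order motor model toward the 1500 RPM setpoint can be sketched as follows; the gains and plant below are assumed for illustration and are not the paper's MATLAB/PSO or fuzzy designs.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy first-order motor model y' = (-y + u)/tau, simulated with explicit Euler
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
tau, dt, y = 0.5, 0.01, 0.0
for _ in range(5000):
    u = pid.update(1500.0, y)
    y += dt * (-y + u) / tau
```

The integral term is what removes the steady-state error here; a fuzzy PID would instead adjust Kp, Ki and Kd on-line from the error and its rate of change via the rule base.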
We tested ODH|CPLEX 4.24 on the MIPLIB Open-v7 models, a public collection of 286 models for which an optimal solution has not been proven. 257 of these are known to have a feasible solution.
ODH|CPLEX proved optimality on 6 models and found better solutions in 2 hours, to 40% of the models with 12 threads and 35% with 8 threads. ODH|CPLEX matched on 21% of the models.
EX Optimization Studio* solves large-scale optimization problems and enables better business decisions and resulting financial benefits in areas such as supply chain management, operations, healthcare, retail, transportation, logistics and asset management. It has been applied in sectors as diverse as manufacturing, processing, distribution, retailing, transport, finance and investment. CPLEX Optimization Studio is an analytical decision support toolkit for rapid development and deployment of optimization models using mathematical and constraint programming. It combines an integrated development environment (IDE) with the powerful Optimization Programming Language (OPL) and high-performance ILOG CPLEX optimizer solvers. CPLEX Optimization Studio enables clients to: Optimize business decisions with high-performance optimization engines. Develop and deploy optimization models quickly by using flexible interfaces and prebuilt deployment scenarios. Create real-world applications that can significantly improve business outcomes. Optimization Direct has partnered with and entered into a technology licensing and distribution agreement with IBM. By combining the founders' industry and software experience and IBM’s CPLEX Optimization Studio product with the arsenal of Optimization modeling and solving tools from IBM provides customers the most powerful capabilities in the industry.
Missing-Value Handling in Dynamic Model Estimation using IMPL Alkis Vazacopoulos
Presented in this short document is a description of how IMPL handles missing-values or missing-data when estimating dynamic models which inherently involve time-lagged or time-shifted input and output variables. Missing-values in a data set imply that for some reason the data is not available most likely due to a mal-functioning instrument or even lack of proper accounting. Missing-data handling is relatively well-studied especially for time-series or dynamic data given that it is not as easy as removing, ignoring or deleting bad sections of data when static or steady-state models are calibrated (Honaker and King, 2010; Smits and Baggelaar, 2010; Fisher and Waclawski, 2015). Unfortunately, all of their methods involve what is known as “imputation” i.e., replacing or substituting missing-data with some reasonably assumed value which is at the very least is a biased estimate. When regression techniques such as PLS and PCR are used (Nelson et. al., 2006) then missing-data can be handled without imputation by computing the input-output covariance matrices excluding the contribution from the missing-values given the temporal and structural redundancy in the system. However, it is shown in Dayal (1996) that using PLS and other types of regression techniques such as Canonical Correlation Regression (CCR) and Reduced Rank Regression (RRR) to fit non-parsimonious and non-parametric finite impulse/step response models (FIR/FSR), that this is not as reliable as fitting lower-ordered transfer functions especially considering the robust stability of the resulting model predictive controller if that is its intended use.
Our Industrial Modeling Service (IMS) involves several important (but rarely implemented) methods to significantly improve and advance your existing models and data. Since it is well-known that good decision-making requires good models and data, IMS is ideally suited to support this continuous-improvement endeavour. IMS is specifically designed to either co-exist with your existing design, planning, scheduling, etc. applications or these same models and data can be used seamlessly into our Industrial Modeling and Programming Language (IMPL) to create new value-added applications. The following techniques form the basis of our IMS offering.
This short note describes a relatively simple methodology, procedure or approach to increase the performance of already installed industrial models used for optimization, control, simulation and/or monitoring purposes. The method is called Excess or X-Model Regression (XMR) where the concept of “excess modeling” or an X-model is taken from the field of thermodynamics to describe the departure or residual behaviour of real (non-ideal) gases and liquids from their ideal state (Kyle, 1999; Poling et. al., 2001; Smith et. al., 2001). It has also been applied to model the non-ideal or nonlinear behaviour of blending motor gasoline octanes with its synergistic and antagonistic interactional effects (Muller, 1992).
The fundamental idea of XMR is to calibrate, train, fit or estimate, using actual data and multiple linear regression (MLR) or ordinary least squares (OLS), the deviations of the measured responses from the existing model responses. The existing model may be a glass, grey or black-box model (known or unknown, linear or nonlinear, implicit/open or explicit/closed) depending on the use of the model. That is, for optimization and control the model structure and parameters are available given that derivative information is required although for simulation and monitoring, the model may only be observed through the dependent output variables given the necessary independent input variables.
Presented in this short document is a description of how to model and solve multi-utility scheduling optimization (MUSO) problems in IMPL. Multi-utility systems (co/tri-generation) are typically found in petroleum refineries and petrochemical plants (multi-commodity systems) especially when fuel-gas (i.e., off-gases of methane and ethane) is a co- or by-product of the production from which multi-pressure heating-, motive- and process-steam are generated on-site. Other utilities include hydrogen, electricity, water, cooling media, air, nitrogen, chemicals, etc. where a multi-utility system is shown in Figure 1 with an intermediate or integrated utility (both produced and consumed) such as fuel-gas, steam or electricity. Itemized benefit areas just for better management of an integrated steam network can be found in Pelham (2013) where his sample multi-pressure steam utility flowsheet is found in Figure 2.
As Europe's leading economic powerhouse and the fourth-largest hashtag#economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like hashtag#Russia and hashtag#China, hashtag#Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in hashtag#cyberattack sophistication aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to hashtag#AdvancedPersistentThreats (hashtag#APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Finite Impulse Response Estimation of Gas Furnace Data in IMPL Industrial Modeling Framework (FIRE-GFD-IMF)
1. Finite Impulse Response Estimation of Gas Furnace Data in IMPL
Industrial Modeling Framework (FIRE-GFD-IMF)
i n d u s t r I A L g o r i t h m s LLC. (IAL)
www.industrialgorithms.com
December 2014
Introduction to Finite Impulse Response Estimation, UOPSS and QLQP
Presented in this short document is a description of how to estimate deterministic and stochastic
non-parametric finite impulse response (FIR) models in IMPL applied to industrial gas furnace
data identical to that found in TSE-GFD-IMF using parametric transfer-functions. The
methodology of time-series analysis or system identification involves essentially three (3) stages
(Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and
(3) model checking and diagnostics. We do not address (1) which requires stationarity and
seasonality assessment/adjustment, auto-, cross- and partial-correlation, etc. to establish the
parametric transfer function polynomial degrees especially when we are using non-parametric
FIR estimation. Instead we focus only on the parameter estimation and diagnostics. These
types of parameter estimation problems involve dynamic and nonlinear relationships shown
below and we solve these using IMPL’s Sequential Equality-Constrained QP Engine (SECQPE)
and Supplemental Observability, Redundancy and Variability Estimator (SORVE). Another type
of non-parametric identification, known as subspace identification (Qin, 2006), can be used to
estimate state-space models.
Figure 1 shows the gas furnace data example found in Series J of Box and Jenkins (1976)
where we depict the problem using signal processing constructs configured in our
unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012).
Figure 1. Gas Furnace Data in a “Signal Processing” Based UOPSS Flowsheet.
The diamond shapes or objects are the sources and sinks known as perimeters where U1 and
Y are the input gas flowrate and CO2% in the effluent flue gas of the furnace respectively. The
rectangular shapes with the cross-hairs are continuous-process units where the
DeterministicModel (process) and StochasticModel (noise) are blackbox subtypes which allow
any number of process or operating conditions (and coefficients) to be attached with ad hoc
formulas or equations representing the non-parametric FIR. The Splicer is a signal processing
shape to add and/or subtract the inlet signals together producing a single output i.e., Z = Y – X1
in our case. The circles with and without cross-hairs are outlet and inlet port-states respectively.
Port-states are unambiguous interfaces between up and downstream unit-operations.
The deterministic FIR model in discrete-time or difference form (versus z-transform or
backwards-shift operator-based quotients of rational polynomials) is defined as follows:
X1,0 = G0*U1,0+G1*U1,1+G2*U1,2+G3*U1,3+G4*U1,4+G5*U1,5+G6*U1,6+G7*U1,7+G8*U1,8+G9*U1,9+G10*U1,10
where U1 is the exogenous input signal minus its mean of -0.057 at time-periods t-1, t-2, …,
t-10 and X1 is the “deterministic state” at time-period t-0. The corresponding parameters or
coefficients G0, G1, …, G10 are the FIR values for each time-shift or lag in the past where
G0 is typically set to zero (0.0) given discrete-time sampling and some of the initial coefficients
G1, etc. will also be effectively zero (0.0) given the dead-time or inherent delay in the system.
The static or steady-state gain of each input with respect to each output can be easily calculated
from the dynamic gains or FIR’s by taking their sum. During the initial part of the estimation
procedure, the number of FIR’s is typically set to some number greater than expected. Then,
using the parameter variances and the Student-t statistics (parameter confidence-intervals) the
actual number is reduced to hopefully avoid over-parameterization which is a well-known
disadvantage of FIR’s and non-parametric models in general.
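The deterministic FIR equation and its static gain can be sketched in a few lines of plain Python (this is only an illustration, not the IMPL/IML implementation; the G values are the non-zero coefficients reported later in the synopsis):

```python
# Deterministic FIR model: X1_t = sum_k G_k * u1_{t-k}, with G0-G2 zero
# (discrete-time sampling and dead-time) and G9-G10 zero (truncation).
G = [0.0, 0.0, 0.0, -0.534, -0.667, -0.861, -0.496, -0.260, -0.123, 0.0, 0.0]

def fir_predict(u1, G):
    """Predict the deterministic state X1 at each time-period t from the
    mean-centered input series u1 by discrete convolution with the FIR weights."""
    n = len(G)
    return [sum(G[k] * u1[t - k] for k in range(n)) for t in range(n - 1, len(u1))]

# The static or steady-state gain is simply the sum of the impulse weights.
static_gain = sum(G)
print(round(static_gain, 3))  # -2.941
```

A sustained unit step in u1 drives every prediction to the static gain, which is the sum-of-FIR's property used later when comparing against the transfer-function gain.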
Similarly, the stochastic FIR model or unmeasured noise disturbance model is also defined as
follows (Schoukens et. al. 2011):
A,0 = H0*Z,0+H1*Z,1+H2*Z,2+H3*Z,3+H4*Z,4+H5*Z,5+H6*Z,6+H7*Z,7+H8*Z,8+H9*Z,9+H10*Z,10
where A is the assumed white-noise input signal and Z is the “stochastic state” which is equal to
Z = Y – X1 and the time-series Y is also minus its mean of 53.51. The parameters H0, H1, …,
H10 actually represent the “inverse” of the stochastic noise model in non-parametric form. The
noise model is essentially a time- or frequency-dependent weighting filter and is very important
to ensure that the A-series is independent and identically distributed (MacGregor and Fogal,
1995; Shreesha and Gudi, 2004; Schoukens et. al. 2011) which is squared, summed and
minimized in the objective function of the prediction error or nonlinear least-squares regression.
Without this noise filter, the estimation is well-understood to be biased, yielding inaccurate FIR
coefficients; unfortunately, including it makes the estimator nonlinear since H and Z are both variables.
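The stochastic part can be sketched the same way (again only an illustration; H1 and H2 take the values reported later in the synopsis, with H0 fixed at one):

```python
# Stochastic FIR or inverse noise model: A_t = sum_k H_k * Z_{t-k}, where
# Z = Y - X1 is the "stochastic state" and A should be whitened residuals.
H = [1.0, -1.522, 0.613]

def whiten(z, H):
    """Apply the inverse noise filter to the stochastic state series Z,
    producing the residual series A that the estimator squares and sums."""
    n = len(H)
    return [sum(H[k] * z[t - k] for k in range(n)) for t in range(n - 1, len(z))]

def sse(a):
    """Prediction-error objective: sum of squared whitened residuals."""
    return sum(v * v for v in a)
```

Note that in the actual estimation both H and Z are decision variables, which is what makes the prediction-error regression nonlinear; here they are shown as fixed values only to illustrate the filtering step.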
From a quantity-logic-quality phenomena (QLQP) perspective, the time-series U1, Y, Z and A
found in Figure 1 are considered as flows or more appropriately signal-flows or data. However,
in our IML implementation found in Appendix A we collapse the three (3) continuous-processes
into one blackbox model as shown by the dotted-line box in Figure 1 where the flows of U1, Y, Z
and A are now considered as conditions and the G and H parameters are static coefficients in
the IMPL semantics.
Once the FIR parameters are known then these can be straightforwardly implemented into
advanced process controllers such as found in APC-IMF-Julia. It should also be stressed that
for multiple-input, multiple-output (MIMO) processes, the design of the input time-series plays
an important role in the success of regressing good and useful dynamic representations such as
transfer-function, state-space and FIR models where a novel design of the external excitation or
dither signals can be found in DSDP-CLE-IMF for open- and/or closed-loop identification.
Industrial Modeling Framework (IMF), IMPL and SSIIMPLE
To implement the mathematical formulation of this and other systems, IAL offers a unique
approach and is incorporated into our Industrial Modeling Programming Language we call IMPL.
IMPL has its own modeling language called IML (short for Industrial Modeling Language) which
is a flat or text-file interface as well as a set of API's which can be called from any computer
programming language such as C, C++, Fortran, C#, VBA, Java (SWIG), Python (CTYPES)
and/or Julia (CCALL) called IPL (short for Industrial Programming Language) to both build the
model and to view the solution. Models can be a mix of linear, mixed-integer and nonlinear
variables and constraints and are solved using a combination of LP, QP, MILP and NLP solvers
such as COINMP, GLPK, LPSOLVE, SCIP, CPLEX, GUROBI, LINDO, XPRESS, CONOPT,
IPOPT, KNITRO and WORHP as well as our own implementation of SLP called SLPQPE
(Successive Linear & Quadratic Programming Engine) which is a very competitive alternative to
the other nonlinear solvers and embeds all available LP and QP solvers.
In addition and specific to DRR problems, we also have a special solver called SECQPE
standing for Sequential Equality-Constrained QP Engine which computes the least-squares
solution and a post-solver called SORVE standing for Supplemental Observability, Redundancy
and Variability Estimator to estimate the usual DRR statistics. SECQPE also includes a
Levenberg-Marquardt regularization method for nonlinear data regression problems and can be
presolved using SLPQPE i.e., SLPQPE warm-starts SECQPE. SORVE is run after the
SECQPE solver and also computes the well-known "maximum-power" gross-error statistics
(measurement and nodal/constraint tests) to help locate outliers, defects and/or faults i.e.,
mal-functions in the measurement system and mis-specifications in the logging system.
The underlying system architecture of IMPL is called SSIIMPLE (we hope literally) which is short
for Server, Solvers, Interfacer (IML), Interacter (IPL), Modeler, Presolver Libraries and
Executable. The Server, Solvers, Presolver and Executable are primarily model or
problem-independent whereas the Interfacer, Interacter and Modeler are typically domain-specific i.e.,
model or problem-dependent. Fortunately, for most industrial planning, scheduling,
optimization, control and monitoring problems found in the process industries, IMPL's standard
Interfacer, Interacter and Modeler are well-suited and comprehensive to model the most difficult
of production and process complexities allowing for the formulations of straightforward
coefficient equations, ubiquitous conservation laws, rigorous constitutive relations, empirical
correlative expressions and other necessary side constraints.
User, custom, adhoc or external constraints can be augmented or appended to IMPL when
necessary in several ways. For MILP or logistics problems we offer user-defined constraints
configurable from the IML file or the IPL code where the variables and constraints are
referenced using unit-operation-port-state names and the quantity-logic variable types. It is also
possible to import a foreign *.ILP file (row-based MPS file) which can be generated by any
algebraic modeling language or matrix generator. This file is read just prior to generating the
matrix and before exporting to the LP, QP or MILP solver. For NLP or quality problems we offer
user-defined formula configuration in the IML file and single-value and multi-value function
blocks writable in C, C++ or Fortran. The nonlinear formulas may include intrinsic functions
such as EXP, LN, LOG, SIN, COS, TAN, MIN, MAX, IF, NOT, EQ, NE, LE, LT, GE, GT and CIP,
LIP, SIP and KIP (constant, linear and monotonic spline interpolations) as well as user-written
extrinsic functions (XFCN). It is also possible to import another type of foreign file called the
*.INL file where both linear and nonlinear constraints can be added easily using new or existing
IMPL variables.
Industrial modeling frameworks or IMF's are intended to provide a jump-start to an industrial
project implementation i.e., a pre-project if you will, whereby pre-configured IML files and/or IPL
code are available specific to your problem at hand. The IML files and/or IPL code can be
easily enhanced, extended, customized, modified, etc. to meet the diverse needs of your project
and as it evolves over time and use. IMF's also provide graphical user interface prototypes for
drawing the flowsheet as in Figure 1 and typical Gantt charts and trend plots to view the solution
of quantity, logic and quality time-profiles. Current developments use Python 2.3 and 2.7
integrated with open-source Gnome Dia and Matplotlib modules respectively but other
prototypes embedded within Microsoft Excel/VBA for example can be created in a
straightforward manner.
However, the primary purpose of the IMF's is to provide a timely, cost-effective, manageable
and maintainable deployment of IMPL to formulate and optimize complex industrial
manufacturing systems in either off-line or on-line environments. Using IMPL alone would be
somewhat similar (but not as bad) to learning the syntax and semantics of an AML as well as
having to code all of the necessary mathematical representations of the problem including the
details of digitizing your data into time-points and periods, demarcating past, present and future
time-horizons, defining sets, index-sets, compound-sets to traverse the network or topology,
calculating independent and dependent parameters to be used as coefficients and bounds and
finally creating all of the necessary variables and constraints to model the complex details of
logistics (discrete) and quality (nonlinear) industrial optimization problems. Instead, IMF's and
IMPL provide, in our opinion, a more elegant and structured approach to industrial modeling and
solving so that you can capture the benefits of advanced decision-making faster, better and
cheaper.
Finite Impulse Response Estimation of Gas Furnace Data Synopsis
After iterating using SECQPE several times and setting certain G and H coefficients to zero
(0.0) depending on their reported confidence-intervals from SORVE, which is the typical
protocol especially with non-parametric estimation, their values with two (2) times their
standard-error are shown below:
G0 = 0.0
G1 = 0.0
G2 = 0.0
G3 = -0.534 +/- 0.15
G4 = -0.667 +/- 0.16
G5 = -0.861 +/- 0.16
G6 = -0.496 +/- 0.16
G7 = -0.260 +/- 0.13
G8 = -0.123 +/- 0.10
G9 = 0.0
G10 = 0.0
H0 = 1.0
H1 = -1.522 +/- 0.10
H2 = 0.613 +/- 0.10
H5 = 0.0
H6 = 0.0
H7 = 0.0
H8 = 0.0
H9 = 0.0
H10 = 0.0
The objective function value computed is 16.9 in twelve (12) iterations of SECQPE. The
reported residual variance is 16.9/296 = 0.0571, which approximates the variance of the
(hopefully) white-noise residuals of time-series A. The absolute values for H1 and H2 are
almost identical to those found in Box and Jenkins (1976) which is consistent with the fact
that they also used an auto-regressive (AR) noise model. In addition, no significant
auto-correlation of the residuals (our time-series A) was detected, confirming that the
estimation should be unbiased.
The static gain (i.e., the first-order partial derivative of how U1 affects Y) using the truncated
impulse response G is -2.941 +/- 1.02 which is close to the steady-state gain reported in
TSE-GFD-IMF of (-0.53-0.37-0.51)/(1-0.57-0.01) = -3.357 by setting the backwards shift
operator (z^-1) to unity (1.0) where the dead-times are identical to three (3) time-periods.
Although there is over a 10% difference between the two static gain estimates, this is not
uncommon when fitting its value from passive or happenstance data which may include some
form of feedback (closed-loop interactions) as opposed to a well-designed, open/closed-loop
PRBS/GBNS input/dither signal (DSDP-CLE-IMF).
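The two static gain figures quoted above can be cross-checked directly from the coefficient values reported in this document (a minimal sketch):

```python
# FIR estimate of the static gain: sum of the non-zero impulse weights G3..G8.
fir_gain = sum([-0.534, -0.667, -0.861, -0.496, -0.260, -0.123])

# Transfer-function estimate from TSE-GFD-IMF: set the backwards-shift
# operator z^-1 to unity in the quotient of rational polynomials.
tf_gain = (-0.53 - 0.37 - 0.51) / (1 - 0.57 - 0.01)

print(round(fir_gain, 3), round(tf_gain, 3))  # -2.941 -3.357
```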
In summary, we have highlighted the application of finite impulse response estimation (FIRE)
using the industrial gas furnace data (Series J) from Box and Jenkins (1976) for both the
deterministic and stochastic terms. The model was formulated in IMPL and solved successfully
using its SECQPE and SORVE and can also be used to estimate static gains of the system
which would be useful in further steady-state process optimization (active) and/or process
monitoring (passive) applications.
References
Box, G.E.P., Jenkins, G.M., “Time-series analysis: forecasting and control”, revised edition,
Holden Day, Oakland, CA, 389-400 and Series J. (1976).
MacGregor, J.F., and Fogal, D.T., “Closed-loop identification: the role of the noise model and
prefilters”, Journal of Process Control, 5, 163-171, (1995).
Shreesha, C., Gudi, R.D., “Analysis of pre-filter based closed-loop control-relevant identification
methodologies”, Canadian Journal of Chemical Engineering, 82, (2004).
Kelly, J.D., "Production modeling for multimodal operations", Chemical Engineering Progress,
February, 44, (2004).
Kelly, J.D., "The unit-operation-stock superstructure (UOSS) and the quantity-logic-quality
paradigm (QLQP) for production scheduling in the process industries", In: MISTA 2005
Conference Proceedings, 327, (2005).
Qin, S.J., “An overview of subspace identification”, Computers and Chemical Engineering, 30,
1502-1513, (2006).
Kelly, J.D., Zyngier, D., "A new and improved MILP formulation to optimize observability,
redundancy and precision for sensor network problems", American Institute of Chemical
Engineering Journal, 54, 1282, (2008).
Schoukens, J., Rolain, Y., Vandersteen, G., Pintelon, R., “User friendly Box-Jenkins
identification using nonparametric noise models”, 50th IEEE Conference on Decision and
Control and European Control Conference (CDC-ECC), Orlando, Florida, USA, December, (2011).
IAL, “Time series estimation of gas furnace data industrial modeling framework (TSE-GFD-
IMF)”, Slideshare, August, 2014.
IAL, “Advanced process control (APC) industrial modeling framework in the Julia programming
language (APC-IMF-Julia)”, Slideshare, October, 2014.
IAL, “Dither signal design problem for closed-loop estimation industrial modeling framework
(DSDP-CLE-IMF)”, Slideshare, December, 2014.
Appendix A – FIRE-GFD-IMF.IML File
i M P l (c)
Copyright and Property of i n d u s t r I A L g o r i t h m s LLC.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Calculation Data (Parameters)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sCalc,@sValue
START,0.0
BEGIN,7.0
END,296.0
PERIOD,1.0
SE,1.0 !0.0571 != 16.9/296
LRGBND,1d+2
gbnd,1d+2
hbnd,1d+2
&sCalc,@sValue
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Chronological Data (Periods)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
@rPastTHD,@rFutureTHD,@rTPD
START,END,PERIOD
@rPastTHD,@rFutureTHD,@rTPD
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Constant Data (Parameters)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
&sData,@sValue
u1,-0.052
,0.057
,0.235
,0.396
,0.43
,0.498
,0.518
,0.405
,0.184
,-0.123
,-0.531
,-0.998
,-1.364
,-1.463
,-1.245
,-0.757
,-0.418
,-0.136
,0.145
,0.492
,0.828
,0.923
,0.932
,0.948
,1.044
,1.32
,1.832
,2.033
,1.991
,1.923
,1.889
,1.824
,1.665
,1.322
,0.847
,0.417
,0.172
,0.145
,0.388
,0.702
,1.017
,1.466
,2.727
,2.891
,2.869
,2.54