Industrial Modeling Service (“Smart Modeling”) (IMS-IMPL)
“Better Industrial Models and Data for Better Design, Planning, Scheduling,
Optimization, Control, Monitoring and Accounting Applications”
Industrial Algorithms LLC (IAL)
www.industrialgorithms.com
December 2014
Our Industrial Modeling Service (IMS) involves several important (but rarely implemented) methods to significantly improve and advance your existing models and data, and can be considered “Smart Modeling” for the process industries. Since it is well-known that good decision-making requires good models and data, IMS is ideally suited to support this continuous-improvement endeavour. IMS is specifically designed either to co-exist with your existing design, planning, scheduling, etc. applications, or to carry these same models and data seamlessly into our Industrial Modeling and Programming Language (IMPL) to create new value-added applications. The following techniques form the basis of our IMS offering.
Steady-State Detection of Phenomenological Variables to Assess Process/Production Stability
IMPL defines phenomenological variables as flows, holdups and yields (quantities); setups, startups, switchovers and shutdowns (logics); and densities, components, properties and conditions (qualities). IMPL’s steady-state detection (SSD) algorithm (Kelly and Hedengren, 2013) is useful to determine whether a process, production or plant is steady or stationary, and provides an assessment of whether the system has negligible accumulation as well as of its ability to be regulated and manoeuvred. Once the system is stationary, steady-state data reconciliation can be used to detect and identify faulty instrumentation and/or leaks. Conversely, if a process is rarely at steady-state, this is a possible indication of poor control and/or inadequate disturbance rejection. The data required to perform SSD are a set (20 to 50 tags) of key process, production, plant or phenomenological variables (KPV’s) sampled at typically one-minute time intervals (IMPL-DataAnalysis). Furthermore, if steady-state models need to be calibrated or regressed, then not using truly steady-state data will result in biased parameter estimates due to the presence of auto-correlation, i.e., the residual errors are not independently and identically distributed.
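To make the idea concrete, the sketch below implements a simplified, generic window-based steady-state test: a signal is flagged steady when the drift fitted over a sliding window is small relative to the noise level estimated from successive differences. This is only an illustration of the concept, not IMPL’s SSD algorithm, and the window size and threshold are assumed values.

```python
# A minimal, generic sketch of window-based steady-state detection; NOT the
# SSD algorithm of Kelly and Hedengren (2013). Window and threshold are assumptions.
import numpy as np

def steady_state_flags(y, window=30, threshold=1.0):
    """Return a boolean array: True where the signal looks stationary."""
    y = np.asarray(y, dtype=float)
    t = np.arange(window, dtype=float)
    flags = np.zeros(len(y), dtype=bool)
    for i in range(window, len(y) + 1):
        seg = y[i - window:i]
        slope = np.polyfit(t, seg, 1)[0]
        # Noise estimated from mean-squared successive differences, which is
        # insensitive to a slow drift within the window.
        sigma = np.sqrt(np.mean(np.diff(seg) ** 2) / 2.0)
        # Steady if the total drift across the window is within the noise band.
        flags[i - 1] = abs(slope) * window <= threshold * sigma
    return flags

# Example: noisy signal that ramps for the first half, then holds steady.
rng = np.random.default_rng(0)
y = np.concatenate([np.linspace(0.0, 5.0, 200), np.full(200, 5.0)])
y += rng.normal(scale=0.05, size=y.size)
print(steady_state_flags(y).mean())  # fraction of samples flagged steady
```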
Data Reconciliation of Flows, Holdups and Yields to Identify Gross-Errors/Outliers/Defects
As mentioned, once the system is detected to be at steady-state, IMPL’s data reconciliation solver (Kelly, 2004) can be used to identify flow, holdup and yield gross-errors or anomalies using relatively simple quantity or material balances. Other phenomenological variables such as densities and conditions can also be included, but this requires extra nonlinear quantity and quality balances. If a defect or outlier is flagged, then the field meter or analyzer needs to be re-calibrated, etc.; it is not uncommon to find many instruments with significant measurement biases or drifts that need to be addressed or accounted for in some manner (APA-IMF, APA-FP-IMF and APA-OP-IMF).
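The sketch below shows the textbook weighted-least-squares form of linear data reconciliation on an invented two-node flowsheet, with standardized adjustments used to screen for a gross error. It illustrates the generic technique only and is not IMPL’s solver; the flows, meter readings and standard deviations are made up.

```python
# A minimal sketch of steady-state linear data reconciliation with gross-error
# screening via standardized adjustments (not IMPL's solver; data are invented).
import numpy as np

# Flowsheet: F1 -> [node1] -> F2 -> [node2] -> F3, so F1 = F2 and F2 = F3.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])          # balance constraints A x = 0
y = np.array([100.0, 101.0, 110.0])       # raw meter readings (F3 reads high)
S = np.diag([1.0, 1.0, 1.0]) ** 2         # measurement covariance

# Reconciled estimates: x_hat = y - S A' (A S A')^-1 A y (projection onto A x = 0).
K = S @ A.T @ np.linalg.inv(A @ S @ A.T)
x_hat = y - (K @ (A @ y))
print("reconciled flows:", x_hat)          # now satisfies both balances

# Standardized adjustments flag suspect meters (|z| > ~2-3 suggests a gross error).
adj_cov = K @ A @ S                        # covariance of the adjustments y - x_hat
z = (y - x_hat) / np.sqrt(np.diag(adj_cov))
print("standardized adjustments:", z)      # largest |z| points at the F3 meter
```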
Composition Tracking of Feedstocks and Intermediate Materials to Trace their Amounts
One of the most difficult and misunderstood aspects of applying process models off-line or on-line is the lack of feed and intermediate material compositions. Implementing IMPL’s composition tracking application (Kelly et al., 2005) essentially performs a dynamic or time-varying numerical integration of all holdups or inventories over time to track/trace the relative amounts of each material before it enters or feeds the process (APT-IMF). Pseudo-components, micro-cuts or hypotheticals used in rigorous process simulators all require this data in order to properly predict the process’s output quantity and quality variables. Without feed composition tracking, it is very unlikely that useful predictions from process models will result, even with the best intentions.
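The sketch below illustrates the underlying idea for a single well-mixed tank: numerically integrate the holdup and component balances so that the mixed composition of material leaving the tank is known at every time step. It is only an illustration of the concept behind composition tracking; the tank, flows and compositions are invented.

```python
# A minimal sketch of feed composition tracking for one well-mixed tank
# (illustrative only; not IMPL's composition tracking application).
import numpy as np

dt = 1.0                                   # time step (e.g., minutes)
V = 500.0                                  # initial holdup
c = np.array([0.80, 0.20])                 # initial composition (2 components)

# Per time step: (inlet flow, inlet composition, outlet flow).
steps = [(10.0, np.array([0.50, 0.50]), 5.0)] * 60

for f_in, c_in, f_out in steps:
    # Component holdup balance: d(V*c)/dt = f_in*c_in - f_out*c (well-mixed).
    m = V * c + (f_in * c_in - f_out * c) * dt
    V = V + (f_in - f_out) * dt
    c = m / V                              # tracked composition of the holdup

print("final holdup:", V)                  # 500 + 60*(10-5) = 800
print("tracked composition:", c)           # drifts from 80/20 toward 50/50
```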
Data Regression of Model Coefficients with Densities, Properties, Components and Conditions
Not only are feed and intermediate material compositions necessary, but most process models also require certain process parameters or coefficients as input. To provide these, IMPL’s data regression solver (TSE-GFD-IMF) can be used with actual process data to estimate or fit the coefficient values correctly. Typically these parameters are heat or mass transfer coefficients, catalyst activities, etc., and they require the nonlinear regression techniques which IMPL employs (APM-IMF).
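As a concrete illustration of the kind of fit involved, the sketch below regresses a heat-transfer coefficient from (synthetic) plant data by nonlinear least squares. The toy exchanger model, the numbers and the use of scipy are all assumptions for illustration; IMPL’s solver works on full flowsheet models, not this single equation.

```python
# A minimal sketch of nonlinear regression of a process coefficient
# (a heat-transfer coefficient U), using synthetic data. Illustrative only.
import numpy as np
from scipy.optimize import least_squares

A_ht, cp, T_amb = 12.0, 4.18, 20.0         # exchanger area, heat capacity, ambient

def t_out_model(U, m_dot, T_in):
    # Outlet temperature of a stream cooling toward ambient (NTU-style model).
    return T_amb + (T_in - T_amb) * np.exp(-U * A_ht / (m_dot * cp))

# Synthetic "plant" data: true U = 0.85, noisy temperature measurements.
rng = np.random.default_rng(1)
m_dot = rng.uniform(5.0, 20.0, size=40)
T_in = rng.uniform(60.0, 90.0, size=40)
T_out = t_out_model(0.85, m_dot, T_in) + rng.normal(scale=0.3, size=40)

fit = least_squares(lambda U: t_out_model(U[0], m_dot, T_in) - T_out, x0=[0.5])
print("estimated U:", fit.x[0])            # should recover ~0.85
```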
Excess-Model Regression of Existing Models to Extend their Accuracy and Precision
In some cases it is not possible, or extremely difficult, to update or re-calibrate an existing model’s coefficients, which unfortunately inhibits its ability to accurately and precisely predict the necessary process variables. Excess-model regression (XMR-IM) is a simple technique that IMPL implements to retrofit or revitalize existing models and improve their predictability. XMR takes the existing model’s predicted values as input and uses other related process variables to extend, enhance or augment the model with this information. An effective industrial example of this approach can be found in motor gasoline blending, where a nonlinear or non-ideal blend law with fixed parameters (the Ethyl, DuPont or Mobil Transformation Method) is extended by fitting extra parameters, typically called “bonuses”, using the component recipes as regressors or explanatory variables (APE-MGB-IMF).
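The sketch below shows the core XMR idea on invented data: leave the existing model untouched and regress the residuals between measured and predicted values against related variables the base model lacks, then add the fitted excess back onto the base prediction. Plain ordinary least squares stands in for IMPL’s regression machinery.

```python
# A minimal sketch of excess/X-model regression (XMR): base model + fitted
# residual ("excess") model. Base model, regressors and data are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

base_pred = 10.0 + 5.0 * x1                # existing (fixed) model's predictions
measured = 10.0 + 5.0 * x1 + 2.0 * x2 - 3.0 * x1 * x2 + rng.normal(0, 0.1, n)

# Excess model: residual ~ b0 + b1*x2 + b2*x1*x2 (effects the base model misses).
X = np.column_stack([np.ones(n), x2, x1 * x2])
excess = measured - base_pred
beta, *_ = np.linalg.lstsq(X, excess, rcond=None)

corrected = base_pred + X @ beta           # augmented prediction = base + excess
print("excess coefficients:", beta)        # ~[0, 2, -3]
print("RMSE before:", np.sqrt(np.mean((measured - base_pred) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((measured - corrected) ** 2)))
```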
Design of Experiments for Open/Closed-Loop Dithering to Estimate Better Models
Regressing industrial models using passive or happenstance data may not yield data rich enough to estimate good or useful models, especially when feedback is omnipresent in the data. To improve this situation, IMPL’s unique dither signal design problem (DSDP-CLE-IMF) can easily be employed to determine experimental trials that can be run or executed on the system to significantly improve the quality of the industrial data. After the dither signal trials have been applied to excite or stimulate the actual process or production, better regressed models will result. In addition, this method can also be used to verify actual first-order derivatives taken directly from the production or plant, where good optimization requires good derivatives.
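For orientation, the sketch below generates a pseudo-random binary sequence (PRBS), a common generic excitation signal for open/closed-loop identification. IMPL’s DSDP solves an optimization to design the dither moves; this simple shift-register PRBS is only a stand-in, and the register length, switching (hold) time and amplitude are assumptions.

```python
# A minimal sketch of a PRBS dither signal from a linear feedback shift
# register (LFSR); a generic excitation design, not IMPL's DSDP solution.
import numpy as np

def prbs(n_bits, register=7, taps=(7, 6), hold=5, amplitude=1.0):
    """Maximum-length PRBS (x^7 + x^6 + 1 is a primitive polynomial)."""
    state = [1] * register                 # non-zero seed
    bits = []
    for _ in range(n_bits):
        bits.append(state[-1])
        new = state[taps[0] - 1] ^ state[taps[1] - 1]   # XOR feedback
        state = [new] + state[:-1]
    signal = amplitude * (2.0 * np.array(bits) - 1.0)   # map {0,1} -> {-a,+a}
    return np.repeat(signal, hold)         # hold each level for several samples

dither = prbs(127)                          # one full period of a 7-bit PRBS
print(len(dither), dither[:10])             # 635 samples, values in {-1, +1}
```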
In summary, IMS should be considered a vital part of increasing your ability to extract more value out of your existing industrial models and data. It should also be emphasized that these methods should be maintained and applied on a regular basis for the continuous improvement and sustainability of the applications (Kelly and Zyngier, 2008) in any industrial environment. In addition, these techniques can be implemented before the installation of any new application or extension in order to provide a benchmark or reference-point for its expected future profit and/or performance benefits.
And finally, the methodology outlined here is consistent with the recent concepts of Smart Manufacturing, Industry 4.0 and the Smart Plant, where Christofides et al. (2007) state in their conclusion (a) the following requirement: “the development of easy-to-use software that makes system modeling and control routine and easy-to-incorporate in the chemical engineering curriculum, as well as in an industrial environment”. IMS (Smart Modeling) is, in our opinion, a step in this direction.
Please contact Alkis Vazacopoulos (alkis@industrialgorithms.com) to obtain a quote for IMS and IMPL’s
development and deployment licences as well as special pricing for IBM’s CPLEX LP, QP and MILP solvers
which are tightly integrated with IMPL to solve industrially significant discrete and nonlinear types of
problems.
References
Kelly, J.D., "Techniques for solving industrial nonlinear data reconciliation problems",
Computers and Chemical Engineering, 2837, (2004).
Kelly, J.D., Mann, J.L., Schulz, F.G., "Improve accuracy of tracing production qualities using
successive reconciliation", Hydrocarbon Processing, April, (2005).
Christofides, P.D., Davis, J.F., El-Farra, N.H., Clark, D., Harris, K.R.D., Gipson, J.N., “Smart plant operations: vision, progress and challenges”, AIChE Journal, 2734-2741, (2007).
Kelly, J.D., Zyngier, D., "Continuously improve planning and scheduling models with parameter
feedback", FOCAPO 2008, July, (2008).
Kelly, J.D., Hedengren, J.D., "A steady-state detection (SSD) algorithm to detect non-stationary drifts in processes", Journal of Process Control, 23, 326, (2013).
IAL, “Advanced production accounting industrial modeling framework (APA-IMF)”, Slideshare, July, 2013.
IAL, “Advanced property tracking/tracing industrial modeling framework (APT-IMF)”, Slideshare, July, 2013.
IAL, “Advanced process monitoring industrial modeling framework (APM-IMF)”, Slideshare, July, 2013.
IAL, “Advanced production accounting of a flotation plant industrial modeling framework (APA-FP-IMF)”, Slideshare, August, 2014.
IAL, “Advanced production accounting of an olefins plant industrial modeling framework (APA-OP-IMF)”, Slideshare, August, 2014.
IAL, “Time series estimation of gas furnace data industrial modeling framework (TSE-GFD-IMF)”, Slideshare, August, 2014.
IAL, “Data analysis by checking, clustering and componentizing in IMPL (IMPL-DataAnalysis)”, Slideshare, September, 2014.
IAL, “Advanced parameter estimation for motor gasoline blending (MGB) industrial modeling framework (APE-MGB-IMF)”, Slideshare, November, 2014.
IAL, “Excess/x-model regression to extend the accuracy and precision of existing industrial models (XMR-IM)”, Slideshare, November, 2014.
IAL, “Dither signal design problem for closed-loop estimation industrial modeling framework (DSDP-CLE-IMF)”, Slideshare, December, 2014.
http://en.wikipedia.org/wiki/Industry_4.0, accessed December, 2014.