Chemical Engineering and Processing 44 (2005) 785–795
ANN based estimator for distillation—inferential control
Vijander Singh∗, Indra Gupta, H.O. Gupta
Electrical Engineering Department, Indian Institute of Technology Roorkee, Roorkee, Uttaranchal 247667, India
Received 1 September 2003; received in revised form 11 February 2004; accepted 11 August 2004
Available online 19 November 2004
Abstract
Typical production objectives in a distillation process require the delivery of products whose compositions meet certain specifications. The distillation control system, therefore, must hold product compositions as near the set points as possible in the face of upsets. A distillation column is generally subjected to disturbances in the feed, and control of product quality is often achieved by maintaining a suitable tray temperature near its set point. Secondary measurements are used to adjust the values of the manipulated variables, as the controlled variables are not easily measured or not economically viable to measure (inferential control).
In the present paper, an artificial neural network (ANN) based estimator of the distillate composition is proposed. Nowadays, with the advent of digital computers, the demand of the time is to amalgamate the control of various variables to achieve the best results in optimum time. It is therefore required to monitor all the desired variables and perform the control action (feed forward, feed back and inferential) as per the algorithm adopted. The developed estimator is tested and the results are compared. The comparison shows that the predictions made by the neural network are in good agreement with the results of simulation.
© 2004 Elsevier B.V. All rights reserved.
Keywords: Inferential control; Distillation control system; Artificial neural network
1. Introduction
The distillation control system must hold product composition as near the set point(s) as possible in the face of upsets. The disturbances are generally in the feed. The control is difficult because the product quality cannot be measured economically on line: the instrumentation is either very expensive and/or measurement lags and sampling delays make it impossible to design an effective control system. A solution to this problem is the use of secondary measurements in conjunction with a mathematical model of the process to estimate the product quality.
An estimator predicts product quality from a linear com-
bination of process input and output measurements. The con-
trol strategy is to use selected measurements of both process
inputs and outputs to estimate the effect of measured and
unmeasured disturbances on the product quality, and then
∗ Corresponding author. Tel.: +91 1332 284294; fax: +91 1332 285231.
E-mail address: vijaydee@iitr.ernet.in (V. Singh).
to use a standard control system to adjust the control effort
so as to maintain the product quality at the desired level.
This strategy reduces approximately to that of a feed forward
control system when there are no measurements of process
outputs. Application of the estimator to a simulated multi-
component distillation column shows that the composition
control achieved with an estimator based on temperature,
reflux and steam flow measurements is comparable to that
achieved with instantaneous composition measurements.
The estimated composition may be used in a control
scheme to determine valve position directly, or it may be
used to manipulate the set point of a temperature controller
as in parallel cascade control. This is the notion behind infer-
ential control developed by Joseph and Brosilow [5] (1978).
The inferential control scheme uses measurements of sec-
ondary outputs, in this instance, selected tray temperatures,
and manipulated variables to estimate the effect of unmea-
sured disturbances in the feed on product quality. The es-
timated product compositions are then used in a scheme to
achieve improved composition control. Use of large digital
computers for distillation calculations was not investigated
up to 1958, although the high speed of computation seemed
to offer economies and present the opportunity of making cal-
culations not otherwise possible. Amundson and Pontinen [1]
in 1958, introduced the use of digital computers to solve the
distillation column problem. For general multi-component mixtures the coefficients also depend in a highly non-linear fashion on the compositions; thus, the solution becomes difficult. The solution obtained should be available for comparison and should be accurate. This is made possible with the help of large digital computers.
Choe and Luyben [2] in 1987 took up a rigorous dynamic model of the distillation column. Most dynamic models assume two simplifications, namely negligible vapor holdup and constant pressure, but their paper demonstrated that these assumptions lead to erroneous predictions of dynamic responses when the column pressure is high (i.e. greater than 10 atmospheres) or low (i.e. vacuum columns). In 1990, Rovaglio
et al. [3] solved the distillation column problem with the
help of rigorous model. Rigorous model is reliable for prac-
tical purposes. An industrial example was taken to show
practical implementation and real economic value of feed
forward control. Feed forward control action reduces the
inherent error when feedback control structure is used to
infer composition. When process dead times are large, load upsets are frequent and high quality is required, feedback control cannot serve the purpose alone; feed forward control is then required to evaluate the proper values of the manipulated variables so as to cancel the effects of input variations.
The control of many industrial processes is difficult be-
cause online measurement of product quality is compli-
cated. This is due to the non-existence of measurement
technology. Weber and Brosilow in 1972 [4] cited one so-
lution to this problem by using secondary measurements
in conjunction with a mathematical model of the process
to estimate product quality. The method includes proce-
dures for selecting the available output measurement to
get an estimator, which is relatively insensitive to model-
ing error and measurement noise. The estimator developed
for control of multi-component distillation column is based
on temperature, reflux and steam flow measurements. The
control achieved with the estimator is comparable to that
achieved with instantaneous composition measurements and
is far superior to composition control achieved by maintain-
ing a constant temperature on any single stage of the col-
umn.
Weber and Brosilow [4] designed an estimator in three steps (a toy numerical sketch of the idea follows the list):
(1) The selection of the appropriate measurements from
those available.
(2) The inversion of the process model so as to obtain an
estimate of the unmeasured process disturbances from
the measurements.
(3) Application of the process model so as to map the esti-
mated and measured process inputs into the estimate of
product quality.
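As a toy illustration of this idea (not the authors' design procedure), the sketch below fits a linear inferential estimator by least squares from simulated samples and then maps the current secondary measurements to a quality estimate; all names and the random stand-in data are hypothetical.

```python
# Illustrative sketch (not the authors' code): a linear inferential estimator in the
# spirit of Weber and Brosilow [4], fitted by least squares from simulated samples.
# Variable names and data below are hypothetical placeholders.
import numpy as np

def fit_linear_estimator(secondary, quality):
    """Fit quality ~ A @ secondary + b from historical/simulated samples.

    secondary : (n_samples, n_meas)  tray temperatures, reflux, steam flow, ...
    quality   : (n_samples, n_qual)  product compositions (not measurable online)
    """
    X = np.hstack([secondary, np.ones((secondary.shape[0], 1))])  # add bias column
    theta, *_ = np.linalg.lstsq(X, quality, rcond=None)           # least-squares fit
    return theta

def estimate_quality(theta, secondary_now):
    """Map the current secondary measurements to an estimate of product quality."""
    return np.append(secondary_now, 1.0) @ theta

# Example with random stand-in data (5 secondary measurements, 2 quality variables)
rng = np.random.default_rng(0)
secondary = rng.normal(size=(200, 5))
quality = secondary @ rng.normal(size=(5, 2)) + 0.01 * rng.normal(size=(200, 2))
theta = fit_linear_estimator(secondary, quality)
print(estimate_quality(theta, secondary[0]))
```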
Finally, this model was tested for its validity on a 16-stage distillation column. More important is to develop algorithms for selecting the most appropriate subset of the available process output measurements. Joseph and Brosilow
[5] in 1978, presented a method for designing an estimator
to infer unmeasurable product qualities from secondary mea-
surements. The secondary measurements are selected so as
to minimize the number of such measurements required to
obtain an accurate estimate. The application of design proce-
dures to design a static inferential control system to control
product composition is described. Then the dynamic struc-
ture of linear inferential control system term is discussed.
Also the rigorous methods for the design of sub optimal dy-
namic estimators are discussed.
In 1991 and 1992, Marmol and Luyben [6,7] presented
an inferential model based control of multi-component batch
distillation. The model used is described in the paper and two
approaches were explored to estimate the distillate composi-
tion: a rigorous steady state estimator and a quasi-dynamic
non-linear estimator. The models developed provide good
estimation of the distillate composition using only one tem-
perature measurement. Bhagat in 1990 [8] briefly discussed neural networks. Two examples were taken to demonstrate their practical application; these involved CSTRs. In
the first one, the change in concentration of outlet stream
with the changes in inlet stream concentration was studied.
The second example involved the identification of degree of
mixing in a reactor or vessel.
In 1994, Morris et al. [9] examined the contribution that
various network methodologies can make to the process mod-
eling and control toolbox. Feed forward networks with sig-
moidal activation functions, radial bases function networks
and auto associative networks were reviewed and studied us-
ing data from industrial processes. Finally, the concept of
dynamic networks was introduced with an example of non-
linear predictive control. MacMurray and Himmelblau [13] in 1995 described the modeling of a packed distillation column with an artificial neural network (ANN) and provide an example of complex modeling. The change in the sign of the gain
was observed under various operating conditions [13]. Ou
and Rhinehart [14] demonstrated a parallel model structure for general non-linear model predictive control. The model comprises a group of sub-models, each providing a prediction of one process output at one selected future point in time. A neural network is used for each sub-model, and the prediction model is termed a grouped neural network (GNN). The
work demonstrates implementation of grouped neural net-
work model predictive control (GNNMPC) on a non-linear,
multivariable, constrained pilot scale distillation unit [14].
Tamura and Tateishi [15] have discussed the capabilities
of a neural network with a finite number of hidden units and
shown with the support of mathematical proof that a four-
layered feed forward network is superior to a three-layered feed forward network in terms of the number of parameters
needed for the training data. Kung and Hwang [16] proposed algebraic projection analysis and provided an analytical solution for the optimal hidden unit size and learning rate of back propagation neural networks. Murata et al. [17] have
investigated the problem of determining the optimal num-
ber of parameters in neural network from statistical point of
view. The network information criterion (NIC) proposed therein measures the relative merits of two models having the same structure but a different number of parameters, and concludes whether or not more neurons should be added to the network. Kano et al. [18] presented a control scheme to
control the product composition in a multi-component dis-
tillation column. The distillate and bottom compositions are
estimated from online measured process variables. The inferential models for estimating product compositions are constructed using dynamic partial least squares (PLS) regression, on the basis of simulated time-series data. From the detailed dynamic simulation results, it is found that the cascade control system based on the proposed dynamic PLS model works much better than the usual tray temperature control system.
Kano et al. [19] proposed a new inferential control scheme
termed as “Predictive Inferential Control”. In predictive in-
ferential control system, future compositions predicted from
online measured process variables are controlled instead of
the estimates of current compositions. The key concept is to
realize the feed back control with a feed forward effect by
the use of inherent nature of a distillation column.
An approach to fault detection is described by Brydon et
al. [20] which uses neural network pattern classifiers trained
using data from a rigorous differential equation based simula-
tion of a pilot plant column. Two case studies were presented, both considering only plant data. For two classes of process data, a neural network and a K-means classifier both produced excellent diagnoses. For an additional three classes of plant operation, the neural network again provided accurate classifications, while the K-means classifier failed to categorize the data [20]. Sbarbaro et al. [21] presented the traditional approach
to include multi-dimensional information into conventional
control systems and proposed a new structure based on pat-
tern recognition. Artificial neural networks and finite state machines are used as a framework for designing the control system. Bakshi and Stephanopoulos [22] derived a methodol-
ogy for pattern based supervisory control and fault diagnosis,
based on multi-scale extraction of trends from process data.
An explicit mapping is learned between the features extracted
at multiple scales, and the corresponding process conditions
using the technique of induction by decision trees.
Taking advantage of a technique developed by Kolmogorov, Kurkova [23] provided a direct proof of the universal approximation capabilities of perceptron-type networks with two hidden layers. Lippmann [24] demonstrated the computational
power of different neural net models and the effectiveness
of simple error correction training procedures. Single and
multi layer perceptrons, which can be used for pattern clas-
sification, are described as well as Kohonen’s feature map
algorithm, which can be used for clustering or as a vector
quantizer.
2. Simulation algorithm
The realistic distillation column [12] consists of a non-ideal column with NC components, non-equimolal overflow, and inefficient trays. In the present paper the following assumptions are made for developing the model.
(1) Liquid on the tray is perfectly mixed and incompressible.
(2) Tray vapor holdups are negligible.
(3) Dynamics of the condenser and the reboiler are neglected.
(4) Vapor and liquid are in thermal equilibrium but not in
phase equilibrium. The departure from phase equilibrium
is described by Murphree vapor efficiency.
Under these assumptions, the steady state operation of each module is described by the following equations, commonly referred to as the MESH equations [MESH = material balance equations, efficiency relations, summation equations, and heat (enthalpy) balance equations]. Here, the stage number i takes integer values from 1 to NT.
$$L_{i+1} + V_{i-1} - L_i - V_i = 0 \quad \text{(material balance equations)} \quad (1)$$

$$y_i - y_{i-1} = \eta_{ij}\left[y_i^{*}(x_i, T_i, p_i) - y_{i-1}\right] \quad \text{(stage efficiency relations)} \quad (2)$$

where

$$y_i = \frac{v_i}{V_i} \quad \text{and} \quad x_i = \frac{l_i}{L_i}$$

$$L_i = \sum_{j=1}^{NC} l_{ij} \quad \text{(summation equations)} \quad (3)$$

$$V_i = \sum_{j=1}^{NC} v_{ij} \quad (4)$$

$$L_{i+1}h_{i+1} + V_{i-1}H_{i-1} - L_i h_i - V_i H_i = 0 \quad \text{(enthalpy balance equation)} \quad (5)$$
Eqs. (1)–(5) are used to represent an equilibrium condenser
and an equilibrium reboiler by the removal of variables corre-
sponding to a liquid stream above the condenser and a vapor
stream below a reboiler, and the inclusion of condenser and
reboiler heat duties Qc and QB in the respective enthalpy
balance equations.
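To make Eqs. (1) and (5) concrete, the following minimal sketch evaluates the material and enthalpy balance residuals for one stage. It is only an illustration under the stated assumptions (symbols follow the nomenclature), not the authors' simulation code, and the numerical values in the example are arbitrary.

```python
# Minimal sketch: residuals of the stage material and enthalpy balances, Eqs. (1) and (5).
# Hypothetical helper, not the authors' C program; symbols follow the nomenclature.
def stage_residuals(L_above, V_below, L_i, V_i, h_above, H_below, h_i, H_i):
    """Return (material balance residual, enthalpy balance residual) for stage i."""
    material = L_above + V_below - L_i - V_i                                   # Eq. (1)
    enthalpy = L_above * h_above + V_below * H_below - L_i * h_i - V_i * H_i   # Eq. (5)
    return material, enthalpy

# Both residuals should be (close to) zero at steady state, e.g.:
print(stage_residuals(100.0, 150.0, 100.0, 150.0, 12.0e3, 30.0e3, 12.0e3, 30.0e3))
```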
For the simulation of a distillation column, quantities [10] such as feed composition, flow rate, temperature and pressure, column pressure and stage efficiencies are assumed to be specified.
The basic steps of the algorithm reflecting the above assumptions for the simplified multi-component distillation column are listed below (a toy sketch of the Euler update in step 7 follows the list):
Step 1: Input data for column size, components, physical
properties, feeds, and initial conditions (liquid composi-
tions, liquid flow rates and temperatures on all trays).
Step 2: Calculate initial tray holdups and the pressure pro-
file.
Step 3: Calculate the temperatures and vapor compositions
from the vapor–liquid equilibrium data.
Step 4: Calculate liquid and vapor enthalpies.
Step 5: Calculate vapor flow rates on all trays, starting in
the column base, using the algebraic form of the energy
equations.
Step 6: Evaluate all derivatives of the component continuity
equations for all components on all trays plus the reflux
drum and the column base.
Step 7: Integrate all ODEs (using Euler’s method).
Step 8: Calculate new total liquid holdups from the sum of
the component holdups. Then calculate the new liquid mole
fraction from the component holdups and the total holdups.
Step 9: Calculate new liquid flow rates from the new total
holdups for all trays.
Step 10: Go to step 3 for the next step.
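Step 7 uses Euler's method for the integration. The toy sketch below (not the authors' C program; all numerical values are arbitrary placeholders) shows the explicit Euler update for a single perfectly mixed tray holding one component with an assumed constant relative volatility, just to make the update rule concrete.

```python
# Toy illustration of the explicit Euler step used in step 7, for a single perfectly
# mixed tray and one component: d(M*x)/dt = L_in*x_in + V_below*y_below - L*x - V*y.
# This is NOT the authors' C program; all numerical values are made-up placeholders.

ALPHA = 2.0                                    # assumed constant relative volatility

def vle_y(x, alpha=ALPHA):
    """Vapor composition in equilibrium with liquid x (constant relative volatility)."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def euler_step(x, dt, M=10.0, L_in=100.0, x_in=0.45, V_below=120.0, y_below=0.55,
               L=100.0, V=120.0):
    """One explicit Euler update of the tray liquid mole fraction (holdup M held fixed)."""
    dMx_dt = L_in * x_in + V_below * y_below - L * x - V * vle_y(x)
    return x + dt * dMx_dt / M

x = 0.40
for _ in range(2000):                          # march forward to (approximate) steady state
    x = euler_step(x, dt=0.01)
print(round(x, 4))
```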
The case under study is a multi-component system (Fig. 1)
(five components) with constant relative volatility throughout the column and 100% efficient trays, i.e. the vapor leaving is in equilibrium with the liquid on the tray. A
single feed stream is fed as saturated liquid on to feed tray NF
(NF = 5). The feed flow rate is F (kmols/h) and composition is
z (mole fraction). The overhead vapor is totally condensed in
a condenser and flows in to the reflux drum, whose holdup of
liquid is MD (kmols). The contents of the drum are assumed to be perfectly mixed with composition xD (mole fraction). The liquid in the drum is at its bubble point. Reflux is pumped
back to the top tray NT (NT = 15) of the column at a rate R
(kmols/h). Overhead distillate product is removed at a rate D
(kmols/h). At the base of the column, liquid bottoms product
is removed at rate B (kmols/h) and with a composition xB
(mole fraction). The vapor boilup is generated in the reboiler
at rate V (kmols/h).
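For reference, the column specification described above can be collected in a small configuration structure. NT, NF and the number of components are taken from the text; the remaining numerical values are not given at this point in the paper and are therefore arbitrary placeholders.

```python
# The column specification described above, collected in one place. NT, NF and NC are
# taken from the text; flow rates and compositions are NOT given numerically here, so
# the values below are arbitrary placeholders.
column_spec = {
    "NT": 15,            # number of trays
    "NF": 5,             # feed tray (saturated liquid feed)
    "NC": 5,             # number of components
    "F": 100.0,          # feed rate, kmol/h            (placeholder)
    "z": [0.2] * 5,      # feed composition, mole frac  (placeholder)
    "R": 150.0,          # reflux rate, kmol/h          (placeholder)
    "D": 50.0,           # distillate rate, kmol/h      (placeholder)
    "B": 50.0,           # bottoms rate, kmol/h         (placeholder)
    "V": 200.0,          # vapor boilup, kmol/h         (placeholder)
}
```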
The algorithm presented is translated into a program using
C language for the distillation column discussed. The main
objective of the above simulation program is to generate pat-
terns. In order to vary the reboiler duty QB (kJ/h) for obtaining various patterns, the following equation is used:

$$Q_B = Q_B + \mathrm{ran}(i) \quad (6)$$

where ran(i) is a random number generated using the standard C library random number routines (seeded with srand()), scaled so that it ranges from 0.013 to 0.881. The change in the reboiler duty changes the temperature profile of the column, and with this changed temperature profile we get a changed distillate quality. In this way, 130 patterns of temperature profiles and the respective distillate compositions are generated. These are then used for training and testing a neural network model.
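A rough sketch of this pattern-generation procedure is given below. The authors' simulator is a C program; here run_column() is a purely hypothetical stand-in that returns fake values, so only the structure of the loop and the use of Eq. (6) should be read from this example.

```python
# Sketch of the pattern-generation idea in Eq. (6): perturb the reboiler duty with a
# random term, re-run the column calculation, and record the resulting pattern.
import random

def run_column(reboiler_duty):
    """Purely fake stand-in for the authors' C simulation program: returns a made-up
    temperature profile (15 trays + reflux drum + reboiler) and distillate composition."""
    temps = [80.0 + 2.0 * i + 1.0e-6 * reboiler_duty for i in range(17)]
    x_d = [0.5 - 1.0e-8 * reboiler_duty, 0.3, 0.2, 0.0, 0.0]   # 5 liquid mole fractions
    y_d = [0.6 - 1.0e-8 * reboiler_duty, 0.3, 0.1, 0.0, 0.0]   # 5 vapor mole fractions
    return temps, x_d + y_d

def generate_patterns(qb_nominal, n_patterns=130, seed=1):
    random.seed(seed)                              # plays the role of srand() in the C program
    patterns, qb = [], qb_nominal
    for _ in range(n_patterns):
        qb += random.uniform(0.013, 0.881)         # Eq. (6): QB = QB + ran(i)
        patterns.append(run_column(qb))            # (temperature profile, distillate composition)
    return patterns

data = generate_patterns(qb_nominal=1.0e6)
print(len(data), len(data[0][0]), len(data[0][1]))  # 130 patterns, 17 temperatures, 10 outputs
```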
3. Artificial neural network modeling
3.1. Neuron model
A neuron model consists of a processing element [11] with
synaptic input connections and a single output. The signal
flow of neuron inputs xni is considered to be unidirectional, as indicated by arrows, as is the neuron's output signal flow. A
general neuron symbol is shown in Fig. 2.
The neuron’s output signal is given by the following rela-
tionship
$$o = f(\mathbf{w}^{t}\mathbf{x}_n) \quad \text{or} \quad o = f\left(\sum_{i=1}^{n} w_i x_{ni}\right) \quad (7)$$

where $\mathbf{w}$ is the weight vector defined as

$$\mathbf{w} \triangleq [\,w_1 \; w_2 \; \cdots \; w_n\,]^{t}$$

and $\mathbf{x}_n$ is the input vector

$$\mathbf{x}_n \triangleq [\,x_{n1} \; x_{n2} \; \cdots \; x_{nn}\,]^{t}$$

The function $f(\mathbf{w}^{t}\mathbf{x}_n)$ is often referred to as an activation function. The variable net is defined as the scalar product of the weight vector and the input vector:

$$\mathrm{net} \triangleq \mathbf{w}^{t}\mathbf{x}_n \quad (8)$$
Using Eq. (8) in Eq. (7), we get
o = f(net) (9)
It is observed from Eq. (7) that the neuron as a processing node performs the operation of summation of its weighted inputs. Subsequently, it performs the non-linear operation f(net)
through its activation function. Typical activation functions
used are
$$f(\mathrm{net}) \triangleq \frac{2}{1 + \exp(-\lambda\,\mathrm{net})} - 1 \quad (10)$$

and

$$f(\mathrm{net}) \triangleq \begin{cases} +1, & \mathrm{net} > 0 \\ -1, & \mathrm{net} < 0 \end{cases} \quad (11)$$

where $\lambda > 0$ in Eq. (10) is proportional to the neuron gain determining the steepness of the continuous function $f(\mathrm{net})$ near $\mathrm{net} = 0$.

By shifting and scaling the bipolar activation functions defined by Eqs. (10) and (11), unipolar activation functions can be obtained as

$$f(\mathrm{net}) \triangleq \frac{1}{1 + \exp(-\lambda\,\mathrm{net})} \quad (12)$$
Fig. 1. Schematic diagram of distillation column with instrumentation and control component.
Fig. 2. General symbol of neuron.
Fig. 3. Single layer network with continuous perceptrons.
and
$$f(\mathrm{net}) \triangleq \begin{cases} +1, & \mathrm{net} > 0 \\ 0, & \mathrm{net} < 0 \end{cases} \quad (13)$$
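For illustration, the four activation functions of Eqs. (10)–(13) can be written directly in code. The sketch below assumes a gain parameter lambda_ and is not taken from the authors' implementation.

```python
# Illustrative implementations of the activation functions of Eqs. (10)-(13); lambda_
# is the neuron gain (lambda > 0). Not taken from the authors' code.
import math

def bipolar_continuous(net, lambda_=1.0):
    """Eq. (10): ranges over (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-lambda_ * net)) - 1.0

def bipolar_binary(net):
    """Eq. (11): hard-limiting sign function."""
    return 1.0 if net > 0 else -1.0

def unipolar_continuous(net, lambda_=1.0):
    """Eq. (12): ranges over (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lambda_ * net))

def unipolar_binary(net):
    """Eq. (13): hard-limiting step function."""
    return 1.0 if net > 0 else 0.0

print(bipolar_continuous(0.5), unipolar_continuous(0.5))
```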
3.2. Delta learning rule for multi-perceptron layer
The back propagation training algorithm allows experiential acquisition of input-output mapping knowledge within multilayer networks. Input patterns are submitted sequentially during back propagation training. If a pattern is submitted and its classification or association is determined to be erroneous, the synaptic weights as well as the thresholds are adjusted so that the current least mean square classification error is reduced. The comparison of target and actual values of the input-output mapping, and adjustment if needed, continue until all mapping examples from the training set are learned within an acceptable overall error.
During the association or classification phase the trained neural network itself operates in a feed forward manner. However, the weight adjustment enforced by the learning rule propagates backwards from the output layer through the hidden layers towards the input layer. To formulate the learning algorithm, the simple continuous perceptron network involving K neurons will be considered, as shown in Fig. 3:

$$\mathbf{o} = \Gamma[\mathbf{W}\mathbf{y}_n] \quad (14)$$
where the input and output vectors and the weight matrix are

$$\mathbf{y}_n = \begin{bmatrix} y_{n1} \\ y_{n2} \\ \vdots \\ y_{nJ} \end{bmatrix}, \quad
\mathbf{o} = \begin{bmatrix} o_1 \\ o_2 \\ \vdots \\ o_K \end{bmatrix}, \quad
\mathbf{W} = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1J} \\ w_{21} & w_{22} & \cdots & w_{2J} \\ \vdots & \vdots & \ddots & \vdots \\ w_{K1} & w_{K2} & \cdots & w_{KJ} \end{bmatrix}$$

and the non-linear diagonal operator $\Gamma[\cdot]$ is

$$\Gamma[\cdot] = \begin{bmatrix} f(\cdot) & 0 & \cdots & 0 \\ 0 & f(\cdot) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(\cdot) \end{bmatrix}$$

and the desired output vector is

$$\mathbf{d} \triangleq \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_K \end{bmatrix}$$

$$\mathbf{net} = \mathbf{W}\mathbf{y}_n \quad (15)$$
The generalized error expression includes all squared errors at outputs k = 1, 2, ..., K:

$$E_p = \frac{1}{2}\sum_{k=1}^{K}(d_{pk} - o_{pk})^2 = \frac{1}{2}\,\lVert \mathbf{d}_p - \mathbf{o}_p \rVert^2 \quad (16)$$

for a specific pattern p, where p = 1, 2, ..., P.
Let us assume that a gradient descent search is performed to reduce the error E_p through the adjustment of weights. The individual weight adjustment is computed as follows:

$$\Delta w_{kj} = -\eta\,\frac{\partial E}{\partial w_{kj}} \quad (17)$$

where the error E is defined in Eq. (16). For each node in layer k, k = 1, 2, ..., K, we can write, using Eq. (15),

$$\mathrm{net}_k = \sum_{j=1}^{J} w_{kj}\,y_{nj} \quad (18)$$

and further, using Eq. (14), the neuron's output is

$$o_k = f(\mathrm{net}_k) \quad (19)$$
The error signal term δ, called delta, produced by the kth neuron is defined for this layer as follows:

$$\delta_{ok} \triangleq -\frac{\partial E}{\partial(\mathrm{net}_k)} \quad (20)$$
It is obvious that the gradient component ∂E/∂w_kj depends only on net_k of a single neuron, since the error at the output of the kth neuron is contributed to only by the weights w_kj, for j = 1, 2, ..., J, for a fixed k value. Thus, using the chain rule we may write

$$\frac{\partial E}{\partial w_{kj}} = \frac{\partial E}{\partial(\mathrm{net}_k)} \times \frac{\partial(\mathrm{net}_k)}{\partial w_{kj}} \quad (21)$$
The second term of the product in Eq. (21) is the derivative of the sum of products of weights and patterns as in Eq. (18). Since the values of y_nj, for j = 1, 2, ..., J, are constant for a fixed pattern at the input, we obtain

$$\frac{\partial(\mathrm{net}_k)}{\partial w_{kj}} = y_{nj} \quad (22)$$
Combining Eqs. (20) and (22) leads to the following form for Eq. (21):

$$\frac{\partial E}{\partial w_{kj}} = -\delta_{ok}\,y_{nj} \quad (23)$$
The weight adjustment formula Eq. (17) can be rewritten
using the error signal δok term as below:
$$\Delta w_{kj} = \eta\,\delta_{ok}\,y_{nj} \quad \text{for } k = 1, 2, \ldots, K \text{ and } j = 1, 2, \ldots, J \quad (24)$$
The expression in Eq. (24) represents the general formula for delta training/learning weight adjustments for a single layer network. It can be noted that Δw_kj in Eq. (24) does not depend upon the form of the activation function.
To adapt the weights, the error signal term delta δ_ok introduced in Eq. (20) needs to be computed for the kth continuous perceptron. E is a composite function of net_k; therefore, it can be expressed for k = 1, 2, ..., K as

$$E(\mathrm{net}_k) = E[o_k(\mathrm{net}_k)] \quad (25)$$
Thus, from Eq. (20),

$$\delta_{ok} = -\frac{\partial E}{\partial o_k} \times \frac{\partial o_k}{\partial(\mathrm{net}_k)} \quad (26)$$
Denoting the second term in Eq. (26) as the derivative of the activation function,

$$f'_k(\mathrm{net}_k) \triangleq \frac{\partial o_k}{\partial(\mathrm{net}_k)} \quad (27)$$
and noting that

$$\frac{\partial E}{\partial o_k} = -(d_k - o_k) \quad (28)$$

allows rewriting formula Eq. (26) as follows:

$$\delta_{ok} = (d_k - o_k)\,f'_k(\mathrm{net}_k) \quad \text{for } k = 1, 2, \ldots, K \quad (29)$$
Eq. (29) shows that the error signal term δ_ok depicts the local error (d_k − o_k) at the output of the kth neuron scaled by the multiplicative factor f'_k(net_k), which is the slope of the activation function computed at the excitation value

$$\mathrm{net}_k = f^{-1}(o_k) \quad (30)$$

The final formula for the weight adjustment of the single layer network can now be obtained from Eq. (24) as

$$\Delta w_{kj} = \eta\,(d_k - o_k)\,f'_k(\mathrm{net}_k)\,y_{nj} \quad (31)$$
The updated weight values become

$$w'_{kj} = w_{kj} + \Delta w_{kj} \quad \text{for } k = 1, 2, \ldots, K, \; j = 1, 2, \ldots, J \quad (32)$$
Formulas Eqs. (31) and (32) refer to any form of non-linear and differentiable activation function f(net) of the neuron. Let us examine the following two commonly used delta training rules for the two selected typical activation functions f(net). For the unipolar continuous activation function defined in Eq. (12) (with λ = 1), f'(net) can be obtained as

$$f'(\mathrm{net}) = \frac{\exp(-\mathrm{net})}{[1 + \exp(-\mathrm{net})]^2} \quad (33)$$
This can be rewritten as

$$f'(\mathrm{net}) = \frac{1}{1 + \exp(-\mathrm{net})} \times \frac{1 + \exp(-\mathrm{net}) - 1}{1 + \exp(-\mathrm{net})} \quad (34)$$
Again using Eq. (12) in Eq. (34), we get

$$f'(\mathrm{net}) = o(1 - o) \quad (35)$$
The delta value of Eq. (29) for this activation function can be rewritten as

$$\delta_{ok} = (d_k - o_k)\,o_k(1 - o_k) \quad (36)$$
Summarizing the above discussion, the updated individual weights under the delta learning rule can be expressed for k = 1, 2, ..., K and j = 1, 2, ..., J as follows:

$$w'_{kj} = w_{kj} + \eta\,(d_k - o_k)\,o_k(1 - o_k)\,y_{nj} \quad (37)$$

for

$$o_k = \frac{1}{1 + \exp(-\mathrm{net}_k)}$$
The updated weights under the delta learning rule for the single layer network can be expressed using vector notation as

$$\mathbf{W}' = \mathbf{W} + \eta\,\boldsymbol{\delta}_o\,\mathbf{y}_n^{\,t} \quad (38)$$
where the error signal vector δ_o is defined as the column vector consisting of the individual error signal terms:

$$\boldsymbol{\delta}_o \triangleq \begin{bmatrix} \delta_{o1} \\ \delta_{o2} \\ \vdots \\ \delta_{oK} \end{bmatrix}$$
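Eqs. (36)–(38) translate directly into a single weight-update step. The following NumPy sketch is a minimal illustration of that update for one training pattern with the unipolar sigmoid of Eq. (12) (λ = 1); it is not the authors' implementation.

```python
# Minimal sketch of one delta-rule update for a single layer, Eqs. (36)-(38), with the
# unipolar sigmoid of Eq. (12) (lambda = 1). Illustrative only, not the authors' code.
import numpy as np

def delta_rule_step(W, y_n, d, eta=0.1):
    """Return the updated weight matrix W' = W + eta * delta_o * y_n^T."""
    o = 1.0 / (1.0 + np.exp(-W @ y_n))          # forward pass, Eqs. (15) and (12)
    delta_o = (d - o) * o * (1.0 - o)           # Eq. (36)
    return W + eta * np.outer(delta_o, y_n)     # Eq. (38)

# Tiny usage example with random data (K = 2 outputs, J = 3 inputs)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))
y_n, d = np.array([0.2, 0.7, 1.0]), np.array([0.1, 0.9])
for _ in range(1000):
    W = delta_rule_step(W, y_n, d)
print(np.round(1.0 / (1.0 + np.exp(-W @ y_n)), 3))   # approaches the target d
```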
4. Proposed ANN based estimator for distillation
column
The ANN model has forward flowing information in the predictive mode and back-propagated error corrections in the learning mode. Such nets are usually organized into layers of neurons; connections are made between neurons of adjacent layers. A neuron is connected such that it receives signals from each neuron in the immediately preceding layer. An input layer receives the input. One or more intermediate layers (also called hidden layers) lie between the input layer and the output layer, which communicates results externally. The ANN based estimator developed for a distillation column assumes a mixture of NC components, NT trays, a reboiler at the bottom of the column and a condenser at the top. An estimator is pro-
posed to estimate the distillate quality from the temperature
profile of the column. We have NT + 2 temperature inputs for
the NT trays, a reflux drum, and the reboiler. The output con-
sists of NC liquid compositions and NC vapor compositions
i.e. 2 × NC outputs. The estimator contains NT + 2 input neu-
rons and 2 × NC output neurons. An input vector of NT + 2
elements (temperature profile of the column) is given to the
input layer of the network. Weights are initially randomized. When the net undergoes training, the errors between the results of the output neurons and the corresponding desired target values are propagated backward through the net.
The backward propagation of error signals is used to up-
date the connection weights. Finally, a network is achieved
which can predict the output for any input vector. The input
neurons transform the input signal and transmit the resulting value to the hidden layer. Each neuron in the hidden layers individually sums the signals it receives together with the weighted signal from the bias neuron and transmits the result to each of the neurons in the next layer. Ultimately, the neurons in the output layer receive weighted signals from neurons in the penultimate layer, sum the signals and emit the transformed sums as the output of the net. The output vector is composed of 2 × NC composition outputs of the distillate.

Fig. 4. Proposed neural network for the distillation column.
The temperature profile of the trays in the distillation column is highly non-linear, as the system is made complex by the five-component mixture. To incorporate these non-linearities in the ANN model of the patterns, three hidden layers are used in the proposed estimator. With three hidden layers acceptable accuracy is achieved; increasing the number of hidden layers beyond three brings no further improvement in accuracy, while fewer than three hidden layers give unacceptable accuracy. The trained network with three hidden layers is then used to estimate the distillate composition for any given temperature profile of the distillation column.
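To make the architecture concrete, the sketch below builds a fully connected network with 17 inputs, three hidden layers of 35 sigmoidal neurons and 10 outputs, and trains it by plain back propagation with the delta rule of Section 3.2. It is a simplified, illustrative stand-in for the proposed estimator (the authors report a [17, 10, 35, 35, 35] configuration trained in C over many passes); the training data used here are random placeholders rather than the simulated column patterns.

```python
# Simplified stand-in for the proposed estimator: 17 inputs (temperature profile),
# three hidden layers of 35 sigmoidal neurons, 10 outputs (distillate compositions),
# trained by plain back propagation. Training data below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
sizes = [17, 35, 35, 35, 10]                       # input, three hidden layers, output
W = [rng.normal(scale=0.1, size=(sizes[i + 1], sizes[i] + 1)) for i in range(4)]  # +1 for bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return the activations of every layer (input first, output last)."""
    acts = [x]
    for Wl in W:
        acts.append(sigmoid(Wl @ np.append(acts[-1], 1.0)))   # bias input fixed at 1
    return acts

def train_step(x, d, eta=0.1):
    """One back-propagation update for a single pattern (delta rule, Eqs. (36)-(38))."""
    acts = forward(x)
    err = 0.5 * np.sum((d - acts[-1]) ** 2)                    # squared error before update
    delta = (d - acts[-1]) * acts[-1] * (1.0 - acts[-1])       # output layer delta
    for l in reversed(range(len(W))):
        grad = np.outer(delta, np.append(acts[l], 1.0))
        if l > 0:                                              # propagate delta backwards
            delta = (W[l][:, :-1].T @ delta) * acts[l] * (1.0 - acts[l])
        W[l] += eta * grad                                     # weight update
    return err

# Placeholder training data: 110 random "temperature profiles" and "compositions"
X = rng.uniform(size=(110, 17))
D = rng.uniform(size=(110, 10))
for epoch in range(50):
    total = sum(train_step(x, d) for x, d in zip(X, D))
print("sum of per-pattern squared errors after training:", round(total, 3))
```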
5. Comparison of results
The proposed artificial neural network based estimator is tested on a 15-tray column with a reboiler and a reflux drum and a five-component mixture. The 20 temperature profiles and the corresponding distillate compositions used for testing are ones not used in training. The results obtained with the ANN based estimator are compared with the results of simulation obtained using the semi-rigorous model. The results for the distillate compositions are shown in Figs. 5 and 6. As seen from Figs. 5 and 6, the estimated compositions from the proposed ANN based estimator are close to those obtained from the semi-rigorous model. In Figs. 5 and 6, the liquid composition xd5 and the vapor compositions yd4 and yd5, respectively, are zero in the distillate product.

Fig. 5. Liquid composition of components with reboiler temperature.
Fig. 6. Vapor composition of components with reboiler temperature.
6. Discussions and conclusions
The distillation control system must hold the distillate composition as near the set point(s) as possible in the face of upsets. The disturbances are generally in the flow and composition of the feed. The control of the product composition is difficult because the product quality cannot be measured economically on line: the instrumentation is either infeasible and/or measurement lags and sampling delays make it impossible to design an effective control system. This problem is solved by using secondary
measurements in conjunction with a mathematical model of
the process to estimate the product quality. An artificial neural
network based estimator developed here can be used for the
inferential control of distillation column. The developed es-
timator control strategy with minimal computational burden
and high speed can be proposed for the distillation control
system, which is generally non-linear in nature.
For the simulation study, a 15-tray column with a reboiler and a reflux drum and a five-component mixture is considered for testing the estimator. One hundred and thirty input-output patterns are generated using the simulation program and are used for training and testing the developed estimator of Fig. 4; some of the generated patterns are reserved for testing. The temperature profile taken as the input vector consists of 17 temperature entries: the 15 trays, the reboiler and the reflux drum. The output vector of the estimator is constituted by five liquid and five vapor distillate compositions for the mixture considered, so the estimator's input vector has 17 elements and its output vector has 10 elements. A 5-layered network model is taken with a [17, 10, 35, 35, 35] configuration, i.e. 17 input neurons, 10 output neurons and 35 neurons in each of the three hidden layers. The network is trained using 110 patterns and 20 test inputs are given for testing. Training the estimator took about 60,000 × 110 iterations and about 45 h.
It is observed on a 1.2 GHz Intel Pentium-IV processor that the developed simulation program takes 0.16 s for its execution while the developed ANN based estimator takes 0.05 s for the same process; thus, a total time saving of 68.75% can be achieved using the ANN model, without sacrificing accuracy.
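For reference, the quoted saving follows directly from the two execution times:

$$\frac{0.16 - 0.05}{0.16} = \frac{0.11}{0.16} = 0.6875 = 68.75\%$$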
Appendix A. Nomenclature
f(net) activation function
Γ [•] a non-linear diagonal operator
δo error signal vector
δok error signal term produced by kth neuron
δyj error signal term produced by jth neuron of hidden layer having output y
Δv weight increment for hidden layer of neurons
Δw weight increment for input layer of neurons
f′y column vector for hidden layers
ηiv vaporization efficiency
η learning parameter (positive constant)
∇E error gradient vector
ηij Murphree stage efficiency
B bottom product rate (kmols/h)
d desired output vector
dp desired output vector for pth pattern
di desired output from ith neuron
dpk desired output from kth neuron for pth pattern
D distillate product rate (kmols/h)
Ep least squared error for pth pattern
Fi total feed flow rate into ith tray (kmols/h)
hF total molar enthalpy of feed (kJ/kmol)
hfij component feed enthalpy (kJ/kmol)
hi total molar enthalpy of liquid mixture (kJ/kmol)
Hi total molar enthalpy of vapor (kJ/kmol)
hlij component liquid enthalpy (kJ/kmol)
HNi,j hidden neuron for ith hidden layer and jth node
Hvij component vapor enthalpy (kJ/kmol)
INB input neuron for reboiler temperature
IND input neuron for reflux drum temperature
INI input neuron for ith tray temperature
K, L, M number of neurons in three hidden layers respec-
tively
Kij equilibrium constant
Li total liquid flow rate leaving the tray (kmols/h)
lij component liquid flow rate leaving the ith tray
(kmols/h)
MB liquid molar holdup in reboiler (kmols)
MD liquid molar holdup in reflux drum (kmols)
Mi liquid molar holdup on ith tray (kmols)
NC number of components
net scalar product of weight vector and input vector
netI scalar product of ith weight vector and input vector
NT total number of trays in distillation column
O output vector of neuron
Ok kth output of neurons processing node
ONi output neuron for ith output
QB reboiler heat duty (kJ/h)
QC condenser heat duty (kJ/h)
R reflux rate (kmols/h)
vn updated weights of hidden layer
vnij connection weights of ith node of one layer to jth
node of preceding layer
vn weight vector of hidden layer
V weight matrix of hidden layer
Vi total vapor flow rate from the tray (kmols/h)
vij component vapor flow rate from the tray (kmols/h)
w multiplicative weight vector
wi multiplicative weight for ith input
w updated weights of input layer
wij multiplicative weights for input to ith neuron from
jth input element
W weight matrix
x liquid composition of more volatile component
(mole fraction)
xFij component liquid composition of jth component in
feed (mole fraction)
xij liquid composition if jth component on ith tray
(mole fraction)
xn input vector to neuron
xni ith input to neuron
y vapor composition of more volatile component
(mole fraction)
y* equilibrium vapor composition of more volatile
component (mole fraction)
yij vapor composition of jth component on ith tray
(mole fraction)
yij* equilibrium vapor composition of jth component on ith tray (mole fraction)
yn input vector to neuron layer
References
[1] N.R. Amundson, A.J. Pontinen, Multicomponent distillation calcu-
lations on a large digital computer, Ind. Eng. Chem. 50 (5) (1958)
730–736.
[2] Y.-S. Choe, W.L. Luyben, Rigorous dynamic models of distillation
columns, Ind. Eng. Chem. Res. 26 (10) (1987) 2158–2161.
[3] M. Rovaglio, E. Ranzi, G. Biardi, M. Fontana, R. Domenichini,
Rigorous dynamic and feed forward control design for distillation
process, AIChE J. 36 (4) (1990) 576–586.
[4] R. Weber, C. Brosilow, The use of secondary measurements to im-
prove control, AIChE J. 18 (3) (1972) 614–627.
[5] B. Joseph, C.B. Brosilow, Inferential control of process. Part I:
Steady state analysis and design. Part 2: The structure and dy-
namics of inferential control systems. Part 3: Construction of
suboptimal dynamic estimators, AIChE J. 24 (3) (1978) 485–
509.
[6] E.Q. Marmol, W.L. Luyben, C. Georgakis, Application of an extended Luenberger observer to the control of multi-component batch distillation, Ind. Eng. Chem. Res. 30 (8) (1991) 1870–1880.
[7] E.Q. Marmol, W.L. Luyben, Inferential model based control of
multi-component batch distillation, Chem. Eng. Sci. 47 (1992) 887–
898.
[8] P. Bhagat, An introduction to neural nets, Chem. Eng. Prog. (1990)
55–60.
[9] A.J. Morris, G.A. Montague, M.J. Willis, Artificial neural networks:
studies in process modeling and control, Trans. I Chem. E 72 (Part
A) (1994) 3–19.
[10] W.L. Luyben, Process Modeling, Simulation and Control for Chemical Engineers, McGraw-Hill International Editions, Chemical Engineering Series.
[11] J.M. Zurada, Introduction to Artificial Neural Systems, Jaico Pub-
lishing House.
[12] P.B. Deshpande, Distillation Dynamics and Control, Instrument So-
ciety of America, Tata McGraw Hill Publishing Co. Ltd.
[13] J.C. MacMurray, D.M. Himmelblau, Modeling and control of a
packed distillation column using artificial neural networks, Comput.
Chem. Eng. 19 (10) (1995) 1088.
[14] J. Ou, R.R. Rhinehart, Grouped neural network model predictive
control, Control Eng. Pract. 11 (2003) 723–732.
[15] S. Tamura, M. Tateishi, Capabilities of a four layered feed forward
neural network: four layers versus three, IEEE Trans. Neural Net-
works 8 (2) (1997) 251–255.
[16] S.Y. Kung, J.N. Hwang, An Algebraic Projection Analysis for Op-
timal Hidden Units Size and Learning Rates in Back Propagation
Learning, Princeton University, Department of Electrical Engineer-
ing, Princeton, NJ 08544, U.S.A.
[17] N. Murata, S. Yoshizawa, S. Amari, Network information criterion-determining the number of hidden units for an artificial neural network model, IEEE Trans. Neural Networks 5 (6) (1994) 865–872.
[18] M. Kano, N. Showchaiya, S. Hasebe, I. Hashimoto, Inferential
control system of distillation composition using dynamic par-
tial least squares regression, J. Process Control 10 (2000) 157–
166.
[19] M. Kano, N. Showchaiya, S. Hasebe, I. Hashimoto, Inferential control of distillation composition: selection of model and control configuration, Control Eng. Pract. 11 (8) (2003) 927–933.
[20] D.A. Brydon, J.J. Cilliers, M.J. Willis, Classifying pilot plant dis-
tillation column faults using neural networks, Control Eng. Pract. 5
(10) (1997) 1373–1384.
[21] D. Sbarbaro, P. Espinoza, J. Araneda, A pattern based strategy for
using multidimensional sensors in process control, Comput. Chem.
Eng. 27 (2003) 1943.
[22] B.R. Bakshi, G. Stephanopoulos, Representation of process trends. IV. Induction of real-time patterns from operating data for diagnosis and supervisory control, Comput. Chem. Eng. 18 (4) (1994) 303–332.
[23] V. Kurkova, Kolmogorov’s Theorem and multi layer neural networks,
Neural Networks 5 (1992) 501–506.
[24] R.P. Lippmann, Neural Nets for Computing, Lincoln Laboratory, M.I.T., Lexington, MA 02173, U.S.A.

More Related Content

Similar to singh2005.pdf

Model-based Approach of Controller Design for a FOPTD System and its Real Tim...
Model-based Approach of Controller Design for a FOPTD System and its Real Tim...Model-based Approach of Controller Design for a FOPTD System and its Real Tim...
Model-based Approach of Controller Design for a FOPTD System and its Real Tim...IOSR Journals
 
Development of a PI Controller through an Ant Colony Optimization Algorithm A...
Development of a PI Controller through an Ant Colony Optimization Algorithm A...Development of a PI Controller through an Ant Colony Optimization Algorithm A...
Development of a PI Controller through an Ant Colony Optimization Algorithm A...LucasCarvalhoGonalve
 
Model Based Embedded Control System Design for Smart Home
Model Based Embedded Control System Design for Smart HomeModel Based Embedded Control System Design for Smart Home
Model Based Embedded Control System Design for Smart HomeIRJET Journal
 
Statistical process control
Statistical process controlStatistical process control
Statistical process controleSAT Journals
 
gonzales_wesley_ENGR3406_FINAL_PROJECT
gonzales_wesley_ENGR3406_FINAL_PROJECTgonzales_wesley_ENGR3406_FINAL_PROJECT
gonzales_wesley_ENGR3406_FINAL_PROJECTWesley Gonzales
 
Optimised control using Proportional-Integral-Derivative controller tuned usi...
Optimised control using Proportional-Integral-Derivative controller tuned usi...Optimised control using Proportional-Integral-Derivative controller tuned usi...
Optimised control using Proportional-Integral-Derivative controller tuned usi...IJECEIAES
 
Automated well test analysis ii using ‘well test auto’
Automated well test analysis ii using ‘well test auto’Automated well test analysis ii using ‘well test auto’
Automated well test analysis ii using ‘well test auto’Alexander Decker
 
4 combined gain scheduling and multimodel control of a reactive distillation ...
4 combined gain scheduling and multimodel control of a reactive distillation ...4 combined gain scheduling and multimodel control of a reactive distillation ...
4 combined gain scheduling and multimodel control of a reactive distillation ...nazir1988
 
Constrained discrete model predictive control of a greenhouse system temperature
Constrained discrete model predictive control of a greenhouse system temperatureConstrained discrete model predictive control of a greenhouse system temperature
Constrained discrete model predictive control of a greenhouse system temperatureIJECEIAES
 
Controlling a DC Motor through Lypaunov-like Functions and SAB Technique
Controlling a DC Motor through Lypaunov-like Functions and SAB TechniqueControlling a DC Motor through Lypaunov-like Functions and SAB Technique
Controlling a DC Motor through Lypaunov-like Functions and SAB TechniqueIJECEIAES
 
IMC Based Fractional Order Controller for Three Interacting Tank Process
IMC Based Fractional Order Controller for Three Interacting Tank ProcessIMC Based Fractional Order Controller for Three Interacting Tank Process
IMC Based Fractional Order Controller for Three Interacting Tank ProcessTELKOMNIKA JOURNAL
 
Performance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpcPerformance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpceSAT Publishing House
 
Performance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpcPerformance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpceSAT Publishing House
 
A novel auto-tuning method for fractional order PID controllers
A novel auto-tuning method for fractional order PID controllersA novel auto-tuning method for fractional order PID controllers
A novel auto-tuning method for fractional order PID controllersISA Interchange
 

Similar to singh2005.pdf (20)

12
1212
12
 
Model-based Approach of Controller Design for a FOPTD System and its Real Tim...
Model-based Approach of Controller Design for a FOPTD System and its Real Tim...Model-based Approach of Controller Design for a FOPTD System and its Real Tim...
Model-based Approach of Controller Design for a FOPTD System and its Real Tim...
 
Development of a PI Controller through an Ant Colony Optimization Algorithm A...
Development of a PI Controller through an Ant Colony Optimization Algorithm A...Development of a PI Controller through an Ant Colony Optimization Algorithm A...
Development of a PI Controller through an Ant Colony Optimization Algorithm A...
 
Model Based Embedded Control System Design for Smart Home
Model Based Embedded Control System Design for Smart HomeModel Based Embedded Control System Design for Smart Home
Model Based Embedded Control System Design for Smart Home
 
Statistical process control
Statistical process controlStatistical process control
Statistical process control
 
gonzales_wesley_ENGR3406_FINAL_PROJECT
gonzales_wesley_ENGR3406_FINAL_PROJECTgonzales_wesley_ENGR3406_FINAL_PROJECT
gonzales_wesley_ENGR3406_FINAL_PROJECT
 
Optimised control using Proportional-Integral-Derivative controller tuned usi...
Optimised control using Proportional-Integral-Derivative controller tuned usi...Optimised control using Proportional-Integral-Derivative controller tuned usi...
Optimised control using Proportional-Integral-Derivative controller tuned usi...
 
Automated well test analysis ii using ‘well test auto’
Automated well test analysis ii using ‘well test auto’Automated well test analysis ii using ‘well test auto’
Automated well test analysis ii using ‘well test auto’
 
4 combined gain scheduling and multimodel control of a reactive distillation ...
4 combined gain scheduling and multimodel control of a reactive distillation ...4 combined gain scheduling and multimodel control of a reactive distillation ...
4 combined gain scheduling and multimodel control of a reactive distillation ...
 
DEFINITIONS- CALIBRATION.pptx
DEFINITIONS- CALIBRATION.pptxDEFINITIONS- CALIBRATION.pptx
DEFINITIONS- CALIBRATION.pptx
 
Constrained discrete model predictive control of a greenhouse system temperature
Constrained discrete model predictive control of a greenhouse system temperatureConstrained discrete model predictive control of a greenhouse system temperature
Constrained discrete model predictive control of a greenhouse system temperature
 
Controlling a DC Motor through Lypaunov-like Functions and SAB Technique
Controlling a DC Motor through Lypaunov-like Functions and SAB TechniqueControlling a DC Motor through Lypaunov-like Functions and SAB Technique
Controlling a DC Motor through Lypaunov-like Functions and SAB Technique
 
At4201308314
At4201308314At4201308314
At4201308314
 
IMC Based Fractional Order Controller for Three Interacting Tank Process
IMC Based Fractional Order Controller for Three Interacting Tank ProcessIMC Based Fractional Order Controller for Three Interacting Tank Process
IMC Based Fractional Order Controller for Three Interacting Tank Process
 
Performance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpcPerformance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpc
 
Performance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpcPerformance analysis of a liquid column in a chemical plant by using mpc
Performance analysis of a liquid column in a chemical plant by using mpc
 
593176
593176593176
593176
 
A novel auto-tuning method for fractional order PID controllers
A novel auto-tuning method for fractional order PID controllersA novel auto-tuning method for fractional order PID controllers
A novel auto-tuning method for fractional order PID controllers
 
Spc
Spc  Spc
Spc
 
Statistical process control
Statistical process controlStatistical process control
Statistical process control
 

More from karitoIsa2

More from karitoIsa2 (8)

neves2020.pdf
neves2020.pdfneves2020.pdf
neves2020.pdf
 
seppur.2005.pdf
seppur.2005.pdfseppur.2005.pdf
seppur.2005.pdf
 
chen2016.pdf
chen2016.pdfchen2016.pdf
chen2016.pdf
 
kakkar2021.pdf
kakkar2021.pdfkakkar2021.pdf
kakkar2021.pdf
 
brito2016.pdf
brito2016.pdfbrito2016.pdf
brito2016.pdf
 
barba1985.pdf
barba1985.pdfbarba1985.pdf
barba1985.pdf
 
baratti1997.pdf
baratti1997.pdfbaratti1997.pdf
baratti1997.pdf
 
errico2013.pdf
errico2013.pdferrico2013.pdf
errico2013.pdf
 

Recently uploaded

Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...anilsa9823
 
Natural Polymer Based Nanomaterials
Natural Polymer Based NanomaterialsNatural Polymer Based Nanomaterials
Natural Polymer Based NanomaterialsAArockiyaNisha
 
A relative description on Sonoporation.pdf
A relative description on Sonoporation.pdfA relative description on Sonoporation.pdf
A relative description on Sonoporation.pdfnehabiju2046
 
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptxUnlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptxanandsmhk
 
Recombinant DNA technology (Immunological screening)
Recombinant DNA technology (Immunological screening)Recombinant DNA technology (Immunological screening)
Recombinant DNA technology (Immunological screening)PraveenaKalaiselvan1
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSarthak Sekhar Mondal
 
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...ssifa0344
 
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |aasikanpl
 
Chemistry 4th semester series (krishna).pdf
Chemistry 4th semester series (krishna).pdfChemistry 4th semester series (krishna).pdf
Chemistry 4th semester series (krishna).pdfSumit Kumar yadav
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​kaibalyasahoo82800
 
CALL ON ➥8923113531 🔝Call Girls Kesar Bagh Lucknow best Night Fun service 🪡
CALL ON ➥8923113531 🔝Call Girls Kesar Bagh Lucknow best Night Fun service  🪡CALL ON ➥8923113531 🔝Call Girls Kesar Bagh Lucknow best Night Fun service  🪡
CALL ON ➥8923113531 🔝Call Girls Kesar Bagh Lucknow best Night Fun service 🪡anilsa9823
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRDelhi Call girls
 
Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Nistarini College, Purulia (W.B) India
 
Artificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PArtificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PPRINCE C P
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhousejana861314
 
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxSOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxkessiyaTpeter
 
Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )aarthirajkumar25
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Patrick Diehl
 

Recently uploaded (20)

Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
 
Natural Polymer Based Nanomaterials
Natural Polymer Based NanomaterialsNatural Polymer Based Nanomaterials
Natural Polymer Based Nanomaterials
 
A relative description on Sonoporation.pdf
A relative description on Sonoporation.pdfA relative description on Sonoporation.pdf
A relative description on Sonoporation.pdf
 
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptxUnlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptx
The estimated composition may be used in a control scheme to determine valve position directly, or it may be used to manipulate the set point of a temperature controller as in parallel cascade control. This is the notion behind inferential control developed by Joseph and Brosilow [5] (1978). The inferential control scheme uses measurements of secondary outputs, in this instance selected tray temperatures, and manipulated variables to estimate the effect of unmeasured disturbances in the feed on product quality. The estimated product compositions are then used in a scheme to achieve improved composition control.
Use of large digital computers for distillation calculations was not investigated until 1958, although the high speed of computation seemed to offer economies and present the opportunity of making calculations not otherwise possible. Amundson and Pontinen [1], in 1958, introduced the use of digital computers to solve the distillation column problem. For general multi-component mixtures the coefficients depend in a highly non-linear fashion on compositions as well, and thus the solution becomes difficult. The solution obtained should be available for comparison and should be accurate. This is made possible with the help of a large digital computer.

Choe and Luyben [2], in 1987, took up a rigorous dynamic model of the distillation column. Most dynamic models assume two simplifications, namely negligible vapor holdup and constant pressure, but that paper demonstrated that these assumptions lead to erroneous predictions of dynamic responses when the column pressure is high (i.e. greater than 10 atmospheres) or low (i.e. vacuum columns). In 1990, Rovaglio et al. [3] solved the distillation column problem with the help of a rigorous model, which is reliable for practical purposes. An industrial example was taken to show the practical implementation and real economic value of feed forward control. Feed forward control action reduces the inherent error when a feedback control structure is used to infer composition. When process dead times are large, load upsets are frequent and high quality is required, feedback control cannot serve the purpose alone; feed forward control is then required to evaluate the proper values of the manipulated variables so as to cancel the effects of input variations.

The control of many industrial processes is difficult because online measurement of product quality is complicated, owing to the non-existence of suitable measurement technology. Weber and Brosilow in 1972 [4] cited one solution to this problem: using secondary measurements in conjunction with a mathematical model of the process to estimate product quality. The method includes procedures for selecting the available output measurements so as to obtain an estimator that is relatively insensitive to modeling error and measurement noise. The estimator developed for control of a multi-component distillation column is based on temperature, reflux and steam flow measurements. The control achieved with the estimator is comparable to that achieved with instantaneous composition measurements and is far superior to composition control achieved by maintaining a constant temperature on any single stage of the column. Weber and Brosilow [4] designed the estimator in three steps:

(1) The selection of the appropriate measurements from those available.
(2) The inversion of the process model so as to obtain an estimate of the unmeasured process disturbances from the measurements.
(3) Application of the process model so as to map the estimated and measured process inputs into the estimate of product quality.

Finally, this model was tested for its validity on a 16-stage distillation column. More important is to develop algorithms for selecting a subset of the available process output measurements that will be most appropriate. Joseph and Brosilow [5], in 1978, presented a method for designing an estimator to infer unmeasurable product qualities from secondary measurements.
The secondary measurements are selected so as to minimize the number of such measurements required to obtain an accurate estimate. The application of the design procedures to a static inferential control system for controlling product composition is described, the dynamic structure of linear inferential control systems is discussed, and rigorous methods for the design of suboptimal dynamic estimators are presented.

In 1991 and 1992, Marmol and Luyben [6,7] presented inferential model based control of multi-component batch distillation. The model used is described in the papers, and two approaches were explored to estimate the distillate composition: a rigorous steady state estimator and a quasi-dynamic non-linear estimator. The models developed provide good estimation of the distillate composition using only one temperature measurement. Bhagat in 1990 [8] discussed neural networks briefly; two examples involving CSTRs were taken to demonstrate their practical application. In the first, the change in concentration of the outlet stream with changes in inlet stream concentration was studied. The second example involved the identification of the degree of mixing in a reactor or vessel.

In 1994, Morris et al. [9] examined the contribution that various network methodologies can make to the process modeling and control toolbox. Feed forward networks with sigmoidal activation functions, radial basis function networks and auto-associative networks were reviewed and studied using data from industrial processes. Finally, the concept of dynamic networks was introduced with an example of non-linear predictive control. MacMurray and Himmelblau [13] described the modeling of a packed distillation column with an artificial neural network (ANN) and provide an example of complex modeling; a change in the sign of the gain was observed under various operating conditions [13]. Ou and Rhinehart [14] demonstrated a parallel model structure for general non-linear model predictive control. The model comprises a group of sub-models, each providing a prediction of one process output at one selected future point in time. A neural network is used for each sub-model, and the prediction model is termed a grouped neural network (GNN). The work demonstrates the implementation of grouped neural network model predictive control (GNNMPC) on a non-linear, multivariable, constrained pilot scale distillation unit [14]. Tamura and Tateishi [15] discussed the capabilities of a neural network with a finite number of hidden units and showed, with the support of a mathematical proof, that a four-layered feed forward network is superior to a three-layered feed forward network in terms of the number of parameters needed for the training data.
Kung and Hwang [16] proposed algebraic projection analysis and provided an analytical solution for the optimal hidden unit size and learning rate of back propagation neural networks. Murata et al. [17] investigated the problem of determining the optimal number of parameters in a neural network from a statistical point of view. The new information criterion (NIC) proposed therein measures the relative merits of two models having the same structure but a different number of parameters, and concludes whether more neurons should be added to the network or not. Kano et al. [18] presented a control scheme to control the product composition in a multi-component distillation column. The distillate and bottom compositions are estimated from online measured process variables. The inferential models for estimating product compositions are constructed using dynamic partial least squares (PLS) regression on the basis of simulated time series data. From the detailed dynamic simulation results, it is found that the cascade control system based on the proposed dynamic PLS model works much better than the usual tray temperature control system. Kano et al. [19] proposed a new inferential control scheme termed "predictive inferential control". In a predictive inferential control system, future compositions predicted from online measured process variables are controlled instead of estimates of the current compositions. The key concept is to realize feedback control with a feed forward effect by using the inherent nature of a distillation column.

An approach to fault detection is described by Brydon et al. [20], which uses neural network pattern classifiers trained using data from a rigorous differential-equation-based simulation of a pilot plant column. Two case studies were presented, both considering only plant data. For two classes of process data, a neural network and a K-means classifier both produced excellent diagnoses. For three additional classes of plant operation, a neural network again provided accurate classifications, while a K-means classifier failed to categorize the data [20]. Sbarbaro et al. [21] presented the traditional approach to including multi-dimensional information in conventional control systems and proposed a new structure based on pattern recognition, using artificial neural networks and finite state machines as a framework for designing the control system. Bakshi and Stephanopoulos [22] derived a methodology for pattern based supervisory control and fault diagnosis, based on multi-scale extraction of trends from process data. An explicit mapping is learned between the features extracted at multiple scales and the corresponding process conditions, using the technique of induction by decision trees.

Taking advantage of a technique developed by Kolmogorov, Kurkova [23] provided a direct proof of the universal approximation capabilities of perceptron type networks with two hidden layers. Lippmann [24] demonstrated the computational power of different neural net models and the effectiveness of simple error correction training procedures.
Single and multilayer perceptrons, which can be used for pattern classification, are described, as well as Kohonen's feature map algorithm, which can be used for clustering or as a vector quantizer.

2. Simulation algorithm

The realistic distillation column [12] consists of a non-ideal column with NC components, non-equimolal overflow, and inefficient trays. In the present paper the following assumptions are made for developing the model.

(1) Liquid on the tray is perfectly mixed and incompressible.
(2) Tray vapor holdups are negligible.
(3) Dynamics of the condenser and the reboiler are neglected.
(4) Vapor and liquid are in thermal equilibrium but not in phase equilibrium. The departure from phase equilibrium is described by the Murphree vapor efficiency.

Under these assumptions, the steady state operation of each module is described by the following equations, commonly referred to as the MESH equations (MESH = material balance equations, efficiency relations, summation equations, and heat (enthalpy) balance equations). Here, the stage number i takes integer values from 1 to NT.

$L_{i+1} + V_{i-1} - L_i - V_i = 0$   (material balance equations) (1)

$y_i - y_{i-1} = \eta_{ij}\,[\,y_i^{*}(x_i, T_i, p_i) - y_{i-1}\,]$   (stage efficiency relations) (2)

where $y_i = v_i/V_i$ and $x_i = l_i/L_i$.

$L_i = \sum_{j=1}^{NC} l_{ij}$   (summation equations) (3)

$V_i = \sum_{j=1}^{NC} v_{ij}$   (4)

$L_{i+1}h_{i+1} + V_{i-1}H_{i-1} - L_i h_i - V_i H_i = 0$   (enthalpy balance equation) (5)

Eqs. (1)–(5) are used to represent an equilibrium condenser and an equilibrium reboiler by the removal of the variables corresponding to a liquid stream above the condenser and a vapor stream below the reboiler, and the inclusion of the condenser and reboiler heat duties $Q_C$ and $Q_B$ in the respective enthalpy balance equations.
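To make the stage relations above concrete, the short sketch below evaluates the steady-state material, efficiency and enthalpy residuals of Eqs. (1), (2) and (5) for one interior stage, with the totals of Eqs. (3) and (4) obtained by summation. It is only an illustrative Python sketch under assumed data structures (the `Stage` fields and the argument names are hypothetical, not taken from the paper); the authors' actual program was written in C.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Stage:
    """Hypothetical per-stage data: component molar flows and molar enthalpies."""
    l: np.ndarray   # component liquid flows l_ij (kmol/h)
    v: np.ndarray   # component vapor flows v_ij (kmol/h)
    h: float        # liquid molar enthalpy h_i (kJ/kmol)
    H: float        # vapor molar enthalpy H_i (kJ/kmol)

def mesh_residuals(below: Stage, stage: Stage, above: Stage, eta: float, y_eq: np.ndarray):
    """Residuals of Eqs. (1), (2) and (5) for one interior stage (zero at steady state)."""
    L = lambda s: s.l.sum()          # Eq. (3): total liquid flow leaving a stage
    V = lambda s: s.v.sum()          # Eq. (4): total vapor flow leaving a stage
    y = stage.v / V(stage)           # vapor mole fractions on this stage
    y_in = below.v / V(below)        # vapor mole fractions entering from the stage below

    material = L(above) + V(below) - L(stage) - V(stage)           # Eq. (1)
    efficiency = y - y_in - eta * (y_eq - y_in)                    # Eq. (2), Murphree efficiency
    enthalpy = (L(above) * above.h + V(below) * below.H
                - L(stage) * stage.h - V(stage) * stage.H)         # Eq. (5)
    return material, efficiency, enthalpy
```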
For the simulation of a distillation column, quantities [10] such as feed composition, flow rate, temperature and pressure, column pressure, and stage efficiencies are assumed to be specified.

The basic steps of the algorithm reflecting the above assumptions for the simplified multi-component distillation column are:

Step 1: Input data for column size, components, physical properties, feeds, and initial conditions (liquid compositions, liquid flow rates and temperatures on all trays).
Step 2: Calculate initial tray holdups and the pressure profile.
Step 3: Calculate the temperatures and vapor compositions from the vapor–liquid equilibrium data.
Step 4: Calculate liquid and vapor enthalpies.
Step 5: Calculate vapor flow rates on all trays, starting at the column base, using the algebraic form of the energy equations.
Step 6: Evaluate all derivatives of the component continuity equations for all components on all trays plus the reflux drum and the column base.
Step 7: Integrate all ODEs (using Euler's method).
Step 8: Calculate new total liquid holdups from the sum of the component holdups. Then calculate the new liquid mole fractions from the component holdups and the total holdups.
Step 9: Calculate new liquid flow rates from the new total holdups for all trays.
Step 10: Go to step 3 for the next time step.

The case under study is a multi-component system (Fig. 1) of five components with constant relative volatility throughout the column and hundred percent efficient trays, i.e. the vapor leaving a tray is in equilibrium with the liquid on that tray. A single feed stream is fed as saturated liquid onto feed tray NF (NF = 5). The feed flow rate is F (kmols/h) and its composition is z (mole fraction). The overhead vapor is totally condensed in a condenser and flows into the reflux drum, whose liquid holdup is MD (kmols). The contents of the drum are assumed to be perfectly mixed with composition xD (mole fraction). The liquid in the drum is at its bubble point. Reflux is pumped back to the top tray NT (NT = 15) of the column at a rate R (kmols/h). Overhead distillate product is removed at a rate D (kmols/h). At the base of the column, liquid bottoms product is removed at a rate B (kmols/h) with a composition xB (mole fraction). The vapor boilup is generated in the reboiler at a rate V (kmols/h).

The algorithm presented is translated into a program in the C language for the distillation column discussed. The main objective of the simulation program is to generate patterns. In order to vary the reboiler duty QB (kJ/h) for obtaining various patterns, the following equation is used:

$Q_B = Q_B + \mathrm{ran}(i)$   (6)

where ran(i) is a random number generated using the library function srand(), ranging from 0.013 to 0.881. The change in the reboiler duty changes the temperature profile of the column, and with this changed temperature profile a changed distillate quality is obtained. In this way, 130 patterns of temperature profiles and the corresponding distillate compositions are generated. These are then used for training and testing a neural network model.
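The outer loop that turns the ten steps above into training patterns can be sketched as follows. This is a hedged Python illustration, not the authors' C program: `simulate_column` stands in for steps 1–10 (assumed to return the 17-element temperature profile and the 10 distillate compositions for a given reboiler duty), and the perturbation of Eq. (6) is mimicked with Python's random module rather than the C library routines.

```python
import random

def generate_patterns(simulate_column, qb_nominal, n_patterns=130, seed=0):
    """Generate (temperature profile, distillate composition) training patterns
    by repeatedly perturbing the reboiler duty as in Eq. (6)."""
    random.seed(seed)                              # plays the role of srand() in the C program
    patterns = []
    qb = qb_nominal
    for _ in range(n_patterns):
        qb = qb + random.uniform(0.013, 0.881)     # Eq. (6): QB = QB + ran(i)
        temps, distillate = simulate_column(qb)    # steps 1-10: run the column model
        patterns.append((temps, distillate))       # 17 temperatures -> 10 compositions
    return patterns
```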
3. Artificial neural network modeling

3.1. Neuron model

A neuron model consists of a processing element [11] with synaptic input connections and a single output. The signal flow of the neuron inputs xni is considered to be unidirectional, as indicated by arrows, as is the neuron's output signal flow. A general neuron symbol is shown in Fig. 2. The neuron's output signal is given by the relationship

$o = f(\mathbf{w}^{t}\mathbf{x}_n)$ or $o = f\!\left(\sum_{i=1}^{n} w_i x_{ni}\right)$   (7)

where $\mathbf{w}$ is the weight vector, defined as $\mathbf{w} = [w_1\; w_2\; \ldots\; w_n]^{t}$, and $\mathbf{x}_n$ is the input vector, $\mathbf{x}_n = [x_{n1}\; x_{n2}\; \cdots\; x_{nn}]^{t}$.

The function $f(\mathbf{w}^{t}\mathbf{x}_n)$ is often referred to as an activation function. The variable net is defined as the scalar product of the weight and input vectors:

$net = \mathbf{w}^{t}\mathbf{x}_n$   (8)

Using Eq. (8) in Eq. (7), we get

$o = f(net)$   (9)

It is observed from Eq. (7) that the neuron, as a processing node, performs the summation of its weighted inputs and subsequently performs the non-linear operation f(net) through its activation function. Typical activation functions used are

$f(net) = \dfrac{2}{1 + \exp(-\lambda\, net)} - 1$   (10)

and

$f(net) = \begin{cases} +1, & net \geq 0 \\ -1, & net < 0 \end{cases}$   (11)

where $\lambda > 0$ in Eq. (10) is proportional to the neuron gain, determining the steepness of the continuous function f(net) near net = 0.

By shifting and scaling the bipolar activation functions defined by Eqs. (10) and (11), the unipolar activation functions can be obtained as

$f(net) = \dfrac{1}{1 + \exp(-\lambda\, net)}$   (12)

and

$f(net) = \begin{cases} +1, & net \geq 0 \\ 0, & net < 0 \end{cases}$   (13)
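As a quick check of Eqs. (10)–(13), the bipolar and unipolar activation functions and the shift-and-scale relation between them can be written out directly. The snippet below is a small Python sketch of these definitions; the function names are ours, not the paper's.

```python
import numpy as np

def bipolar_continuous(net, lam=1.0):
    """Eq. (10): bipolar sigmoid, output in (-1, 1); lam sets the steepness near net = 0."""
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def bipolar_hard(net):
    """Eq. (11): bipolar hard limiter (sign function)."""
    return np.where(net >= 0.0, 1.0, -1.0)

def unipolar_continuous(net, lam=1.0):
    """Eq. (12): unipolar sigmoid, obtained by shifting and scaling Eq. (10) to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * net))

def unipolar_hard(net):
    """Eq. (13): unipolar hard limiter (step function)."""
    return np.where(net >= 0.0, 1.0, 0.0)

# Shift-and-scale relation between Eqs. (10) and (12):
net = np.linspace(-5.0, 5.0, 11)
assert np.allclose(unipolar_continuous(net), 0.5 * (bipolar_continuous(net) + 1.0))
```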
Fig. 1. Schematic diagram of distillation column with instrumentation and control components.
Fig. 2. General symbol of a neuron.

Fig. 3. Single layer network with continuous perceptrons.

3.2. Delta learning rule for the multilayer perceptron

The back propagation training algorithm allows experiential acquisition of input–output mapping knowledge within multilayer networks. Input patterns are submitted sequentially during back propagation training. If a pattern is submitted and its classification or association is determined to be erroneous, the synaptic weights as well as the thresholds are adjusted so that the current least mean square classification error is reduced. The comparison of target and actual outputs, and the adjustment if needed, continue until all mapping examples from the training set are learned within an acceptable overall error.

During the association or classification phase the trained neural network itself operates in a feed forward manner. However, the weight adjustment enforced by the learning rule propagates exactly backwards from the output layer through the hidden layers towards the input layer. To formulate the learning algorithm, the simple continuous perceptron network involving K neurons shown in Fig. 3 will be considered:

$\mathbf{o} = \Gamma[\mathbf{W}\mathbf{y}_n]$   (14)

where the input vector, output vector and weight matrix are

$\mathbf{y}_n = [y_{n1}\; y_{n2}\; \ldots\; y_{nJ}]^{t}$, $\mathbf{o} = [o_1\; o_2\; \ldots\; o_K]^{t}$, $\mathbf{W} = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1J} \\ w_{21} & w_{22} & \cdots & w_{2J} \\ \vdots & \vdots & \ddots & \vdots \\ w_{K1} & w_{K2} & \cdots & w_{KJ} \end{bmatrix}$

the non-linear diagonal operator $\Gamma[\bullet]$ is

$\Gamma[\bullet] = \operatorname{diag}[\,f(\bullet),\, f(\bullet),\, \ldots,\, f(\bullet)\,]$

and the desired output vector is $\mathbf{d} = [d_1\; d_2\; \ldots\; d_K]^{t}$. The activation of the kth neuron is

$net_k = \mathbf{w}_k^{t}\,\mathbf{y}_n$   (15)

where $\mathbf{w}_k$ is the kth row of $\mathbf{W}$. The generalized error expression includes all squared errors at the outputs k = 1, 2, ..., K:

$E_p = \frac{1}{2}\sum_{k=1}^{K}(d_{pk} - o_{pk})^2 = \frac{1}{2}\,\lVert \mathbf{d}_p - \mathbf{o}_p \rVert^2$   (16)

for a specific pattern p, where p = 1, 2, ..., P. Let us assume that a gradient descent search is performed to reduce the error Ep through the adjustment of the weights. The individual weight adjustment is computed as

$\Delta w_{kj} = -\eta\, \dfrac{\partial E}{\partial w_{kj}}$   (17)

where the error E is defined in Eq. (16). For each node in the layer, k = 1, 2, ..., K, we can write, using Eq. (15),

$net_k = \sum_{j=1}^{J} w_{kj}\, y_{nj}$   (18)

and further, using Eq. (14), the neuron's output is

$o_k = f(net_k)$   (19)

The error signal term δ, called delta, produced by the kth neuron is defined for this layer as

$\delta_{ok} = -\dfrac{\partial E}{\partial(net_k)}$   (20)

It is obvious that the gradient component $\partial E/\partial w_{kj}$ depends only on the $net_k$ of a single neuron, since the error at the output of the kth neuron is contributed to only by the weights $w_{kj}$, $j = 1, 2, \ldots, J$, for a fixed value of k.
Thus, using the chain rule we may write

$\dfrac{\partial E}{\partial w_{kj}} = \dfrac{\partial E}{\partial(net_k)} \times \dfrac{\partial(net_k)}{\partial w_{kj}}$   (21)

The second factor of the product in Eq. (21) is the derivative of the sum of products of weights and patterns as in Eq. (18). Since the values $y_{nj}$, for j = 1, 2, ..., J, are constant for a fixed pattern at the input, we obtain

$\dfrac{\partial(net_k)}{\partial w_{kj}} = y_{nj}$   (22)

Combining Eqs. (20) and (22) leads to the following form of Eq. (21):

$\dfrac{\partial E}{\partial w_{kj}} = -\delta_{ok}\, y_{nj}$   (23)

The weight adjustment formula, Eq. (17), can be rewritten using the error signal term $\delta_{ok}$ as

$\Delta w_{kj} = \eta\, \delta_{ok}\, y_{nj}$ for k = 1, 2, ..., K and j = 1, 2, ..., J   (24)

Expression (24) is the general formula for delta training/learning weight adjustments for a single layer network. It can be noted that $\Delta w_{kj}$ in Eq. (24) does not depend upon the form of the activation function.

To adapt the weights, the error signal term delta $\delta_{ok}$ introduced in Eq. (20) needs to be computed for the kth continuous perceptron. E is a composite function of $net_k$; therefore it can be expressed, for k = 1, 2, ..., K, as

$E(net_k) = E[o_k(net_k)]$   (25)

Thus, from Eq. (20),

$\delta_{ok} = -\dfrac{\partial E}{\partial o_k} \times \dfrac{\partial o_k}{\partial(net_k)}$   (26)

Denoting the second term in Eq. (26) as the derivative of the activation function,

$f'_k(net_k) = \dfrac{\partial o_k}{\partial(net_k)}$   (27)

and noting that

$\dfrac{\partial E}{\partial o_k} = -(d_k - o_k)$   (28)

allows rewriting formula (26) as

$\delta_{ok} = (d_k - o_k)\, f'_k(net_k)$ for k = 1, 2, ..., K   (29)

Eq. (29) shows that the error signal term $\delta_{ok}$ is the local error $(d_k - o_k)$ at the output of the kth neuron scaled by the multiplicative factor $f'_k(net_k)$, which is the slope of the activation function computed at the excitation value

$net_k = f^{-1}(o_k)$   (30)

The final formula for the weight adjustment of the single layer network can now be obtained from Eq. (24) as

$\Delta w_{kj} = \eta\,(d_k - o_k)\, f'_k(net_k)\, y_{nj}$   (31)

The updated weight values become

$w'_{kj} = w_{kj} + \Delta w_{kj}$ for k = 1, 2, ..., K and j = 1, 2, ..., J   (32)

Formulas (31) and (32) apply to any non-linear and differentiable activation function f(net) of the neuron. Let us examine the delta training rule for the commonly used unipolar continuous activation function defined in Eq. (12), for which the derivative can be obtained as

$f'(net) = \dfrac{\exp(-net)}{[1 + \exp(-net)]^2}$   (33)

This can be rewritten as

$f'(net) = \dfrac{1}{1 + \exp(-net)} \times \dfrac{1 + \exp(-net) - 1}{1 + \exp(-net)}$   (34)

Again using Eq. (12) in Eq. (34), we get

$f'(net) = o(1 - o)$   (35)

The delta value of Eq. (29) for this activation function can therefore be rewritten as

$\delta_{ok} = (d_k - o_k)\, o_k (1 - o_k)$   (36)

Summarizing the above discussion, the updated individual weights under the delta learning rule can be expressed, for k = 1, 2, ..., K and j = 1, 2, ..., J, as

$w'_{kj} = w_{kj} + \eta\,(d_k - o_k)\, o_k(1 - o_k)\, y_{nj}$   (37)

for $o_k = \dfrac{1}{1 + \exp(-net_k)}$.

The updated weights under the delta learning rule for the single layer network can be expressed in vector notation as

$\mathbf{W}' = \mathbf{W} + \eta\, \boldsymbol{\delta}_o\, \mathbf{y}_n^{t}$   (38)

where the error signal vector $\boldsymbol{\delta}_o = [\delta_{o1}\; \delta_{o2}\; \ldots\; \delta_{oK}]^{t}$ consists of the individual error signal terms.
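A compact way to see Eqs. (29)–(38) at work is a single training step for a one-layer network of unipolar-sigmoid neurons. The Python sketch below implements the forward pass of Eqs. (15) and (19), the delta of Eq. (36) and the vectorized update of Eq. (38); it is an illustrative sketch, not the authors' training code, and the function names are ours.

```python
import numpy as np

def sigmoid(net, lam=1.0):
    """Unipolar continuous activation, Eq. (12)."""
    return 1.0 / (1.0 + np.exp(-lam * net))

def delta_rule_step(W, y, d, eta=0.1):
    """One delta-rule update for a single layer of K continuous perceptrons.

    W : (K, J) weight matrix, y : (J,) input pattern, d : (K,) desired output.
    Returns the updated weights W' of Eq. (38) and the network output o.
    """
    net = W @ y                            # Eqs. (15)/(18): net_k = sum_j w_kj * y_nj
    o = sigmoid(net)                       # Eq. (19) with the activation of Eq. (12)
    delta = (d - o) * o * (1.0 - o)        # Eq. (36): local error scaled by f'(net) = o(1 - o)
    W_new = W + eta * np.outer(delta, y)   # Eq. (38): W' = W + eta * delta_o * y^t
    return W_new, o

# Tiny usage example with random data (shapes only; not data from the paper).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 5))
y = rng.normal(size=5)
d = np.array([0.2, 0.7, 0.5])
W, o = delta_rule_step(W, y, d)
```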
4. Proposed ANN based estimator for distillation column

The ANN model has forward flowing information in predictive mode and back-propagated error corrections in learning mode. Such nets are usually organized into layers of neurons, and connections are made between neurons of adjacent layers, so that each neuron receives signals from every neuron in the immediately preceding layer. An input layer receives the inputs. One or more intermediate layers (also called hidden layers) lie between the input layer and the output layer, which communicates the results externally. The ANN based estimator developed for a distillation column assumes a mixture of NC components and NT trays, with the column reboiler at the bottom and a condenser at the top. The estimator is proposed to estimate the distillate quality from the temperature profile of the column. There are NT + 2 temperature inputs, for the NT trays, the reflux drum, and the reboiler. The output consists of NC liquid compositions and NC vapor compositions, i.e. 2 × NC outputs. The estimator therefore contains NT + 2 input neurons and 2 × NC output neurons. An input vector of NT + 2 elements (the temperature profile of the column) is given to the input layer of the network. The weights are initially randomized; when the net undergoes training, the errors between the results of the output neurons and the desired target values are propagated backward through the net.

Fig. 4. Proposed neural network for the distillation column.

The backward propagation of error signals is used to update the connection weights. Finally, a network is achieved which can predict the output for any input vector. The input neurons transform the input signals and transmit the resulting values to the hidden layer. Each neuron in the hidden layers sums the signals it receives, together with the weighted signal from the bias neuron, and transmits the result to each of the neurons in the next layer. Ultimately, the neurons in the output layer receive weighted signals from the neurons in the penultimate layer, sum the signals, and emit the transformed sums as the output of the net. The output vector is composed of the 2 × NC composition outputs of the distillate.

The temperature profile of the trays in the distillation column is highly non-linear, as the system is made very complex by the five-component mixture. To incorporate the non-linearities of these patterns in the ANN model, three hidden layers are used in the proposed estimator. With three hidden layers acceptable accuracy is achieved; increasing the number of hidden layers beyond three brings no further improvement in accuracy, while with fewer than three hidden layers the accuracy is not acceptable. The trained network with three hidden layers is then used to estimate the distillate composition for any given temperature profile of the distillation column.
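The structure described above (NT + 2 = 17 temperature inputs, three hidden layers, 2 × NC = 10 composition outputs, unipolar sigmoid activations) can be sketched as a small NumPy forward pass. This is a hedged illustration of the network topology only; the layer width of 35 neurons is taken from the configuration reported later in the paper, the weight values are random placeholders, and the helper names are ours.

```python
import numpy as np

def sigmoid(net):
    """Unipolar continuous activation, Eq. (12), with lambda = 1."""
    return 1.0 / (1.0 + np.exp(-net))

def init_estimator(n_inputs=17, hidden=(35, 35, 35), n_outputs=10, seed=0):
    """Randomly initialized weights (with bias columns) for a 17-35-35-35-10 estimator."""
    rng = np.random.default_rng(seed)
    sizes = (n_inputs, *hidden, n_outputs)
    # Each weight matrix has an extra column for the bias neuron input (fixed at 1).
    return [rng.normal(scale=0.1, size=(m, n + 1)) for n, m in zip(sizes[:-1], sizes[1:])]

def estimate_composition(weights, temperatures):
    """Forward pass: column temperature profile (17 values) -> distillate compositions (10 values)."""
    a = np.asarray(temperatures, dtype=float)
    for W in weights:
        a = sigmoid(W @ np.append(a, 1.0))   # append the bias input, then apply Eq. (12)
    return a

# Usage with a placeholder temperature profile (not data from the paper).
weights = init_estimator()
profile = np.linspace(340.0, 370.0, 17)      # hypothetical tray/reboiler/drum temperatures (K)
print(estimate_composition(weights, profile))
```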
5. Comparison of results

The proposed artificial neural network based estimator is tested for the 15-tray column with a reboiler and a reflux drum and a five-component mixture. The 20 temperature profiles and corresponding distillate compositions used for testing are ones not used in training. The results obtained with the ANN based estimator are compared with the results of simulation obtained using the semi-rigorous model. The results for the distillate compositions are shown in Figs. 5 and 6. As seen from Figs. 5 and 6, the composition estimated by the proposed ANN based estimator is close to that obtained from the semi-rigorous model. In Figs. 5 and 6, the liquid composition xd5 and the vapor compositions yd4 and yd5, respectively, are zero in the distillate product.

Fig. 5. Liquid composition of components with reboiler temperature.

Fig. 6. Vapor composition of components with reboiler temperature.

6. Discussions and conclusions

The distillation control system must hold the distillate product composition as near the set point(s) as possible in the face of upsets. The disturbances are generally in the flow and composition of the feed. The control of the product composition is difficult because the product quality cannot be measured economically on line: the instrumentation is either infeasible and/or measurement lags and sampling delays make it impossible to design an effective control system. This problem is solved by using secondary measurements in conjunction with a mathematical model of the process to estimate the product quality. The artificial neural network based estimator developed here can be used for the inferential control of a distillation column. The developed estimator control strategy, with minimal computational burden and high speed, can be proposed for the distillation control system, which is generally non-linear in nature.

For the simulation study, a 15-tray column with a reboiler and a reflux drum and a five-component mixture is considered for testing the estimator. One hundred and thirty input–output patterns are generated using the simulation program and are used for training the developed estimator of Fig. 4; some of the generated patterns are reserved for testing. The temperature profile taken as the input vector consists of 17 temperature entries, for the 15 trays, the reboiler and the reflux drum. The output vector of the estimator is constituted by the five liquid and five vapor distillate compositions of the mixture considered; thus the estimator's input vector has 17 elements and its output vector has 10 elements. A 5-layered network model is taken with a [17, 10, 35, 35, 35] configuration, i.e. 17 input neurons, 10 output neurons and 35 neurons in each of the three hidden layers. The network is trained using 110 patterns, and 20 test inputs are given for testing. Training the estimator took about 60,000 × 110 iterations and about 45 h.

It is observed, on a 1.2 GHz Intel Pentium-IV processor, that the developed simulation program takes 0.16 s for its execution while the developed ANN based estimator takes 0.05 s for the same process; thus a total time saving of 68.75% can be achieved using the ANN model without sacrificing accuracy.
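The held-out comparison and the timing figures quoted above can be reproduced in outline as below. This Python sketch assumes the `estimate_composition` and `simulate_column` helpers introduced in the earlier sketches (both hypothetical names) and simply measures prediction error and relative execution time on test cases; it is not the authors' benchmark code.

```python
import time
import numpy as np

def evaluate_estimator(estimator, simulator, test_duties):
    """Compare ANN estimates against the semi-rigorous simulation on held-out cases
    and report the mean absolute composition error and the relative time saving."""
    errors, t_sim, t_ann = [], 0.0, 0.0
    for qb in test_duties:
        t0 = time.perf_counter()
        temps, reference = simulator(qb)          # semi-rigorous model (steps 1-10)
        t_sim += time.perf_counter() - t0

        t0 = time.perf_counter()
        predicted = estimator(temps)              # trained ANN estimator
        t_ann += time.perf_counter() - t0

        errors.append(np.mean(np.abs(np.asarray(predicted) - np.asarray(reference))))
    saving = 100.0 * (1.0 - t_ann / t_sim)        # e.g. the ~68.75% reported in the paper
    return float(np.mean(errors)), saving
```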
Appendix A. Nomenclature

f(net)  activation function
Γ[•]  non-linear diagonal operator
δo  error signal vector
δok  error signal term produced by kth neuron
δyj  error signal term produced by jth neuron of hidden layer having output y
Δv  weight increment for hidden layer of neurons
Δw  weight increment for input layer of neurons
f′y  column vector for hidden layers
ηi^v  vaporization efficiency
η  learning parameter (positive constant)
∇E  error gradient vector
ηij  Murphree stage efficiency
B  bottom product rate (kmols/h)
d  desired output vector
dp  desired output vector for pth pattern
di  desired output from ith neuron
dpk  desired output from kth neuron for pth pattern
D  distillate product rate (kmols/h)
Ep  least squared error for pth pattern
Fi  total feed flow rate into ith tray (kmols/h)
hF  total molar enthalpy of feed (kJ/kmol)
hfij  component feed enthalpy (kJ/kmol)
hi  total molar enthalpy of liquid mixture (kJ/kmol)
Hi  total molar enthalpy of vapor (kJ/kmol)
hlij  component liquid enthalpy (kJ/kmol)
HNi,j  hidden neuron for ith hidden layer and jth node
Hvij  component vapor enthalpy (kJ/kmol)
INB  input neuron for reboiler temperature
IND  input neuron for reflux drum temperature
INi  input neuron for ith tray temperature
K, L, M  number of neurons in the three hidden layers, respectively
Kij  equilibrium constant
Li  total liquid flow rate leaving the tray (kmols/h)
lij  component liquid flow rate leaving the ith tray (kmols/h)
MB  liquid molar holdup in reboiler (kmols)
MD  liquid molar holdup in reflux drum (kmols)
Mi  liquid molar holdup on ith tray (kmols)
NC  number of components
net  scalar product of weight vector and input vector
neti  scalar product of ith weight vector and input vector
NT  total number of trays in distillation column
o  output vector of neurons
ok  kth output of neuron processing node
ONi  output neuron for ith output
QB  reboiler heat duty (kJ/h)
QC  condenser heat duty (kJ/h)
R  reflux rate (kmols/h)
v′n  updated weights of hidden layer
vnij  connection weights of ith node of one layer to jth node of preceding layer
vn  weight vector of hidden layer
V  weight matrix of hidden layer
Vi  total vapor flow rate from the tray (kmols/h)
vij  component vapor flow rate from the tray (kmols/h)
w  multiplicative weight vector
wi  multiplicative weight for ith input
w′  updated weights of input layer
wij  multiplicative weights for input to ith neuron from jth input element
W  weight matrix
x  liquid composition of more volatile component (mole fraction)
xFij  component liquid composition of jth component in feed (mole fraction)
xij  liquid composition of jth component on ith tray (mole fraction)
xn  input vector to neuron
xni  ith input to neuron
y  vapor composition of more volatile component (mole fraction)
y*  equilibrium vapor composition of more volatile component (mole fraction)
yij  vapor composition of jth component on ith tray (mole fraction)
yij*  equilibrium vapor composition of jth component on ith tray (mole fraction)
yn  input vector to neuron layer

References

[1] N.R. Amundson, A.J. Pontinen, Multicomponent distillation calculations on a large digital computer, Ind. Eng. Chem. 50 (5) (1958) 730–736.
[2] Y.-S. Choe, W.L. Luyben, Rigorous dynamic models of distillation columns, Ind. Eng. Chem. Res. 26 (10) (1987) 2158–2161.
[3] M. Rovaglio, E. Ranzi, G. Biardi, M. Fontana, R. Domenichini, Rigorous dynamic and feed forward control design for distillation process, AIChE J. 36 (4) (1990) 576–586.
[4] R. Weber, C. Brosilow, The use of secondary measurements to improve control, AIChE J. 18 (3) (1972) 614–627.
[5] B. Joseph, C.B. Brosilow, Inferential control of processes. Part I: Steady state analysis and design. Part II: The structure and dynamics of inferential control systems. Part III: Construction of suboptimal dynamic estimators, AIChE J. 24 (3) (1978) 485–509.
[6] E.Q. Marmol, W.L. Luyben, C. Georgakis, Application of an extended Luenberger observer to the control of multi-component batch distillation, Ind. Eng. Chem. Res. 30 (8) (1991) 1870–1880.
[7] E.Q. Marmol, W.L. Luyben, Inferential model based control of multi-component batch distillation, Chem. Eng. Sci. 47 (1992) 887–898.
[8] P. Bhagat, An introduction to neural nets, Chem. Eng. Prog. (1990) 55–60.
[9] A.J. Morris, G.A. Montague, M.J. Willis, Artificial neural networks: studies in process modeling and control, Trans. IChemE 72 (Part A) (1994) 3–19.
[10] W.L. Luyben, Process Modeling, Simulation and Control for Chemical Engineers, McGraw-Hill International Editions, Chemical Engineering Series.
[11] J.M. Zurada, Introduction to Artificial Neural Systems, Jaico Publishing House.
[12] P.B. Deshpande, Distillation Dynamics and Control, Instrument Society of America, Tata McGraw-Hill Publishing Co. Ltd.
[13] J.C. MacMurray, D.M. Himmelblau, Modeling and control of a packed distillation column using artificial neural networks, Comput. Chem. Eng. 19 (10) (1995) 1088.
[14] J. Ou, R.R. Rhinehart, Grouped neural network model predictive control, Control Eng. Pract. 11 (2003) 723–732.
[15] S. Tamura, M. Tateishi, Capabilities of a four layered feed forward neural network: four layers versus three, IEEE Trans. Neural Networks 8 (2) (1997) 251–255.
[16] S.Y. Kung, J.N. Hwang, An Algebraic Projection Analysis for Optimal Hidden Units Size and Learning Rates in Back Propagation Learning, Princeton University, Department of Electrical Engineering, Princeton, NJ 08544, USA.
[17] N. Murata, S. Yoshizawa, S. Amari, Network information criterion: determining the number of hidden units for an artificial neural network model, IEEE Trans. Neural Networks 5 (6) (1994) 865–872.
[18] M. Kano, N. Showchaiya, S. Hasebe, I. Hashimoto, Inferential control system of distillation composition using dynamic partial least squares regression, J. Process Control 10 (2000) 157–166.
[19] M. Kano, N. Showchaiya, S. Hasebe, I. Hashimoto, Inferential control of distillation composition: selection of model and control configuration, Control Eng. Pract. 11 (8) (2003) 927–933.
[20] D.A. Brydon, J.J. Cilliers, M.J. Willis, Classifying pilot-plant distillation column faults using neural networks, Control Eng. Pract. 5 (10) (1997) 1373–1384.
[21] D. Sbarbaro, P. Espinoza, J. Araneda, A pattern based strategy for using multidimensional sensors in process control, Comput. Chem. Eng. 27 (2003) 1943.
[22] B.R. Bakshi, G. Stephanopoulos, Representation of process trends. IV. Induction of real-time patterns from operating data for diagnosis and supervisory control, Comput. Chem. Eng. 18 (4) (1994) 303–332.
[23] V. Kurkova, Kolmogorov's theorem and multilayer neural networks, Neural Networks 5 (1992) 501–506.
[24] R.P. Lippmann, Neural Nets for Computing, Lincoln Laboratory, M.I.T., Lexington, MA 02173, USA.