Korean J. Comput. & Appl. Math. Vol. 9 (2002), No. 2, pp. 547-560
AN APPROACH FOR SOLVING NONLINEAR
PROGRAMMING PROBLEMS
H. Basirzadeh, A. V. Kamyad and S. Effati
Abstract. In this paper we use measure theory to solve a wide range of nonlinear programming problems. First, we transform a nonlinear programming problem into a classical optimal control problem with no restriction on states and controls. The new problem is modified into one consisting of the minimization of a special linear functional over a set of Radon measures; we then obtain an optimal measure corresponding to the functional problem, which is approximated by a finite combination of atomic measures, so that the problem reduces approximately to a finite-dimensional linear programming problem. From the solution of this linear programming problem we obtain an approximate optimal control and, in turn, an approximate solution of the original problem. Furthermore, we obtain the path from the initial point to the admissible solution.
AMS Mathematics Subject Classification : 49J15, 49M37, 90C30, 93C95.
Key words and phrases : Measure theory, optimal control, nonlinear programming.
1. Introduction
A class of methods that solve nonlinear programming problems by moving from a feasible point to an improved feasible point is typified by the feasible-directions algorithms, for example the gradient projection method, the reduced gradient method and the generalized reduced gradient method (see [1],[2],[19]). The reduced gradient method was later generalized by Abadie and Carpentier (see [19]) to handle nonlinear constraints. There are two alternative approaches: the first is called the penalty, or exterior penalty function, method; the second is called the barrier, or interior penalty function, method. Such an approach is sometimes referred to as a sequential unconstrained minimization technique. Many algorithms solve a constrained problem by converting it into a sequence
Received April 2, 2001. Revised June 27, 2001.
© 2002 Korean Society for Computational & Applied Mathematics and Korean SIGCAM.
of unconstrained problems via Lagrangian multipliers or via penalty and barrier functions; furthermore, most methods proceed by finding a direction and then minimizing along this direction (see [2],[10],[19]). Here we present a new method for solving nonlinear programming problems by using measure theory, a method that has been used for solving optimal control problems by replacing the classical problem with one posed in a space of measures; for example, Rubio (see [15]) applied this idea to obtain the global control of nonlinear diffusion equations; see also Wilson et al. [18], Rubio [14], Kamyad et al. [11]-[13] and Farahi et al. [9]. Measure theory has recently been used to solve optimal shape design problems by Fakharzadeh and Rubio [7],[8], and to solve linear and nonlinear ODEs and infinite-horizon optimal control problems by Effati et al. [4]-[6]. This method is not iterative and does not suffer from zigzagging, so we obtain an approximate optimal solution in a straightforward manner. Let us consider a nonlinear programming problem in the following form:
Minimize
(1) f(x1, x2, · · · , xn)
subject to:
(2)
gi(x1, x2, · · · , xn) = 0, i = 1, 2, · · · , m
hp(x1, x2, · · · , xn) ≤ 0, p = 1, 2, · · · , r
x = (x1, x2, · · · , xn) ∈ A ⊂ Rn
where f and gi, i = 1, 2, · · · , m, are nonlinear functions defined in C′(A0), the space of real-valued differentiable functions on A0, where A0 is the interior of the compact set A, and hp, p = 1, 2, · · · , r, are nonlinear functions defined in C(A), the space of all real-valued continuous functions on A. From among the set of all solutions which satisfy the equality constraints gi(x) = 0 for i = 1, 2, · · · , m and the inequality constraints hp(x) ≤ 0, p = 1, 2, · · · , r, we try to determine that particular solution (or several solutions, since the solution may not be unique), denoted x∗, which yields a minimum value of f(x). First we transform the nonlinear programming problem into an optimal control problem; then we solve the latter optimal control problem by using measure theory.
2. Transformation
To transform problem (1)-(2) into an optimal control problem, we first assume that x = (x1, x2, · · · , xn) is a time-varying vector, that is,
x(t) = (x1(t), x2(t), · · · , xn(t)).
By differentiating the function f we have

(3) df(x(t))/dt = ∇f(x(t)) · ẋ(t)
where

∇f(x(t)) = (∂f(x(t))/∂x1, · · · , ∂f(x(t))/∂xn),   dx/dt ≡ ẋ(t) = (ẋ1(t), ẋ2(t), · · · , ẋn(t)).
By integrating (3) over the interval [0, T], where T is a known and arbitrary positive real number, we have

f(x(T)) − f(x(0)) = ∫_0^T (df(x(t))/dt) dt = ∫_0^T ∇f(x(t)) · ẋ(t) dt,

or

(4) f(x(T)) = f(x(0)) + ∫_0^T ∇f(x(t)) · ẋ(t) dt,
where f(x(0)) is constant. We set g = (g1, g2, · · · , gm) and define

(5) ||g(x)||_2^2 = Σ_{i=1}^m g_i^2(x(t)), t ∈ J = [0, T].
Differentiating (5) with respect to t, we have

(6) (d/dt)||g(x)||_2^2 = Σ_{j=1}^n Σ_{i=1}^m 2 g_i (∂g_i(x(t))/∂x_j) ẋ_j(t).
Define

G(x(t)) = (2 Σ_{i=1}^m g_i ∂g_i/∂x_1, · · · , 2 Σ_{i=1}^m g_i ∂g_i/∂x_n).
Thus (6) becomes

(7) (d/dt)||g(x)||_2^2 = G(x(t)) · ẋ(t),
where the right-hand side of (7) is an inner product. Integrating (7), we have

||g(x(T))||_2^2 − ||g(x(0))||_2^2 = ∫_0^T G(x(t)) · ẋ(t) dt,

or

(8) ||g(x(T))||_2^2 = Σ_{i=1}^m g_i^2(x(T)) = ||g(x(0))||_2^2 + ∫_0^T G(x(t)) · ẋ(t) dt,

where ||g(x(0))||_2^2 is constant and x(0) is an initial point.
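Identity (4) (and likewise (8)) is just the fundamental theorem of calculus along the path x(·), and it can be checked numerically. A minimal sketch, assuming the quadratic f(x1, x2) = x1² + x2² and a hypothetical straight-line path; neither choice is prescribed by the paper at this point:

```python
import numpy as np

# Check identity (4): f(x(T)) = f(x(0)) + ∫_0^T ∇f(x(t))·ẋ(t) dt,
# for an assumed f and a hypothetical straight-line path x(t).
def f(x):
    return x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

T = 1.0
x0, xT = np.array([0.5, 0.25]), np.array([1.0, 0.0])
ts = np.linspace(0.0, T, 2001)
xdot = (xT - x0) / T                       # constant ẋ along the line
xs = x0 + np.outer(ts, xdot)               # x(t) on the path
integrand = np.array([grad_f(x) @ xdot for x in xs])

lhs = f(xT)
rhs = f(x0) + np.trapz(integrand, ts)      # right-hand side of (4)
print(abs(lhs - rhs) < 1e-8)               # True: (4) holds on this path
```

The same check works for any differentiable f and any smooth path, with the quadrature error shrinking as the grid is refined.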
Lemma 1. x(T) is a feasible solution for (2) if and only if

hp(x(T)) ≤ 0, p = 1, · · · , r,
||g(x(T))||_2^2 = 0.
Proof. Suppose that x(T) is a feasible solution for (2); then

gi(x1(T), x2(T), · · · , xn(T)) = 0, i = 1, 2, · · · , m,
hp(x1(T), x2(T), · · · , xn(T)) ≤ 0, p = 1, 2, · · · , r,

or

g_i^2(x(T)) = 0, i = 1, 2, · · · , m,
hp(x(T)) ≤ 0, p = 1, · · · , r.

This implies that ||g(x(T))||_2^2 = Σ_{i=1}^m g_i^2(x(T)) = 0. Conversely, it is clear that if hp(x(T)) ≤ 0 for p = 1, · · · , r and ||g(x(T))||_2^2 = 0, then x(T) is a feasible solution for (2).
Lemma 2. x(T) is an optimal solution for (1)-(2) if and only if x(T) is a feasible solution for (2) and

f(x(T)) ≤ f(x(t)), t ∈ [0, T].

Proof. If x(T) is a feasible solution for (2) and f(x(T)) ≤ f(x(t)) for t ∈ [0, T], it is obvious that x(T) is an optimal solution for (1)-(2). Conversely, if x(T) is an optimal solution for (1)-(2), then x(T) is a feasible solution for (2) and f(x(T)) ≤ f(x(t)), t ∈ [0, T].
Lemma 3. A necessary and sufficient condition for the inequality h(x(t)) ≤ 0, t ∈ [0, T], to hold is

∫_0^T |h(x(t)) + |h(x(t))|| dt = 0.

Proof. Assume that h(x(t)) ≤ 0, t ∈ [0, T]; then |h(x(t))| = −h(x(t)), so |h(x(t)) + |h(x(t))|| = 0 and therefore

∫_0^T |h(x(t)) + |h(x(t))|| dt = 0.

Conversely, since the integrand is continuous and non-negative, the vanishing of the integral forces |h(x(t)) + |h(x(t))|| = 0, i.e. |h(x(t))| = −h(x(t)), which implies that h(x(t)) ≤ 0, t ∈ [0, T].
Thus the inequality constraints hp(x(t)) ≤ 0, p = 1, 2, · · · , r, can be written as follows:

(9) ∫_0^T |hp(x(t)) + |hp(x(t))|| dt = 0, p = 1, 2, · · · , r.
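The criterion of Lemma 3 and (9) can be illustrated numerically: the integrand |h + |h|| equals 2h wherever h > 0 and vanishes elsewhere, so the integral acts as a feasibility violation measure. A small sketch with hypothetical scalar functions of t, not taken from the paper's examples:

```python
import numpy as np

# ∫_0^T |h + |h|| dt vanishes iff h(x(t)) <= 0 on [0, T] (Lemma 3).
ts = np.linspace(0.0, 1.0, 10001)

def violation(h_vals, ts):
    # trapezoidal approximation of ∫ |h + |h|| dt
    return np.trapz(np.abs(h_vals + np.abs(h_vals)), ts)

h_feasible = -ts**2 - 0.1     # h <= 0 everywhere on [0, 1]
h_infeasible = ts - 0.5       # h > 0 on (0.5, 1]

print(violation(h_feasible, ts))        # 0.0
print(violation(h_infeasible, ts) > 0)  # True
```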
Here we assume ẋ(t) = u(t), where u is a Lebesgue-measurable function on [0, T], and we call u(·) an artificial control function. Thus we can convert the nonlinear programming problem (1)-(2) into an optimal control problem as follows:
Minimize

(10) f(x(T)) = f(x(0)) + ∫_0^T ∇f(x(t)) · ẋ(t) dt,

subject to

(11) ẋ(t) = u(t),
     ||g(x(T))||_2^2 = 0,
     ∫_0^T |hp(x(t)) + |hp(x(t))|| dt = 0, p = 1, 2, · · · , r,
     x(0) = x0, x(T) is an optimal solution for (1)-(2).
Let Ω = J × A × U, where the trajectory function t → x(t) : J → A is absolutely continuous and the control function t → u(t) : J → U is Lebesgue-measurable.
Definition 1. A pair w = [x(·), u(·)] is said to be an admissible control-trajectory pair for (1)-(2) if the following conditions hold:
(i) x(t) ∈ A, t ∈ J = [0, T];
(ii) u(t) ∈ U, t ∈ J, where U is compact;
(iii) the pair w satisfies (11) a.e. on J0.
Now, by Lemma 1, Lemma 2, Lemma 3 and Definition 1, we can show the
following theorem.
Theorem 1. The optimal solution of problem (10)-(11), w∗ = [x∗(·), u∗(·)], is an admissible optimal control-trajectory pair for problem (1)-(2), where x∗(T) is the optimal solution of the original problem (1)-(2).
In the next section we shall analyse classical bounded control problems.
3. Analysis of the optimal control problems
We assume that the set of all admissible pairs is non-empty and denote it by W. Let w = [x(·), u(·)] be an admissible pair, B an open ball in Rn+1 containing J × A, and C′(B) the space of all real-valued continuously differentiable functions on B. Let φ ∈ C′(B), and define the function φ^u as follows:

(12) φ^u(t, x(t), u(t)) = φ_x(t, x(t)) · u(t) + φ_t(t, x(t))

for all (t, x(t), u(t)) ∈ Ω. Note that φ_x and u are both n-vectors, the first term on the right-hand side of (12) is the inner product of φ_x and u, and φ_x = (∂φ/∂x_1, ∂φ/∂x_2, · · · , ∂φ/∂x_n)^t. The function φ^u is in the space C(Ω) on the compact
set Ω. Since w = [x(·), u(·)] is an admissible pair, we have

(13) ∫_0^T φ^u(t, x(t), u(t)) dt = ∫_0^T (φ_x(t, x(t)) · ẋ(t) + φ_t(t, x(t))) dt = ∫_0^T φ̇(t, x(t)) dt = φ(T, x(T)) − φ(0, x(0)) = δφ,

for all φ ∈ C′(B).
Let D(J0) be the space of all infinitely differentiable real-valued functions with compact support in J0, the interior of J (see [3] and [15]). Define

(14) ψ_j(t, x(t), u(t)) = x_j(t)ψ′(t) + u_j(t)ψ(t),

for j = 1, · · · , n and all ψ ∈ D(J0), where u_j is the jth component of the control function u. Then, if w = [x(·), u(·)] is an admissible pair, we have, for j = 1, 2, · · · , n and ψ ∈ D(J0),

∫_0^T ψ_j(t, x(t), u(t)) dt = ∫_0^T x_j(t)ψ′(t) dt + ∫_0^T u_j(t)ψ(t) dt = x_j(t)ψ(t)|_J − ∫_0^T (ẋ_j(t) − u_j(t)) ψ(t) dt = 0,

since the function ψ has compact support in J0, so that ψ(0) = ψ(T) = 0. With the choice of functions which depend only on the time variable, we have

∫_0^T k(t, x(t), u(t)) dt = a_k, k ∈ C1(Ω),

where C1(Ω) is the subspace of C(Ω) consisting of the continuous functions that depend only on the time variable t. Now consider:
(1) The mapping

Λ_w : F → ∫_J F(t, x(t), u(t)) dt, F ∈ C(Ω),

defines a positive linear functional on C(Ω).
(2) By the Riesz representation theorem (see [3],[15]), there exists a unique positive Radon measure µ on Ω such that

Λ_w(F) = ∫_J F(t, x(t), u(t)) dt = ∫_Ω F dµ ≡ µ(F), F ∈ C(Ω).
Thus, the minimization of the functional (10) over the constraints (11) is equivalent to the minimization of

(15) I[w] = Λ_w(∇f · u) + f(x(0))

subject to

(16) Λ_w(φ^u) = δφ, φ ∈ C′(B),
     Λ_w(ψ_j) = 0, j = 1, 2, · · · , n, ψ ∈ D(J0),
     Λ_w(k) = a_k, k ∈ C1(Ω),
     Λ_w(G · u) + ||g(x(0))||_2^2 = 0,
     Λ_w(|hp + |hp||) = 0, p = 1, 2, · · · , r.
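Among the constraints in (16), Λ_w(ψ_j) = 0 is the weak form of ẋ_j = u_j. A quick numerical check of the integration-by-parts identity behind it, with hypothetical choices x_j(t) = t², u_j(t) = 2t and ψ(t) = sin(2πt), which vanishes at both endpoints and stands in for a compactly supported ψ:

```python
import numpy as np

# ∫_0^1 (x_j ψ' + u_j ψ) dt = [x_j ψ]_0^1 = 0 when ẋ_j = u_j and ψ(0) = ψ(1) = 0.
ts = np.linspace(0.0, 1.0, 20001)
x = ts**2
u = 2.0 * ts                          # ẋ = u holds along this pair
psi = np.sin(2.0 * np.pi * ts)        # ψ(0) = ψ(1) = 0
dpsi = 2.0 * np.pi * np.cos(2.0 * np.pi * ts)

value = np.trapz(x * dpsi + u * psi, ts)   # Λ_w(ψ_j) for this pair
print(abs(value) < 1e-6)                   # True
```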
Now let the space of all positive Radon measures on Ω be denoted by M+(Ω). By the Riesz representation theorem, the positive linear functionals above can be replaced by their representing measures, so we seek a measure in M+(Ω), denoted by µ∗, which minimizes the functional (15). Thus, the minimization of the functional I in (15) over W is equivalent to the minimization of

(17) I[µ] = ∫_Ω ∇f · u dµ + f(x(0)) ≡ µ(∇f · u) + f(x(0)) ∈ R

over the set of all positive measures µ corresponding to admissible pairs w, which satisfy

(18) µ(φ^u) = δφ, φ ∈ C′(B),
     µ(ψ_j) = 0, j = 1, 2, · · · , n, ψ ∈ D(J0),
     µ(k) = a_k, k ∈ C1(Ω),
     µ(G · u) + ||g(x(0))||_2^2 = 0,
     µ(|hp + |hp||) = 0, p = 1, 2, · · · , r.

We shall consider the minimization of (17) over the set Q of all positive Radon measures on Ω satisfying (18). If we 'topologize' the space M+(Ω) by the weak*-topology, it can be seen (see [15]) that Q is compact. The functional I : Q → R defined by

I[µ] = ∫_Ω ∇f · u dµ + f(x(0)) ≡ µ(∇f · u) + f(x(0)), µ ∈ Q,

is a linear continuous functional on the compact set Q, so it attains its minimum on Q (see [16]); thus the measure-theoretical problem, which consists of finding the minimum of the functional (17) over the subset Q of M+(Ω), possesses a minimizing solution, µ∗ say, in Q.
4. Approximation
For the estimation of the optimal solution by a nearly optimal piecewise-constant control, consider the minimization of the functional (17) not over the set Q but over a subset of M+(Ω) defined by requiring that only a finite number of the constraints in (18) be satisfied. This is achieved by choosing countable sets of functions whose linear combinations are dense in the appropriate spaces, and then selecting a finite number of them. In the first step, we obtain an approximation of the optimal measure µ∗ by a finite combination of atomic measures; that is (see [14], Appendix, Theorem A.5), µ∗ has the form µ∗ = Σ_{i=1}^N α∗_i δ(z∗_i), where α∗_i ≥ 0 and z∗_i ∈ Ω for i = 1, 2, · · · , N (here δ(z) is a unitary atomic measure, characterized by δ(z)(F) = F(z), where F ∈ C(Ω) and z ∈ Ω). Then we construct a piecewise-constant control function corresponding to the finite-dimensional problem (see [14]). Therefore, in the infinite-dimensional linear programming problem (17) with restrictions defined by (18), we shall consider only a finite number M1 of functions φ of the type φ1 = x1, φ2 = x2, φ3 = x3, · · · , φn = xn, φn+1 = x1^2, φn+2 = x2^2, · · ·; only a finite number M2 of functions χ_h, h = 1, 2, · · · , M2, of the type defined in (14), where the functions ψ are taken as sin(2πrt/T) and 1 − cos(2πrt/T), r = 1, 2, · · ·; and only a finite number L of functions k of the type

k_s(t) = 1 if t ∈ J_s, 0 otherwise,

where J_s = ((s − 1)d, sd) and d = T/L, s = 1, · · · , L. The set Ω = J × A × U is covered by a partition, defined by taking points z_j = (t_j, x_{1j}, x_{2j}, · · · , x_{nj}, u_{1j}, u_{2j}, · · · , u_{nj}) in Ω. Of course, we only need to construct the control function u(·), since the trajectory is then simply the corresponding solution of the differential equation in (11) with condition x(0) = x0, which can be estimated numerically. The infinite-dimensional linear programming problem (17) with restrictions defined by (18) can then be approximated by the following problem, in which the z_j, j = 1, · · · , N, belong to an approximately dense subset of Ω.
Minimize

(19) Σ_{j=1}^N α_j (∇f · u)(z_j) + f(x(0))

subject to

(20) Σ_{j=1}^N α_j φ^u_i(z_j) = δφ_i, i = 1, · · · , M1,
     Σ_{j=1}^N α_j χ_h(z_j) = 0, h = 1, · · · , M2,
     Σ_{j=1}^N α_j k_s(t_j) = a_s, s = 1, · · · , L,
     Σ_{j=1}^N α_j (G · u)(z_j) + ||g(x(0))||_2^2 = 0,
     Σ_{j=1}^N α_j (|hp(x_j) + |hp(x_j)||) = 0, p = 1, 2, · · · , r,
     α_j ≥ 0, j = 1, · · · , N.

Note that the elements z_j, j = 1, 2, · · · , N, are fixed, and the only unknowns are the numbers α_j, j = 1, 2, · · · , N.
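Problem (19)-(20) is a standard finite-dimensional LP: minimize c · α subject to linear equality constraints and α ≥ 0. A hedged sketch on a tiny hypothetical instance (N = 3 grid points and two made-up equality rows, nothing from the examples of Section 5), using SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of the LP (19)-(20): minimize c·α s.t. A_eq @ α = b_eq, α >= 0.
c = np.array([1.0, 2.0, 0.5])        # stand-ins for the values (∇f·u)(z_j)
A_eq = np.array([[1.0, 1.0, 1.0],    # e.g. a time-partition row
                 [1.0, -1.0, 0.0]])  # e.g. one φ-type row
b_eq = np.array([1.0, 0.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x)    # optimal coefficients α_j
print(res.fun)  # optimal objective value
```

The real problems of Section 5 have exactly this shape, only with N = 9000 columns and the constraint rows of (20).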
The procedure for constructing a piecewise-constant control function that approximates the action of the optimal measure is based on the analysis in [14].
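Once a piecewise-constant control has been extracted from the LP solution, the trajectory follows by integrating ẋ = u from x(0) = x0, as noted above. A minimal sketch with an illustrative control, constant on L = 10 subintervals of [0, 1]; the control values are made up, not output of the LP:

```python
import numpy as np

# Integrate ẋ = u, x(0) = x0, for a piecewise-constant u: exact Euler steps.
T, L = 1.0, 10
d = T / L                                  # subinterval length
x0 = np.array([0.5, 0.25])
u_pieces = np.array([[0.5, -0.25]] * L)    # hypothetical control per piece

xs = [x0]
for s in range(L):
    xs.append(xs[-1] + d * u_pieces[s])    # exact, since u is constant on J_s
xs = np.array(xs)
print(xs[-1])                              # x(T) ≈ [1.0, 0.0]
```

The Euler step is exact here because u is constant on each subinterval; for a general control one would use a finer quadrature.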
Note. The variables of the original problem, a nonlinear programming problem, do not depend on time. But we transform the problem into an optimal control problem with artificial states and controls, in which the variables of the original problem become state and control functions, which obviously do depend on time. As we know from control theory, the states and controls are usually restricted to lie in compact sets such as A and U, where A ⊂ Rn, U ⊂ Rn. Here, however, our new problem contains artificial states and artificial control functions whose values may lie in arbitrary compact sets, which we can choose as large as we need.
5. Numerical examples
Some numerical examples are considered below to illustrate the procedure.
Example 5.1. Consider the following nonlinear programming problem:
Minimize
(21) f(x) = x1^2 + x2^2

subject to:

(22) g(x) = (x1 − 1)^3 − x2^2 = 0

with initial point x(0) = (0.5, 0.25). We try to obtain x(1) = (x1(1), x2(1)), the approximate solution for (21)-(22).
First we define the function

||g(x)||_2^2 = g^2(x).

Now

∇||g(x)||_2^2 = G(x) = (2g(x) ∂g(x)/∂x1, 2g(x) ∂g(x)/∂x2),

thus we have:
Minimize

I(∇f · u) = ∫_0^1 2(x1 u1 + x2 u2) dt + f(x(0))

subject to

ẋ1(t) = u1(t),
ẋ2(t) = u2(t),
∫_0^1 G(x(t)) · u(t) dt + 0.140625 = 0,

where x1(0) = 0.5, x2(0) = 0.25, and x(1) = (x1(1), x2(1)) is the solution of the nonlinear programming problem (21)-(22).
Let t ∈ J = [0, 1] and x(t) = (x1(t), x2(t)) ∈ A = A1 × A2, where A1 = [0.5, 1.5], A2 = [−0.2, 0.8], and u = (u1, u2) ∈ U = U1 × U2, U1 = [0.5, 2.5], U2 = [−0.5, 0].
Let the set J = [0, 1] be divided into 10 subintervals, the sets A1 and A2 each into 6 subintervals, and the sets U1 and U2 each into 5 subintervals, so that Ω = J × A × U is divided into 9000 subintervals. Now, if M1 = 2, M2 = 8, L = 10, then we have a linear programming problem as follows:
Minimize

Σ_{j=1}^{9000} 2α_j (x_{1j} u_{1j} + x_{2j} u_{2j}) + f(x(0))

subject to

x1(1) − Σ_{j=1}^{9000} α_j u_{1j} = 0.5,
x2(1) − Σ_{j=1}^{9000} α_j u_{2j} = 0.25,
Σ_{j=1}^{9000} α_j {2πh x_{lj} cos(2πh t_j) + u_{lj} sin(2πh t_j)} = 0, l = 1, 2, h = 1, 2,
Σ_{j=1}^{9000} α_j {2πh x_{lj} sin(2πh t_j) + u_{lj} (1 − cos(2πh t_j))} = 0, l = 1, 2, h = 1, 2,
Σ_{j=1}^{9000} α_j {6((x_{1j} − 1)^3 − x_{2j}^2)(x_{1j} − 1)^2 u_{1j} − 4x_{2j}((x_{1j} − 1)^3 − x_{2j}^2) u_{2j}} − β1 = 0.140625 (β1 is the slack variable),
α_{1+900(i−1)} + · · · + α_{900+900(i−1)} = 1/10, i = 1, · · · , 10.
Fig. 1. Piecewise constant control. Fig. 2. Approximate solution.
The approximate solution of problem (21)-(22) is x(1) = (1, 0.0013), and the optimal value of the cost function f at T = 1 is 1 (see [19]). The graphs of the piecewise-constant control functions and the trajectory functions are shown in Figures 1-5.
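The reported solution can be checked directly against (21)-(22); a quick consistency sketch:

```python
# Check the reported approximate solution x(1) = (1, 0.0013) of (21)-(22).
x1, x2 = 1.0, 0.0013
f_val = x1**2 + x2**2            # objective (21)
g_val = (x1 - 1)**3 - x2**2      # equality constraint (22)
print(f_val)                     # ≈ 1, the reported optimal value
print(abs(g_val) < 1e-5)         # True: nearly feasible
```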
Example 5.2. Consider the following nonlinear programming problem with initial point x(0) = (0.2, 0.2):
Minimize

(23) f(x1, x2) = (x1 − 2)^2 + (x2 − 2)^2

subject to:

(24) g(x1, x2) = x1^2 + x2^2 = 1,
     h(x1, x2) = x2^2 − x1 ≤ 0.

First we define ||g(x)||_2^2 = (x1^2 + x2^2 − 1)^2; thus we have:
Minimize

I(∇f · u) = ∫_0^1 2((x1 − 2)u1 + (x2 − 2)u2) dt + f(x(0))

subject to

ẋ1(t) = u1(t),
ẋ2(t) = u2(t),
∫_0^1 G(x(t)) · u(t) dt + 0.8464 = 0,

where x1(0) = 0.2, x2(0) = 0.2. Let t ∈ J = [0, 1], x(t) = [x1(t), x2(t)] ∈ A = A1 × A2 = [0, 1] × [0, 1] and u = (u1, u2) ∈ U = U1 × U2 = [−0.5, 0.5] × [−0.5, 0.5].
Fig. 3. Piecewise constant control. Fig. 4. Approximate solution.
Fig. 5. Trajectory of solution from x0 = (0.5, 0.25) to x(1) = (1, 0.0013)
Let the set J be divided into 6 subintervals, the sets A1 and A2 each into 6 subintervals, and the sets U1 and U2 each into 5 subintervals, so that Ω = J × A × U is divided into 9000 subintervals. We take M1 = 2, M2 = 8, L = 10. Then we solve a linear programming problem as in Example 5.1; the approximate solution of problem (23)-(24) is x(1) = (0.7070, 0.7070), and the optimal value of the cost function f at T = 1 is 3.3437. The graphs of the piecewise-constant control functions and the trajectory functions are shown in Figures 6-10.
Fig. 6. Piecewise constant control. Fig. 7. Approximate solution.
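As with Example 5.1, the reported solution can be checked against (23)-(24) directly:

```python
# Check the reported approximate solution x(1) = (0.7070, 0.7070) of (23)-(24).
x1, x2 = 0.7070, 0.7070
f_val = (x1 - 2)**2 + (x2 - 2)**2     # objective (23)
g_val = x1**2 + x2**2 - 1             # equality residual, should be ≈ 0
h_val = x2**2 - x1                    # inequality, should be <= 0
print(round(f_val, 4))                # 3.3437, the reported optimal value
print(abs(g_val) < 1e-3, h_val <= 0)  # True True
```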
It is interesting that, in the several examples to which we applied this method (two of which are shown above), the paths from the initial point to the approximate solution are monotone
Fig. 8. Piecewise constant control. Fig. 9. Approximate solution.
Fig. 10. Trajectory of solution from x0 = (0.2, 0.2) to x(T) = (0.7070, 0.7070).
paths. We also emphasize that, in this paper, T is known and arbitrary; we set T = 1.
References
1. M. S. Bazaraa: Linear Programming and Network Flows. John Wiley and Sons, New York, 1992.
2. M. S. Bazaraa, H. D. Sherali and C. M. Shetty: Nonlinear Programming: Theory and Algorithms. John Wiley and Sons, New York, 1993.
3. G. Choquet: Lectures on Analysis. Benjamin, New York, 1969.
4. S. Effati and A. V. Kamyad: Solution of Boundary Value Problems for Linear Second Order ODE's by Using Measure Theory. J. Analysis 6 (1998), 139-149.
5. S. Effati, A. V. Kamyad and R. A. Kamyabi-Gol: On Infinite-Horizon Optimal Control Problems. Z. Anal. Anw. 19 (2000), 269-278.
6. S. Effati and A. V. Kamyad: A New Method for Solving the Nonlinear Second Order Boundary Value Differential Equations. Korean J. Comput. Appl. Math. 7 (2000), 183-193.
7. A. Fakharzadeh and J. E. Rubio: Shapes and Measures. IMA J. Math. Control Information 16 (1999), 207-220.
8. A. Fakharzadeh and J. E. Rubio: Global Solution of Optimal Shape Design Problems. Z. Anal. Anw. 18 (1999), 143-155.
9. M. H. Farahi, J. E. Rubio and D. A. Wilson: The Optimal Control of the Linear Wave Equation. Int. J. Control 63 (1995), 833-848.
10. B. S. Goh: Algorithms for Unconstrained Optimization Problems via Control Theory. J. Optim. Theory Appl., March 1997.
11. A. V. Kamyad, J. E. Rubio and D. A. Wilson: Optimal Control of the Multidimensional Diffusion Equation. J. Optim. Theory Appl. 70 (1991), 191-209.
12. A. V. Kamyad, J. E. Rubio and D. A. Wilson: Optimal Control of the Multidimensional Diffusion Equation with a Generalized Control Variable. J. Optim. Theory Appl. 75 (1992), 101-132.
13. A. V. Kamyad: Strong Controllability of the Diffusion Equation in n Dimensions. Bulletin Iranian Math. Society 18 (1992), 39-49.
14. J. E. Rubio: Control and Optimization: The Linear Treatment of Non-Linear Problems. Manchester University Press, Manchester (U.K.), 1986.
15. J. E. Rubio: The Global Control of Nonlinear Elliptic Equations. J. Franklin Institute 330 (1993), 29-35.
16. W. Rudin: Real and Complex Analysis. McGraw-Hill, New York, 1966.
17. F. Treves: Topological Vector Spaces, Distributions and Kernels. Academic Press, New York and London, 1967.
18. D. A. Wilson and J. E. Rubio: Existence of Optimal Controls for the Diffusion Equation. J. Optim. Theory Appl. 22, 91-100.
19. D. A. Wismer and R. Chattergy: Introduction to Nonlinear Optimization. 1979.
Ali Vahidian Kamyad received his B.Sc. from Ferdowsi University of Mashhad, Iran, his M.Sc. from the Institute of Mathematics, Tehran, Iran, and his Ph.D. from Leeds University, Leeds, England, under the supervision of J. E. Rubio. Since 1972 he has been at the Ferdowsi University of Mashhad, where he is an associate professor; his research interests are mainly in optimal control of distributed parameter systems and applications of fuzzy theory.
Dept. of Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran.
e-mail: kamyad@math.um.ac.ir

Hadi Basirzadeh received his B.Sc. from Shahid Chamran University of Ahwaz and his M.Sc. from Sistan and Baluchestan University, Iran. Since 1992 he has been working at the Shahid Chamran University of Ahwaz, and he is a Ph.D. student under the direction of Prof. Ali Vahidian Kamyad at Ferdowsi University of Mashhad. His research interests are mainly in O.R. and optimal control.
Dept. of Mathematics, Ferdowsi University of Mashhad, P. O. Box 1159-91775, Mashhad, Iran.
e-mail: basirzad@math.um.ac.ir

Sohrab Effati received his B.Sc. from Birjand University, his M.Sc. from the Institute of Mathematics, Tehran, and his Ph.D. from Ferdowsi University of Mashhad, Iran. Since 1995 he has been working at the Teacher Training University of Sabzevar. His research interests are mainly in O.R. and optimal control.
Dept. of Mathematics, Teacher Training University of Sabzevar, Sabzevar, Iran.
e-mail: effati@math.um.ac.ir