J. A. Bohórquez V.
More precisely, we extend Dijkstra’s guarded command language [Dij75] to allow recursively defined functions
in the context of Hehner’s predicative programming theory [Heh84] through DHL (the Dijkstra-Hehner algo-
rithmic language), define the concept of Hoare triples, give correctness conditions for general recursive programs
and prove as theorems of this theory, the correctness axioms for all commands of this extension of Dijkstra’s
language. The end result is a nice and simple theory of programming providing a smooth derivation method for
recursive as well as iterative sequential programs.
Basic notational conventions
Throughout this text, a one-argument function application over a variable or a constant is denoted by an infix
dot ‘.’. By true and false , we denote, besides the boolean values, the obvious 0-ary constant predicates. The
following logical connectives are listed in order of decreasing binding power (those listed as a pair have the same
precedence): ¬ denotes negation, = denotes equality, ∨ and ∧ denote disjunction and conjunction respectively,
⇒ and ⇐ denote implication and consequence respectively, and ≡ and ≢ denote equivalence and discrepancy
respectively. As usual, the symbols ∀ and ∃ denote the universal and the existential quantifier respectively; their scope
is delineated by a pair of angle brackets.
Generally, a lower case Greek letter such as σ denotes a finite sequence of program variable identifiers, for
instance, σ = [x, y, z, w]. This way, σ′ denotes the same sequence as σ, except for the fact that each of its
identifiers is decorated with an additional single quote to its right, that is, σ′ = [x′, y′, z′, w′]. We make similar
conventions for other superscripts (for instance, σ′′, σᵃ, σ̂).
If S is a formula, the expression S(x, y, z / x′, y′, z′) is obtained from S by respectively replacing every occur-
rence of the variables x, y, z with the values x′, y′, z′. Similarly, the notation [S] stands for S universally quantified
over its free variables (bear in mind that in manipulations of [S] we may be employing the properties of universal
quantifiers).
If σ = [x, y, z] and τ = [w, p, q] are two finite sequences of values of the same length, the equality σ = τ
abbreviates writing x = w ∧ y = p ∧ z = q. Similarly, if S denotes an expression possibly mentioning
variables occurring in σ or σ′, then a (meta-)expression such as ⟨∀ σ :: ⟨∃ σ′ :: S⟩⟩ is actually an abbreviation
of ⟨∀ x, y, z :: ⟨∃ x′, y′, z′ :: S⟩⟩. Similarly, the expression S(σ/σ′) is an abbreviation of S(x, y, z / x′, y′, z′).
The set of variables of a program determines its associated state space; its elements are called states and are
characterized by all the possible values of such variables. With the previous conventions applied to the same state
space, lower case Greek letters may be interpreted as different states of the same computer behavior.
In the rest of this note, we present the predicative programming theory and define a recursive version of the
guarded command language on it (Sect. 2); we then interpret (Sect. 3) the concepts of Hoare triples, conditional and
total correctness with respect to pre- and postconditions in that theory, besides defining and proving correctness
conditions for Dijkstra's iterative command. Additionally, we present and justify two methods for proving total
correctness of general recursive programs. Finally, (Sect. 4) we illustrate the practice of program derivation in
DHL by showing a derivation scheme to evaluate a family of (non-tail linear) recursive functions in terms of a
do-loop construct.
2. Hehner’s predicative programming theory
We present and work with this very simple model of computation [Heh04b]. A specification is viewed as a bool-
ean expression (logical formula) whose variables represent quantities of interest. A state of the execution of a
program is a function defined on its variables, mapping each of them to a valid value according to its type. Given
a computer behavior, the set of all states it can access is called its associated state space. When an initial state is
provided as input, an execution of a program computes a final state as output.
To satisfy a specification, a computation must deliver a satisfactory final state. In other words, the given
initial state and the computed final state must make the specification true. In this formalism, specifications as
well as programs are identified with logical predicates describing the input-output relations defined by the initial
and final values of their respective variables. Verifications of specifications and programs correspond to simple
proofs about such predicates. Specifications, as well as programs, describe computer behavior or just executions.
A program is interpreted by a computer by means of executions that transform the values of their variables. We
have an implementation when the specification describes (is true of) each of its computations.
To talk about computer behaviors described by a program, the following convention is adopted: if x denotes
a program variable taking values in a given state space, x also (generically) denotes its initial value, and
An elementary and unified approach to program correctness
x′ denotes in general its final value, if it exists. When we refer to more than one program execution on the
same state space, we could use additional notations to (generically) denote the initial and final values of x corre-
sponding to those executions, by adding some other type of ‘decoration’ to the symbols denoting x, such as x′′,
x̂, x_i, x_f. Usually, no confusion arises when distinguishing a variable from its initial value.
2.1. Specifications
We look at specifications of computer behaviors as predicates on the initial values x, w, . . . and final values
x′, w′, . . . of some variables x, w, . . . in a given state space.
For example, suppose that x and w are two variables, each of integer type; then,
x′ = x + 1 ∧ w′ = w
specifies the behavior of a computer increasing by 1 the value of x and leaving w unchanged.
If we provide the input σ to an implementation of a specification S, the computer provides an output σ′ to
satisfy S. Therefore, for a specification to be implementable, there must be at least one satisfactory output for
each input: a specification S is called implementable if and only if ⟨∀ σ :: ⟨∃ σ′ :: S⟩⟩.
In the same variables, here is a second specification:
x′ > x.
This specification is satisfied by a computation that increases x by any amount; it may leave w unchanged or may
change it to any integer. The first specification is deterministic since there is just one answer for each initial state,
and the second is nondeterministic since there are several possible outputs for some initial states.
At one end, we have the specification true ; it is the easiest specification to implement because every imple-
mentation satisfies it. At the other end is the specification false , which is not satisfied by any implementation.
But false is not the only unimplementable specification. Here is another:
x ≥ 0 ∧ w′ = 0.
If the initial value of x is nonnegative, the specification can be satisfied by setting variable w to 0. But if the initial
value of x is negative, there is no way to satisfy the specification. Perhaps the specifier has no intention of providing
a negative input, but to the programmer, every input is a possibility. The specification should have been
x ≥ 0 ⇒ w′ = 0. (∗)
For a nonnegative initial x, this specification still requires variable w to be assigned 0. If there is no intention of
providing a negative value for x, then what would happen if we did provide it has no importance whatsoever.
That is precisely what this specification says: for negative x, any result is satisfactory.
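Implementability can be checked by brute force over a small finite state space. The following Python sketch (ours; the predicate names and the sampled range are illustrative, not part of the theory) confirms that the conjunctive specification above is unimplementable while its guarded version is implementable:

```python
# States are pairs (x, w); primed values (x1, w1) are the outputs.
# A specification is implementable iff every input admits a satisfying output.
VALUES = range(-3, 4)

def conj_spec(x, w, x1, w1):
    """x >= 0 ∧ w' = 0 : unimplementable (no output exists when x < 0)."""
    return x >= 0 and w1 == 0

def guarded_spec(x, w, x1, w1):
    """x >= 0 ⇒ w' = 0 : implementable (negative x admits any output)."""
    return (not x >= 0) or w1 == 0

def implementable(spec):
    # ⟨∀ σ :: ⟨∃ σ' :: S⟩⟩ restricted to the sampled state space
    return all(
        any(spec(x, w, x1, w1) for x1 in VALUES for w1 in VALUES)
        for x in VALUES for w in VALUES
    )
```

On this sample, `conj_spec` fails the test for any negative x, exactly as argued above.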
Special specification notations
Given a logical specification language L, if S, S0, S1, R are specifications in L, as well as boolean expressions
b0, b1, and x and e are, respectively, sequences (of equal length) of variables and expressions correspondingly
taking values of the same type, we define the following special notations for classes of specifications, inspired by
some of Dijkstra’s [Dij75] guarded commands:
Immanence            ok                         ≜  (σ′ = σ)
Parallel Assignment  x := e                     ≜  ok(x/e)
Composition          S ; R                      ≜  ⟨∃ σ′′ :: S(σ′/σ′′) ∧ R(σ/σ′′)⟩
Selection            if b0 → S0 [] b1 → S1 fi   ≜  b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)
In order to specify immanence or ‘no change’, we use ok to denote the identity relation between the initial and
final state: it specifies that the final values of all variables equal the corresponding initial values. It is satisfied by
a machine that does nothing. In the assignment notation, x and e are respectively (type compatible) sequences
of variables and expressions of the same length. For example,
x, y := x + w, x    which is equivalent to    x′ = x + w ∧ y′ = x ∧ w′ = w
specifies that the corresponding final values of x and y should be, respectively, the sum of the initial values of x
and w, and the initial value of x; the value of w should be unchanged.
The if . . . fi and semi-colon notations combine specifications to make a new specification. They apply to all
specifications, not just implementable specifications. They are just logical connectives, like ∧ and ∨. But due to
the requirement that at least one of its conditions must hold, they have the nice property that if their operands
are implementable, so is the result.
The specification if b1 → S1 [] · · · [] bn → Sn fi can be implemented by a computer that behaves according
to either one of Si (i : 1 ≤ i ≤ n), whenever the value of bi is true. The specification S ; R corresponds to the
relational composition of S with R. It can be implemented by a computer that first behaves according to S , then
behaves according to R , with the final values from S serving as initial values for R. It is therefore a sequential
composition.
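On a small finite state space, both connectives can be written down directly as operations on relations. In the sketch below (ours; a single integer variable stands for the whole state), composition quantifies over an intermediate state exactly as in the definition:

```python
# A specification is a predicate spec(x, x1) on the initial value x
# and the final value x1 of the single state variable.
STATES = range(-5, 6)

def compose(S, R):
    # S ; R = ⟨∃ σ'' :: S(σ'/σ'') ∧ R(σ/σ'')⟩ : relational composition
    return lambda x, x1: any(S(x, m) and R(m, x1) for m in STATES)

def select(b0, S0, b1, S1):
    # if b0 → S0 [] b1 → S1 fi = b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)
    return lambda x, x1: ((not (b0(x) or b1(x)))
                          or (b0(x) and S0(x, x1))
                          or (b1(x) and S1(x, x1)))

incr = lambda x, x1: x1 == x + 1   # x := x + 1
zero = lambda x, x1: x1 == 0       # x := 0
prog = compose(incr, zero)         # x := x + 1 ; x := 0
```

Here `prog(3, 0)` holds (the intermediate state 4 witnesses the quantifier) while `prog(3, 4)` does not.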
Refinements
If R, S are specifications, we say that S is refined by R (or that R refines S) and denote it with the expression
S ⊑ R, if every computer behavior satisfying R also satisfies S. Formally, we define it like this:
S ⊑ R ≡ [S ⇐ R]
The brackets notation [P], where P is a predicate, denotes the application of Dijkstra’s everywhere operator
(universal closure) to P. The use of this operator in an expression like [Q ⇒ R(x/e)] avoids having to refer to
the universal quantification on (the states) σ and σ′.
The above refinement simply means finding another specification that is everywhere equal or stronger. In
practice, in order to prove that S is refined by R , due to the (predicate logic) generalization theorem, it is
possible to calculate inside the square brackets and simply prove S ⇐ R.
Here are two examples:
x′ > x  ⊑  (x′ = x + 1 ∧ w′ = w)
(x′ = x + 1 ∧ y′ = y)  ⊑  x := x + 1
In each case, the left hand side is implied by the right hand side for all initial and final values of all variables.
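Refinement, being a universally quantified implication, can also be tested exhaustively on a sample. The following sketch (ours) checks the first example above and confirms that the converse refinement fails:

```python
# Brute-force check of S ⊑ R ≡ [S ⇐ R] over a sampled state space (x, w).
VALUES = range(-4, 5)

def refines(S, R):
    # every (initial, final) pair satisfying R must also satisfy S
    return all(
        S(x, w, x1, w1)
        for x in VALUES for w in VALUES
        for x1 in VALUES for w1 in VALUES
        if R(x, w, x1, w1)
    )

increase  = lambda x, w, x1, w1: x1 > x                    # x' > x
incr_keep = lambda x, w, x1, w1: x1 == x + 1 and w1 == w   # x' = x+1 ∧ w' = w
```

The converse fails because `increase` allows behaviors (e.g. x jumping by 2 while w changes) that `incr_keep` forbids.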
2.2. An algorithmic scheme for Dijkstra’s guarded commands
A program is a specification of computer behavior; it is therefore a predicate in the initial and final state. Not every
specification is a program. A program is an “implemented” specification, one that the computer can execute. To
be so, it must be written in a restricted notation.
With respect to algorithmic or programming languages, we define the following concepts related to functions
and expressions. We will call an expression primitive if it is a term obtained by the composition of functions and
constants predefined in the given programming language. The concept of refinement allows programs to define
new functions:
Definitions (User-defined functions and implemented expressions) In the context of a programming language, an
ordinary (partial, possibly multi-valued) mathematical function f defined on a certain state space is called user-
defined if there is a program P (defined on the same state space) refining an assignment of the form x := f.s,
where s is a sequence holding the arguments (expressions) for an evaluation of f, and x a variable (or sequence of
variables) holding the result (value or sequence of values) of such an evaluation. Recursive user-defined functions are
allowed; that is, program P above can include terms corresponding to evaluations of f. An expression is called
implemented if it is a composition of primitive and user-defined functions admitted by the given programming
language.
We proceed to define an algorithmic language DHL expressive enough to include or define all of Dijkstra’s
guarded commands. We will not make explicit the primitive functions allowed by DHL.
Definition 1 A program in the algorithmic language DHL is defined according to the following expressions:
1. Immanence: ok is a program in DHL ; it is also known as skip.
2. Assignment: If x is any variable (or sequence of variables) and e is an implemented expression (or a sequence
of implemented expressions of equal length and equal corresponding types to x) in DHL, then x := e is a
program in DHL.
3. Composition: If P , Q are programs in DHL then P ; Q is a program in DHL.
4. Selection: If b0 , . . . , bn−1 are boolean expressions implemented in DHL, and P0 , . . . , Pn−1 are programs in
DHL, then if b0 → P0 [] · · · [] bn−1 → Pn−1 fi is also a program in DHL.
5. Specified module: An implementable specification that is refined by a program in DHL is itself a program in
DHL.
As we already explained, in (2) and (4), it is not stated which expressions are implemented in DHL; that set
may vary from one implementation to another. Part (2) includes the case in which the assignment has the form
x := f.s, where f is a user-defined function admitted by DHL. To execute this assignment, we just execute the
program P (in DHL) refining it. The refinement acts as a procedure declaration; x := f.s acts as a procedure
name, and P as the procedure body; in this case, the use of the assignment x := f.s acts as a call. Recursion is
allowed; we may use assignments involving evaluations of f in order to obtain program x := f.s. Part (5) states
that any implementable specification S is a program in DHL if there is a program P in DHL such that S ⇐ P
is a theorem. To execute S, we just execute the program P (in DHL) refining it. Again, the refinement acts as
a procedure declaration; S acts as a procedure name (or module label), and P as the procedure body; in this
case, use of the name of (or label denoting) S acts as a call; since mentions of the name or label for S may occur
within P, recursion is also allowed.
Example 1 (x = 0 ⇒ x′ > 0) ⊑ x := x + 1.
Proof.
(x = 0 ⇒ x′ > 0) ⊑ x := x + 1
≡ definition
[x := x + 1 ⇒ (x = 0 ⇒ x′ > 0)]
As we already pointed out, it is enough to calculate:
x := x + 1 ⇒ (x = 0 ⇒ x′ > 0)
≡ propositional logic
x := x + 1 ∧ x = 0 ⇒ x′ > 0
⇐ definition of “x := x + 1”
x′ = x + 1 ∧ x = 0 ⇒ x′ > 0
≡ arithmetic
true
Here is an example involving a recursive program expressed in DHL. Let x be an integer variable; the speci-
fication x′ = 0 says that the final value of variable x is zero. It becomes a program by refining it, which can be
done and written in many ways. This is one:
Example 2 (A Recursive Program) x′ = 0 ⊑ if x = 0 → ok [] x ≠ 0 → x := x − 1; x′ = 0 fi
In standard predicate notation, this refinement is equivalent to
⟨∀ x, x′ :: x′ = 0 ⇐ ((x = 0 ∧ x′ = x) ∨ (x ≠ 0 ∧ ⟨∃ x′′ :: x′′ = x − 1 ∧ x′ = 0⟩))⟩
which is easily proven. Observe that there is no guarantee of termination (in the case of a negative input) for this
program.
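The refinement of Example 2 transcribes directly into a recursive procedure; the Python sketch below (ours, not DHL) makes the termination caveat concrete, since a negative argument never reaches the guard x = 0:

```python
def to_zero(x):
    # x' = 0  ⊑  if x = 0 → ok [] x ≠ 0 → x := x − 1 ; (x' = 0) fi
    if x == 0:
        return x               # ok: leave the state unchanged
    else:
        return to_zero(x - 1)  # x := x − 1, then recurse on the same spec
```

For x ≥ 0 the call returns 0; for negative x the recursion descends forever (in Python, until the call stack overflows), mirroring the missing termination guarantee.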
3. Hoare and Dijkstra’s theory
If a specification (or program) prg refines the one given in (∗), the expression (x ≥ 0 ⇒ w′ = 0) ⊑ prg coincides
with the following Hoare triple [Hoa83]
x ≥ 0 {prg} w = 0
expressing the conditional correctness of the execution prescribed by prg with respect to precondition x ≥ 0 and
postcondition w = 0.
3.1. Interpreting Hoare triples
In general,
Q {S } R
where S is a specification (or program in DHL), Q is a precondition and R a description of the result of its
execution, may be interpreted as: “If assertion Q is true before initiation of program S (at any state whatso-
ever), then assertion R will be true on its completion (if it does happen)”. Observe that Q {S} R must be
an absolute (or constant) predicate; that is, it is either true in every state or false in every state.
Example 3 The triple x = 0 {x := x + 1} x > 0 is therefore interpreted as: “x := x + 1 must fulfill specification
x = 0 ⇒ x′ > 0”. (See Example 1.)
Generally, Q {S} R translates into (Q ⇒ R′) ⊑ S, where R′ is an expression obtained from R by decorating
each of its variables with an apostrophe.
Notice that the implementability condition only ensures the existence of a final value for each initial one, but
nothing is said about reaching this final value after a finite number of steps. Similarly, the conditions defining
Hoare notation are only demanded if the program execution halts.
Proposition 1 If E is an expression defined in every state satisfying Q, then Q {x := E} R is equivalent to
[Q ⇒ R(x/E)].
Proof. If α stands for the sequence of variables of the associated state space different from x, the following
calculation will do:
Q {x := E} R
≡ interpretation of Hoare triple
(Q ⇒ R′) ⊑ x := E
≡ definition of refinement
[x := E ⇒ (Q ⇒ R′)]
≡ definition of x := E
[ok(x/E) ⇒ (Q ⇒ R′)]
≡ predicate logic (one point rule)
[(Q ⇒ R′)(x′, α′ / E, α)]
≡ neither x′ nor α′ are free in Q
[Q ⇒ R′(x′, α′ / E, α)]
≡ substitution
[Q ⇒ R(x/E)]
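Proposition 1 can be sanity-checked mechanically: for a deterministic assignment, executing x := E and substituting E for x in R give the same verdict on every state. The sample Q, E and R below are ours:

```python
SAMPLE = range(-10, 11)

def triple(Q, E, R):
    # Q {x := E} R, checked by running the assignment on every Q-state
    return all(R(E(x)) for x in SAMPLE if Q(x))

def substituted(Q, E, R):
    # [Q ⇒ R(x/E)], checked pointwise over the same states
    return all((not Q(x)) or R(E(x)) for x in SAMPLE)

Q = lambda x: x == 0   # precondition x = 0
E = lambda x: x + 1    # assignment x := x + 1
R = lambda x: x > 0    # postcondition x > 0
```

Both checks literally evaluate R(E(x)), which is the computational content of the proposition for a one-variable deterministic assignment.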
Proposition 2 If E and F are expressions defined in every state, then the expressions Q {x := E; y := F} R and
[Q ⇒ R(y/F)(x/E)] are equivalent.
Proof. By Proposition 1, and assuming that x := E; y := F and x, y := E, F(x/E) are equivalent specifica-
tions, it is enough to show the following:
Q {x := E ; y := F} R
≡ assumption
Q {x, y := E, F(x/E)} R
≡ Proposition 1
[Q ⇒ R(x, y / E, F(x/E))]
≡ textual substitution property
[Q ⇒ R(y/F)(x/E)]
We now prove our assumption. If γ denotes the sequence of variables of the associated state space different
from x and y, we have
x := E ; y := F
≡ definition of assignment
ok(x/E) ; ok(y/F)
≡ definition of ok and substitution
(x′ = E ∧ y′ = y ∧ γ′ = γ) ; (x′ = x ∧ y′ = F ∧ γ′ = γ)
≡ definition of composition
⟨∃ x0, y0, γ0 :: x0 = E ∧ y0 = y ∧ γ0 = γ ∧ x′ = x0 ∧ y′ = F(x, y, γ / x0, y0, γ0) ∧ γ′ = γ0⟩
≡ Leibniz equality theorems
⟨∃ x0, y0, γ0 :: x0 = E ∧ y0 = y ∧ γ0 = γ ∧ x′ = E ∧ y′ = F(x/E) ∧ γ′ = γ⟩
≡ predicate calculus
⟨∃ x0, y0, γ0 :: x0 = E ∧ y0 = y ∧ γ0 = γ⟩ ∧ x′ = E ∧ y′ = F(x/E) ∧ γ′ = γ
≡ predicate logic (one point rule)
x′ = E ∧ y′ = F(x/E) ∧ γ′ = γ
≡ definition of ok
ok(x, y / E, F(x/E))
≡ definition of assignment
x, y := E, F(x/E)
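Proposition 2's double substitution can be checked on a concrete pair of assignments (ours, not from the paper). With x := x + 1 ; y := 2·x and postcondition R : x = 5 ∧ y = 10, the predicate R(y/F)(x/E) collapses to x = 4:

```python
def run(x, y):
    # x := x + 1 ; y := 2·x  (sequential: F sees the updated x)
    x = x + 1
    y = 2 * x
    return x, y

def R(x, y):
    # postcondition on the final state
    return x == 5 and y == 10

def wp(x, y):
    # R(y / 2·x)(x / x + 1): substitute F for y, then E for x, by hand
    return (x + 1 == 5) and (2 * (x + 1) == 10)
```

On every sampled state, Q {x := E; y := F} R holds exactly when the substituted predicate does.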
Example 4 We show true {if x ≤ y → skip [] x > y → x, y := y, x fi} x ≤ y.
Proof. This statement denotes the refinement
(x′ ≤ y′) ⊑ if x ≤ y → skip [] x > y → x, y := y, x fi
hence, the following calculation should suffice:
if x ≤ y → skip [] x > y → x, y := y, x fi ⇒ x′ ≤ y′
≡ definition of if . . . fi ; trichotomy of ≤
(x ≤ y ∧ ok) ∨ (x > y ∧ x, y := y, x) ⇒ x′ ≤ y′
≡ propositional calculus: case separation
(x ≤ y ∧ ok ⇒ x′ ≤ y′) ∧ (x > y ∧ x, y := y, x ⇒ x′ ≤ y′)
⇐ definitions of ‘ok’ and ‘:=’
(x ≤ y ∧ x′ = x ∧ y′ = y ⇒ x′ ≤ y′) ∧ (x > y ∧ x′ = y ∧ y′ = x ⇒ x′ ≤ y′)
≡ arithmetic
true
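Example 4's program transcribes into an executable sketch (ours), whose postcondition x ≤ y can be checked over a sample of inputs:

```python
def sort2(x, y):
    # if x ≤ y → skip [] x > y → x, y := y, x fi
    if x <= y:
        return x, y     # skip: no change
    else:
        return y, x     # parallel assignment x, y := y, x
```

Every final state satisfies x ≤ y, as the triple with precondition true requires.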
Proposition 1 gives a necessary and sufficient condition for the conditional correctness of an assignment.
Now, for the case of the conditional command, we give a sufficient condition.
[Q ⇒ b0 ∨ b1] ∧ (Q ∧ b0){S0}R ∧ (Q ∧ b1){S1}R
≡ definition of refinement and Hoare triples
[Q ⇒ b0 ∨ b1] ∧ [(Q ∧ b0 ⇒ R′) ⇐ S0] ∧ [(Q ∧ b1 ⇒ R′) ⇐ S1]
≡ propositional logic
[Q ⇒ b0 ∨ b1] ∧ [Q ∧ b0 ∧ S0 ⇒ R′] ∧ [Q ∧ b1 ∧ S1 ⇒ R′]
≡ predicate calculus
[(Q ⇒ b0 ∨ b1) ∧ (Q ∧ ((b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ R′)]
⇒ strengthening of antecedent
[(Q ⇒ b0 ∨ b1) ∧ (Q ∧ (b0 ∨ b1) ∧ ((b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ R′)]
≡ predicate calculus
[(Q ⇒ b0 ∨ b1) ∧ (Q ∧ (b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ R′)]
⇒ propositional logic
[(b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ (Q ⇒ R′)]
≡ definition of if . . . fi
[(Q ⇒ R′) ⇐ if b0 → S0 [] b1 → S1 fi]
≡ refinement notation
(Q ⇒ R′) ⊑ if b0 → S0 [] b1 → S1 fi
≡ interpreting as a Hoare triple
Q {if b0 → S0 [] b1 → S1 fi} R
We have proved the following proposition:
Proposition 3 If b0 , b1 are conditions (boolean expressions) defined in every state satisfying Q, then Q {if b0 →
S0 [] b1 → S1 fi} R, whenever [Q ⇒ b0 ∨ b1 ] ∧ (Q ∧ b0 ){S0 }R ∧ (Q ∧ b1 ){S1 }R.
Termination
So far, we have talked only about the result of a computation, not about how long it takes. Actually, among the
basic expressions defining a program in DHL, there is no problem in practice with the termination of its execu-
tions, except for (see example 2) the ones making recursive calls, since it is not possible to rule out executions
producing infinite chains of such calls.
Consider a partial recursively defined function g on a given state space; let us say
g.σ  =  p.σ,          if a.σ
        h(g, Y.σ),    if b.σ          (0)
where function p and conditions a, b are implementable in DHL and σ symbolizes the input values (initial state)
of g. On initial state σ, conditions a and b correspond respectively to all non-recursive and all recursive cases
of its domain; p.σ (depending on σ) is the final value given by g in the non-recursive cases, and h(g, Y.σ) is
an expression implemented in DHL involving at least one recursive invocation of g on a set of expressions
Y.σ, depending on the initial state σ. The set Y.σ contains all values on which g recurs, including the values
of nested-inside recursive calls of g. This set might be just a singleton; of course, it is empty when condition b
above is identically false as a predicate (b ≡ false).
Definition 2 We call a function g implemented in DHL, well defined on a predicate C (on the associated state
space) when one of the following conditions holds:
1. g is a primitive function of DHL defined on C ,
2. g is a function as described above in (0), with functions p, a, b and h well defined on C, and either condition
b is identically false, or there exists a well founded relation¹ ≺ on the domain of g such that²
(a) C ⇒ dom.g
(b) dom.g ≡ a ∨ b
(c) a ⇒ dom.p
(d) a ∧ b ≡ false
(e) [ b.σ ∧ ζ ∈ Y .σ ⇒ ζ ≺ σ ].
That is, g is defined on every state fulfilling C; a and b are disjoint conditions that jointly cover all
states belonging to the domain of g; furthermore, every evaluation of g on a state σ (for which condition
b holds) recurs on states (in the domain of g) ≺-smaller than σ. We say that an expression is well defined if
it is a composition of well defined functions.
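Condition (e) of Definition 2 lends itself to testing on a finite sample of states. The sketch below (ours) uses the guards and the well founded order that reappear in Example 5:

```python
def a(x): return x >= 0    # non-recursive guard a.σ
def b(x): return x < 0     # recursive guard b.σ
def Y(x): return [x + 1]   # Y.σ: the states the evaluation recurs on

def precedes(z, x):
    # z ≺ x  ≡  (x < z < 0) ∨ (z ≥ 0 ∧ x < 0): a well founded order on Z
    return (x < z < 0) or (z >= 0 and x < 0)

def condition_e(sample):
    # (e): b.σ ∧ ζ ∈ Y.σ  ⇒  ζ ≺ σ, checked over the sampled states
    return all(precedes(z, x) for x in sample if b(x) for z in Y(x))
```

The same loop also lets one confirm that a and b are disjoint and jointly exhaustive, conditions (b) and (d).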
Given a program S in DHL and a predicate C , we say that S halts on (initial condition) C , if every execution
of S initiating in a state satisfying C , must terminate. The formal definition of this concept follows.
Definition 3 A program S in DHL halts on condition C if one of the following cases holds:
(i) S ≡ skip,
(ii) S ≡ (x := e), where x is any variable (or sequence of variables) and e is an implemented and well defined
expression on C (or a sequence of such expressions, type compatible and of equal length to x),
(iii) S ≡ if b0 → P0 [] . . . [] bn−1 → Pn−1 fi, where b0, . . . , bn−1 are implemented and well defined boolean expres-
sions on C, condition C implies that at least one of the bi’s holds (0 ≤ i < n), and programs P0, . . . , Pn−1
halt on C,
(iv) S ≡ P ; Q, and programs P and Q halt, respectively, on C and on C ; P,
¹ A pair (A, ≺) is called a well founded set if ≺ is a well founded relation on the set A. A binary relation ≺ on a set A is well founded if
every nonempty subset of A has a ≺-minimal element.
² We will use the unary predicate dom.f to represent, in predicate form, state membership in the domain of a function f.
(v) S is both logically equivalent to x := g.σ (where g is a function given by scheme (0) and well defined on
C), and also refined by the program if a.σ → P [] b.σ → H(S) fi, where P is a program refining x := p.σ,
and H(S) a program refining x := h(g, Y.σ).
In order to ensure the termination of every possible execution of a recursive program prog in DHL (which basi-
cally would be given in terms of an assignment of the form x := g.σ such as in Definition 3(ii)), we will associate a
function (called ‘size’ for the time being) with it. Function ‘size’ would be defined on the state space associated
with prog, taking values in a well founded set, and its value in any state of an execution of prog would decrease, in
terms of the associated well founded relation, every time a recursive call is made.
Total correctness
We say that a program (or specification) S is totally correct with respect to a precondition Q and a postcondition
R, written
{Q} S {R},
if, besides being conditionally correct, every execution initiating in a state satisfying Q halts.
Remark Definition 3 guarantees that Propositions 1, 2 and 3 continue being valid if we replace conditional cor-
rectness by total correctness in their statements about Hoare triples. For instance, the statement of proposition
3 becomes
{Q} if b0 → S0 [] b1 → S1 fi {R} whenever [Q ⇒ b0 ∨ b1] ∧ {Q ∧ b0} S0 {R} ∧ {Q ∧ b1} S1 {R}.
3.2. Verifying general recursive programs
As pointed out before, a specification S with S ≡ (x := g.σ), where g is a well defined function given by scheme (0),
becomes a program in DHL, and g a user-defined function, through a refinement such as
S ⊑ if a.σ → P [] b.σ → H(S) fi          (1)
where P is a program refining x := p.σ, and H(S) a program refining x := h(g, Y.σ).
The next proposition tacitly makes use of an inductive theorem on the correctness of general recursive pro-
grams that we developed in [Boh07].
Proposition 4 Consider a program S ≡ (x := g.σ) with g a function well defined by a user of DHL through
scheme (0) and refinement (1). Then, in order to verify
{Q} x := g.σ {R},
since S’s termination depends essentially on g being well defined, it is enough
1. to prove [Q ⇒ R(x/g.σ)] by Proposition 1, which can be done by structural induction on the definition of
g; or,
2. applying Proposition 3 and (1), to show by structural induction that
(i) [Q ⇒ a ∨ b]
(ii) {Q ∧ a} P {R} (correctness of basic cases)
(iii) {Q ∧ b} H(S) {R} (correctness of recursive cases), using as an inductive hypothesis the correctness of
all recursive calls, that is,
{Q(σ/ζ)} x := g.ζ {R(σ/ζ)} for all ζ ∈ Y.σ.
If the (recursive) mathematical definition of function g is directly available, the first option is easier and
more practical; otherwise, if one has only an indirect definition of g, given through an implementation like the
one in (1), the second choice is the only viable option.
The next example informally introduces syntax for declaring user-defined functions in DHL.
Example 5 Consider the recursive program pf given by the following code:
|[ fun pf (x : int) ret r : int
   if x ≥ 0 → r := x
   [] x < 0 → x := x + 1; r := pf (x)
   fi
   {R : (x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}
ret r ]|
This program implements the recursive function f : Z → Z defined as
f.x  =  x,           if x ≥ 0
        f (x + 1),   if x < 0
f is well defined since its defining cases are disjoint and cover all possibilities for an integer x; besides this,
x + 1 ≺ x for x < 0, if the relation x ≺ y ≡ (y < x < 0) ∨ (x ≥ 0 ∧ y < 0) is defined for integers x and y. Relation
≺ is well founded. The function ‘size’ mentioned previously to ensure termination is, in this case, the identity
function on Z.
It is easy to check that specification r := f.x is refined by program pf. We verify {true} r := f.x {R}, i.e.,
{true} r := f.x {(x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}.
To do it, we show by induction on the definition of f,
((x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0))(r/f.x)
that is,
(x ≥ 0 ∧ f.x = x) ∨ (x < 0 ∧ f.x = 0)
let us see,
• Case x ≥ 0 :
x ≥ 0 ∧ f.x = x
≡ hypothesis: x ≥ 0 ; definition of f
x = x
• Case x < 0 :
x < 0 ∧ f.x = 0
⇐ definition of f
(x = −1 ∧ f (x + 1) = 0) ∨ (x < −1 ∧ f (x + 1) = 0)
⇐ definition of f
(x + 1 ≥ 0 ∧ f (x + 1) = x + 1) ∨ (x + 1 < 0 ∧ f (x + 1) = 0)
≡ substitution
((x ≥ 0 ∧ f.x = x) ∨ (x < 0 ∧ f.x = 0))(x/x + 1)
≡ induction hypothesis: x + 1 ≺ x
true
Option 2 of Proposition 4 will allow us to use program pf, implementing function f, to prove {true} r := f.x {R}.
Thus, it suffices to show
(i) [x ≥ 0 ∨ x < 0]
(ii) {x ≥ 0} r := x {(x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}
(iii) {x < 0} x := x + 1; r := pf (x) {(x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}
as induction hypothesis, we use:
{true} r := pf (x + 1) {(x + 1 ≥ 0 ∧ r = x + 1) ∨ (x + 1 < 0 ∧ r = 0)}
Proof. Firstly, (i) is a simple theorem of arithmetic. To prove (ii), it will do to show x ≥ 0 ⇒ (x ≥ 0 ∧ x = x), a
trivial result. The proof of (iii) reduces to proving x < 0 ⇒ pf (x + 1) = 0, under the hypothesis
(x + 1 ≥ 0 ∧ pf (x + 1) = x + 1) ∨ (x + 1 < 0 ∧ pf (x + 1) = 0)
which is evident if x < 0 is separated into two cases: x = −1 and x < −1.
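Example 5's code transcribes almost literally into Python (a sketch of ours); checking R over a range of inputs mirrors the triple just verified:

```python
def pf(x):
    # |[ fun pf (x : int) ret r : int ... ]| from Example 5
    if x >= 0:
        r = x
    else:
        x = x + 1      # x := x + 1 (local to this call)
        r = pf(x)      # r := pf (x)
    return r

def R(x, r):
    # postcondition: (x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)
    return (x >= 0 and r == x) or (x < 0 and r == 0)
```

Note that R is checked against the caller's initial x, as the triple demands, even though the recursive case overwrites its local copy of x.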
3.3. Iterations
In the context of this theory, given a boolean expression b and a program Q, the iterative program:
do b → Q od
is seen as the implementation (cic, for short) of the evaluation of a recursively defined partial function f.
In order to explain this calculation, suppose for a moment that Q is a deterministic program; hence, to every
initial state σ of a terminating execution of Q, we can associate its corresponding final value σ∗. In this way, it
is possible to regard program Q as a transformation function σ → σ∗.
Therefore, program cic is realized as follows:
cic ⊑ if ¬b → ok [] b → Q ; cic fi,          (2)
corresponding to an implementation of the specification σ := f.σ, where f is a partial function defined on its
domain by:
f.σ  =  σ,       if ¬b.σ
        f.σ∗,    if b.σ          (3)
That is,
σ := f.σ ⊑ cic.          (4)
As we have pointed out before, in order for f to be well defined, the existence of a well founded relation ≺ on its
domain is needed, in such a way that for every state σ satisfying property b, necessarily σ∗ ≺ σ. With this last
condition in mind, it is possible to suppress the assumption about program Q being deterministic if we think of
a fixed execution of cic, since it will be impossible for it to access the same state more than once. In other words,
each execution of cic determines its corresponding (well defined) function f.
The fact that cic implements specification σ := f.σ causes
{P} cic {R}
to be translated into
(P ⇒ R(σ/f.σ)) ⊑ cic          (5)
that is to say, σ′, the corresponding final state, coincides with f.σ. Here, σ represents the complete collection
of variables shaping an initial state of any execution of cic.
The following proposition gives sufficient conditions to ensure total correctness of an iteration.
Proposition 5 {P} do b → Q od {R} whenever
(i) [P ∧ ¬b ⇒ R]
(ii) {P ∧ b} Q {P} and
(iii) there exists a well founded relation ≺ on the state space, such that
{P ∧ b ∧ σ = K} Q {σ ≺ K}, where K represents a constant state value.
Proof. Given a fixed execution of cic ≡ do b → Q od, we have seen that there is a partial function f recursively
defined according to (3), in such a way that σ := f.σ ⊑ cic; besides this, cic answers to the description given in
(2), and Q corresponds to the assignment σ := σ∗.
As a consequence of what we have just said, condition (iii) translates into the condition
{P ∧ b ∧ σ = K} σ := σ∗ {σ ≺ K}, but then,
{P ∧ b ∧ σ = K} σ := σ∗ {σ ≺ K}
≡ Proposition 1
P ∧ b ∧ σ = K ⇒ σ∗ ≺ K
≡ Leibniz’s principle
P ∧ b ∧ σ = K ⇒ σ∗ ≺ σ,
this ensures that f is a well defined function whose domain is the set of states fulfilling P and, due to (4) and (3),
we have that every execution of cic starting in a state satisfying P halts.
It only remains to prove the conditional correctness of cic. To do this we define R′ ≡ R(σ/f.σ); due to the
argument given in (5), the following calculation suffices:
(P ⇒ R′) ⇐ (if ¬b → ok [] b → Q; cic fi)
≡ definition of selection ; propositional calculus
(P ⇒ R′) ⇐ (¬b ∧ ok) ∨ (b ∧ (Q; cic))
≡ propositional calculus
(P ∧ ¬b ∧ ok ⇒ R′) ∧ (P ∧ b ∧ (Q ; cic) ⇒ R′)
≡ definition of ok ; Leibniz’s principle (¬b ⇒ σ = f.σ) ; definition of composition
(P ∧ ¬b ⇒ R) ∧ (P ∧ b ∧ Q ∧ cic(σ/σ∗) ⇒ R′)
⇐ (i) and (ii) ; (ii) equivalent to (P ∧ b ⇒ P(σ/σ∗)) ⊑ Q
P ∧ b ∧ (P ∧ b ⇒ P(σ/σ∗)) ∧ cic(σ/σ∗) ⇒ R′
⇐ induction hypothesis: (P ⇒ R′)(σ/σ∗) ⊑ cic(σ/σ∗)
P ∧ b ∧ P(σ/σ∗) ∧ (P(σ/σ∗) ⇒ R′(σ/σ∗)) ⇒ R′
⇐ propositional calculus
b ∧ R′(σ/σ∗) ⇒ R′
≡ R′ ≡ R(σ/f.σ)
b ∧ R(σ/f.σ)(σ/σ∗) ⇒ R(σ/f.σ)
≡ substitution
b ∧ R(σ/f.σ∗) ⇒ R(σ/f.σ)
≡ definition of f : b ⇒ f.σ = f.σ∗
true
As we have just seen, in this theory, iteration is defined as a particular type of recursion, tail recursion³ to be
precise.
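The proof obligations (i)–(iii) of Proposition 5 lend themselves to a quick mechanical sanity check on concrete loops. As an illustration only (the names N, b, body, P, R and the brute-force state sampling are ours, not part of the theory), the following Python sketch tests the three conditions for the summing loop do i ≤ N → s, i := s + i, i + 1 od:

```python
# Proposition 5's conditions checked by brute force on the loop
#   do i <= N -> s, i := s + i, i + 1 od
# Invariant P: 0 <= i <= N + 1 and s == 0 + 1 + ... + (i - 1)
# Postcondition R: s == N * (N + 1) // 2
# Variant: N + 1 - i, a nonnegative integer strictly decreased by the body
# (a well founded order). Illustrative names; the paper argues symbolically.

N = 10

def b(i, s):                 # loop guard
    return i <= N

def body(i, s):              # loop body Q: simultaneous assignment s, i := s + i, i + 1
    return i + 1, s + i

def P(i, s):                 # proposed invariant
    return 0 <= i <= N + 1 and s == sum(range(i))

def R(i, s):                 # postcondition
    return s == N * (N + 1) // 2

def variant(i, s):           # bound function for condition (iii)
    return N + 1 - i

states = [(i, s) for i in range(0, N + 2) for s in range(0, 60)]

# (i)  [P and not b  implies  R]
assert all(R(i, s) for (i, s) in states if P(i, s) and not b(i, s))

# (ii) {P and b} Q {P}
assert all(P(*body(i, s)) for (i, s) in states if P(i, s) and b(i, s))

# (iii) the variant stays nonnegative and strictly decreases at each step
assert all(variant(i, s) >= 0 and variant(*body(i, s)) < variant(i, s)
           for (i, s) in states if P(i, s) and b(i, s))
```

Of course, such testing only samples states; the proposition itself is discharged by the symbolic proof above.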
To conclude this section, we show one last example illustrating that our verifying method encompasses not
just linear recursion (at most one recursive call per case), but any general recursive scheme.
Example 6 (McCarthy’s 91 function) This function is defined for integer x by

    g.x  =  x − 10,         if x > 100,
    g.x  =  g(g(x +11)),    if x ≤ 100.        (6)
Except for condition (2.e) in Definition 2, it is easy to check that g fulfills all conditions for being well defined. In
order to prove this remaining condition, we define the following partial order (denoted ‘≺’) on ℤ:
x ≺ y ≡ y ≤ 100 ∧ y < x, for all integers x and y.
Observe then, that ≺ is a well founded relation such that
if x > 100 then x is ≺-minimal,
x +11 ≺ x if x ≤ 100, and
g(x +11) ≺ x if x ≤ 100.
This last fact may be proven by ordinary mathematical induction via its equivalence with the proposition: g(x +11) > x
if x ≤ 100.
Now, function g and specification r := g.x become, respectively, a user defined function and a program in
DHL, through the refinement
r := g.x ⇐ if x > 100 → r := x −10 [] x ≤ 100 → r := g(g(x +11)) fi;
besides this, the fact that g is well defined guarantees the termination of program r := g.x for any initial state in
which x takes an integer value.
We want to verify the following correctness statement:
{true} r := g.x {r = f.x}
where f.x = if x > 101 → x −10 [] x ≤ 101 → 91 fi.
³ Tail recursion is a recursive scheme having just one direct recursive call (such as the one in (3)).
13. An elementary and unified approach to program correctness
Observe that
{true} r := g.x {r = f.x}
⇐    { option (a) in Proposition 4; g well defined }
[g.x = f.x];
thus, it is sufficient to prove the equality of f and g as functions defined on ℤ. Since g.101 = 101−10 = 91 = f.101,
by the definitions of f and g, it only remains to show that g.x = f.x for x ≤ 100.
g.x
=    { definition of g; x ≤ 100 }
g(g(x +11))
=    { x +11 ≺ x; inductive hypothesis }
g(f (x +11))
=    { definition of f; x ≤ 100 }
g(x +1)  if 100 < x +11 ≤ 111,
g(91)    if x < 90
=    { x < 90 ⇒ 91 ≺ x; x ≤ 100 ⇒ x +1 ≺ x; inductive hypothesis }
f (x +1)  if 90 < x +1 ≤ 101,
f (91)    if x < 90
=    { definition of f }
91
=    { definition of f; x ≤ 100 }
f.x
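The equality g = f established above is easy to corroborate numerically. The following Python sketch is an illustration of ours (the symbolic proof, not testing, is what covers all of ℤ); it checks g.x = f.x and the key inequality g(x +11) > x on a finite sample:

```python
# Executable check of Example 6: McCarthy's 91 function g against its
# closed form f, over a finite sample of integers.

def g(x):
    # the recursive definition (6)
    return x - 10 if x > 100 else g(g(x + 11))

def f(x):
    # the closed form: x - 10 above 101, constantly 91 at or below 101
    return x - 10 if x > 101 else 91

# g and f agree on the sampled range
assert all(g(x) == f(x) for x in range(-200, 201))

# the well-foundedness argument rests on g(x + 11) > x for x <= 100
assert all(g(x + 11) > x for x in range(-200, 101))
```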
4. Program derivation in DHL
For the practical aspects of program derivation in DHL, we can take advantage of both the methods of Dijkstra’s
school of derivation [DF84, Kal90, Bac03] and Hehner’s specified blocks approach [Heh05] to obtain a smooth
and unified derivation style based on the calculative approach to formal reasoning [Dij94, DS90, GS93, Heh04a,
Heh07].
The following example shows a derivation scheme to code a family of non-tail linear recursive functions in
terms of tail recursive (or do-loop) programs.
Example 7 (A Derivation Scheme) Suppose we want to design a program to calculate, given data N, the value
F.N, where F is a well defined function given by the following recursive definition:

    F.x  =  m.x,              if b.x,
    F.x  =  h.x ⊕ F (g.x),    if ¬b.x,        (7)

where functions m, h and g, condition b, and a binary, associative and commutative operation ⊕ with identity
element e are implementable in DHL, and there is also a well founded relation ≺ defined on the domain of F
such that
1. x is ≺-minimal ⇒ b.x, and
2. ¬b.x ⇒ g.x ≺ x.
We may specify this problem (using the label Spc) as follows:
Spc : r = F.N    or equivalently,    {true} Spc {r = F.N}
Inspired by the ‘tail invariant’ method [Kal90] and the shape of (7), we propose to refine Spc in terms of sequencing
two new specifications, Inic and Conc:
Spc ⇐ Inic ; Conc
where Conc : r = r0 ⊕ F.x0
and Inic : r0 = e ∧ x0 = N.
We decorate with subindex 0 those values of variables in the final state of Inic and the initial state of Conc.
Clearly, Inic ∧ Conc ⇒ Spc.
We now proceed to analyze the two cases for the value of condition b in (7):
For the case b.x0 we have
Conc ∧ b.x0
≡    { definition }
r = r0 ⊕ F.x0 ∧ b.x0
⇒    { definition of F }
r = r0 ⊕ m.x0
≡    { let r1 = r0 ⊕ m.x0 }
r = r1.
Suppose now ¬b.x0 :
Conc
≡    { definition }
r = r0 ⊕ F.x0
≡    { definition of F and assumption }
r = (r0 ⊕ h.x0 ) ⊕ F (g.x0 )
≡    { let r1 = r0 ⊕ h.x0 and x1 = g.x0 }
r = r1 ⊕ F.x1
≡    { notation }
Conc1
Conc1 is obtained from Conc by redecorating its free variables having subindex 0 with subindex 1.
We can easily refine Inic as follows:
Inic ⇐ x, r := N, e
The previous calculations allow us to refine Conc like this:
Conc ⇐ if b.x → r := r ⊕ m.x
        [] ¬b.x → x, r := g.x, r ⊕ h.x ; Conc
        fi
which in terms of a do–od cycle is equivalent to
Conc ⇐ do ¬b.x → x, r := g.x, r ⊕ h.x od
        ; r := r ⊕ m.x
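The derived program can be exercised on a concrete instance of (7). Taking, as our own illustrative choice (not the paper's), b.x ≡ x = 0, m.x = 1, h.x = x, g.x = x − 1, ⊕ as multiplication and e = 1 (so that F.N = N!), the refinement of Spc translates into the following Python sketch:

```python
# Example 7's scheme instantiated with factorial: F.N = N!
# b.x: x == 0   m.x = 1   h.x = x   g.x = x - 1   ⊕ = *   e = 1

def F(x):
    # the recursive definition (7) for this instance
    return 1 if x == 0 else x * F(x - 1)

def derived(N):
    # Inic ; do ¬b.x → x, r := g.x, r ⊕ h.x od ; r := r ⊕ m.x
    x, r = N, 1                  # Inic:  x, r := N, e
    while x != 0:                # do ¬b.x →
        x, r = x - 1, r * x      #   x, r := g.x, r ⊕ h.x  (simultaneous assignment)
    return r * 1                 # od ; r := r ⊕ m.x

# the tail-recursive (do-loop) program agrees with the recursion (7)
assert all(derived(n) == F(n) for n in range(12))
```

Note that Python's tuple assignment `x, r = x - 1, r * x` evaluates both right-hand sides in the initial state, matching the simultaneous assignment of the guarded command.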
5. Previous and related work
The idea of describing programs and algorithms with boolean expressions (initially called predicate assertions),
and proving properties about programs originated with the seminal work of R.W. Floyd and C.A.R. Hoare. In his
paper [Flo67], Floyd attached logical assertions to the arcs of flowcharts, the common early way of expressing
algorithms through a graphlike notation, in order to reason about program correctness with respect to specifi-
cations. Hoare was the first to explore the logical foundations of computer programming [Hoa69] by proposing
sets of axioms and rules of inference especially devised to prove properties of computer programs.
Following Floyd’s contribution, Z. Manna [Man74, Man80] applied this new concept of program correctness
and logical relations between program flowcharts and their specifications mainly to the problem of program
verification, which is concerned with proving (or disproving) the correctness of intended algorithms with respect to a
formal specification.
E.W. Dijkstra [Dij68, Dij75] subsequently extended and refined Hoare’s proposal through the concept of
‘predicate transformers’. Based on this theory, he developed a formal calculus [Dij76] for the derivation of pro-
grams written in a very simple language that gave linear form (in contrast with flowcharts) to non-deterministic
sequential programs by statements encapsulating ‘conditional branching’ which he expressed through his so called
guarded commands. This calculus together with his ‘calculative style’ of formally proving theorems [DS90, GS93]
rendered an elegant and practical program derivation methodology that attracted many adherents and practitio-
ners, among them W. Feijen, A. J. M. van Gasteren [vG90, FvG96], David Gries [Gri81], R. Backhouse [Bac03],
A. Bijlsma [vGB98], J. L. A. van de Snepscheut [vdS93], A. J. Martin, M. Rem [MR84], A. Kaldewaij [Kal90],
M. Fokkinga [BF01], and D. Michaelis [BM06].
Soon after Dijkstra’s publication of [Dij76], Eric Hehner proposed to “stop thinking of programs as mere text,
and start thinking of them as mathematical expressions in their own right. . . . like any mathematical expression,
a program can stand for its meaning all by itself” [Heh76, Heh06]. Although Niklaus Wirth [Wir71] was the
first to propose refining programs from specifications as an orderly way of designing them, a formal approach
to stepwise refinement for program construction from specifications appears for the first time, in the form of
two ‘schools of program refinement’ that we could respectively name as the predicative and the transformative
approach. Formal program refinement is an approach that views programming as a process of transforming a
specification into a program according to mathematical laws ensuring that correctness is preserved at each of its
steps. The transformative approach to formal program refinement began with Back’s [Bac78, Bac80] application
of Hehner’s proposal to develop his “Refinement Calculus”, which enlarges Dijkstra’s notation of guarded
commands, generalising his concept of ‘predicate transformers’ to admit specifications. C. Morgan [Mor90a] and
J. M. Morris [Mor90b] independently made further contributions to the development of this approach to program
refinement.
What we call predicative approach to formal program refinement corresponds to the use by Hehner and Hoare
of the phrase “programs are predicates” as a motto [Hoa84, Hoa92, Heh89]. Hehner’s ideas about thinking of
programs as logical formulas inspired two parallel but very similar theories of programming: Predicative pro-
gramming (Hehner’s version of the ‘programs are predicates’ theory), and Unifying theories of programming due
to Hoare and He Jifeng.
Predicative programming [Heh04b] is a very simple and practical theory based on first order logic that helps
with the practical aspects of program specification, design and correctness. Its basic assumptions avoid many
complications that come from other approaches based on more complex logics.
Hoare’s version of the “programs are predicates” theory of programming [Hoa84, HJ87] was developed in
collaboration with He Jifeng and based on Hehner’s approach [Heh84], in the context of a wide-ranging
scientific theory: Unifying theories of programming (UTP for short) [HJ98] which shows how denotational semantics,
operational semantics and algebraic semantics can be combined in a unified framework for the formal spec-
ification, design and implementation of programs and computer systems. This framework, aimed to expose
the mathematical laws underlying a general theory of programming, is based on the calculus of Tarski’s the-
ory of relations enriched with his fixed point theory and applied to Dijkstra’s non-deterministic programming
language.
Using the fixed point theory of the UTP framework, we proved [Boh07] that simple regularity conditions on
recursively defined functions allow one to prove correctness of general recursive programs by induction on the
values of their domain (applying induction hypotheses exactly on the values where the functions recur). However,
this kind of structural induction (on syntax) is a common metalogical technique in first order logic that fits well
with predicative programming theory.
More generally, predicative programming is an ample framework for expressing programming theories that
favors the use of specifications rather than assertions (logical formulas that are intended to be true whenever
execution passes the point in the program where they are located). In fact, in an analogous way as we expressed
Dijkstra’s programming formalism, it would be possible to embed Manna’s correctness theory for program
flowcharts [Man80] in the predicative programming theory [Heh05] in terms of ‘guarded assignments’ and unstructured
goto’s (representing the arcs of the flowchart) joining cut points (the nodes of the flowchart) and, similarly, prove
its corresponding correctness theorems.
6. Conclusions
We have shown that it is possible to develop a simple and practical theory of sequential imperative program
correctness unifying both iterative and recursive commands. Our assertion about the simplicity of this theory
is justified by the fact that it is expressed in first order logic in contrast with the usual approaches requiring
higher order logic. Actually, structural induction, usual in the study of first order logic, is the only metalogical
device we use. The proof power of Dijkstra’s calculative logic allows us to call this approach practical. It remains
for future work to compare the proposed methodology with existing software-based program verification
methods.
Acknowledgements
The author thanks Eric Hehner, Lex Bijlsma, and the anonymous referee for their suggestions to improve the
presentation of this paper. He also thanks Jorge Villalobos for his advice about writing and stylistic matters
concerning this note.
References
[Bac78] Back RJR (1978) On the Correctness of Refinement Steps in Program Development. PhD thesis, University of Helsinki. Also
available as report A-1978-5
[Bac80] Back RJR (1980) Correctness preserving program refinements: proof theory and applications, volume 131 of Mathematical
Center Tracts. Mathematical Centre, Amsterdam
[Bac03] Backhouse R (2003) Program Construction: Calculating Implementations from Specifications. Wiley, New York
[BF01] Backhouse R, Fokkinga M (2001) The associativity of equivalence and the towers of Hanoi problem. Inf Process Lett
77(2–4):71–76
[BM06] Backhouse R, Michaelis D (2006) Exercises in quantifier manipulation. In: Uustalu T (ed) MPC, volume 4014 of Lecture
Notes in Computer Science, pp 69–81. Springer, Berlin
[Boh07] Bohórquez JA (2007) An inductive theorem on the correctness of general recursive programs. Logic Journal of the IGPL
15(5–6):373–399
[Boh08] Bohórquez JA (2008) Intuitionistic logic according to Dijkstra’s calculus of equational deduction. Notre Dame J Form Log
49(4):361–384
[DF84] Dijkstra EW, Feijen WHJ (1984) Een methode van programmeren. Academic Service, Den Haag. English translation: A
Method of Programming. Addison-Wesley, Reading (1988)
[Dij68] Dijkstra EW (1968) Go to statement considered harmful. Commun ACM 11(3):147–148
[Dij75] Dijkstra EW (1975) Guarded commands, nondeterminacy and formal derivation of programs. Commun ACM 18(8):453–457
[Dij76] Dijkstra EW (1976) A discipline of programming. Prentice-Hall Inc., Englewood Cliffs. With a foreword by C.A.R. Hoare,
Prentice-Hall Series in Automatic Computation
[Dij94] Dijkstra EW (1994) How computing science created a new mathematical style. EWD 1073 in The writings of Edsger
W. Dijkstra, 2000. http://www.cs.utexas.edu/users/EWD
[DS90] Dijkstra EW, Scholten CS (1990) Predicate calculus and program semantics. Springer, Berlin
[Dro82] Dromey RG (1982) How to solve it by computer. Prentice Hall, Englewood Cliffs
[Flo67] Floyd RW (1967) Assigning meanings to programs. In: Proceedings of the symposium on applied mathematics, American
Mathematical Society XIX:19–32
[FvG96] Feijen WHJ, van Gasteren AJM (1996) Programming, proving, and calculation. In: Neville Dean C, Hinchey MG (eds)
Teaching and learning formal methods. Academic Press, New York
[Gri81] Gries D (1981) The science of programming. Springer, Berlin
[GS93] Gries D, Schneider FB (1993) A logical approach to discrete math. Texts and Monographs in Computer Science. Springer,
Berlin
[Heh76] Hehner ECR (1976) DO considered OD: a contribution to the programming calculus. Technical Report CSRG-75, University
of Toronto, Computer Systems Research Group, Toronto
[Heh84] Hehner ECR (1984) Predicative programming. I, II. Commun ACM 27(2):134–143, 144–151
[Heh89] Hehner ECR (1989) Termination is timing. In: MPC: International conference on mathematics of program construction.
LNCS, Springer, Berlin
[Heh90] Hehner ECR (1990) A practical theory of programming. Sci Comput Program 14(2–3):133–158
[Heh04a] Hehner ECR (2004) From boolean algebra to unified algebra. MATHINT: The Mathematical Intelligencer 26
[Heh04b] Hehner ECR (2004) A practical theory of programming, 2nd edn. Springer, New York
[Heh05] Hehner ECR (2005) Specified blocks. In: Meyer B, Woodcock J (eds) VSTTE, volume 4171 of Lecture Notes in Computer
Science, pp 384–391. Springer, Berlin
[Heh06] Hehner ECR (2006) Retrospective and prospective for unifying theories of programming. In: Dunne S, Stoddart B (eds) UTP,
volume 4010 of Lecture Notes in Computer Science, pp 1–17. Springer, Berlin
[Heh07] Hehner ECR (2007) Unified algebra. Int J Math Sci (WASET) 1(1):20–37 (electronic)
[HJ87] Hoare CAR, Jifeng H (1987) The weakest prespecification. Inf Process Lett 24(2):127–132
[HJ98] Hoare CAR, Jifeng H (1998) Unifying theories of programming. Prentice Hall, London
[Hoa69] Hoare CAR (1969) An axiomatic basis for computer programming. Commun Assoc Comput Mach 12(10):576–583
[Hoa83] Hoare CAR (1983) An axiomatic basis for computer programming (reprint). Commun ACM 26(1):53–56
[Hoa84] Hoare CAR (1984) Programs are predicates. Philos Trans Roy Soc Lond Ser A 312(1522):475–489
[Hoa92] Hoare CAR (1992) Programs are predicates. In: Proceedings of the international conference on fifth generation computer
systems, pp 211–218, ICOT, Japan, 1992. Association for Computing Machinery
[Kal90] Kaldewaij A (1990) Programming: the derivation of algorithms. International Series in Computer Science. Prentice-Hall,
Englewood Cliffs
[Lif01] Lifschitz V (2001) On calculational proofs. Ann Pure Appl Logic 113(1–3):207–224
[Man74] Manna Z (1974) Mathematical theory of computation. McGraw-Hill, New York
[Man80] Manna Z (1980) Lectures on the logic of computer programming. Philadelphia, PA. With contributions by N. Dershowitz
and R. Waldinger
[Mor90a] Morgan C (1990) Programming from specifications. Prentice Hall, Englewood Cliffs
[Mor90b] Morris JM (1990) Programs from specifications. In: Dijkstra EW (ed) Formal development of programs and proofs.
Addison-Wesley, Reading
[MR84] Martin AJ, Rem M (1984) A presentation of the fibonacci algorithm. IPL: Inf Process Lett 19
[vdS93] van de Snepscheut Jan LA (1993) What computing is all about. Texts and monographs in computer science. Springer, New-York
[vG90] van Gasteren AJM (1990) On the shape of mathematical arguments, volume 445 of Lecture Notes in Computer Science.
Springer, Berlin
[vGB98] van Gasteren AJM, Bijlsma A (1998) An extension of the program derivation format. In: Gries D, de Roever WP (eds)
PROCOMET, volume 125 of IFIP conference proceedings, pp 167–185. Chapman & Hall, London
[Wir71] Wirth N (1971) Program development by stepwise refinement. Commun ACM 14:221–227
Received 25 July 2007
Accepted in revised form 3 October 2009 by He Jifeng and Jim Woodcock