An elementary and unified approach to program correctness

Jaime A. Bohórquez V.
Escuela Colombiana de Ingeniería, AK 45 # 205-59, Bogotá, Colombia. E-mail: jaime.bohorquez@escuelaing.edu.co

Formal Aspects of Computing, BCS © 2009. DOI 10.1007/s00165-009-0137-4

Abstract. We present, through the algorithmic language DHL (Dijkstra-Hehner language), a practical approach to a simple first order theory based on calculational logic, unifying Hoare and Dijkstra's iterative style of programming with Hehner's recursive predicative programming theory, getting the "best of the two worlds" and without having to recur in any way to higher-order approaches such as predicate transformers, Hoare logic, fixed-point or relational theory.

Keywords: Theory of programming, Programming methodology, Program correctness, Program verification, Algorithms, Recursion

1. Introduction

Experienced programmers [Dij76, DF84, Gri81, Heh04b, Heh05, Bac03, Kal90, Mor90a, Dro82] have taught us that rather than trying to verify the correctness of an arbitrary program once it has been written, it is easier and more effective to employ a methodology that makes the correctness argument a natural complement of a program; that is, the program and its proof of correctness develop as a whole.

In fact, practical logic and algebraic calculi [DS90, Heh07, Lif01, Boh08] have been developed to ease formal symbolization and reasoning about program specification and derivation (the accepted term for formal program design by refinements from specifications). However, on the one hand, the effective methods of Dijkstra's school of program derivation to obtain 'correct by construction' programs from their specifications do not include the treatment of recursive programs; and on the other hand, Hehner's version of the 'programs are predicates' theory of programming [Heh84, Heh90, Heh04b, Hoa84, HJ87] is based on "recursive refinement as a way of composing programs, and a different way of generating [...] looping constructs, including general recursion" [Heh06, Heh76].

Based on these considerations, and following the lead of Hoare and He Jifeng [HJ98] in their proposal of unifying theories of programming, our aim in this paper is to unify Hehner's elegant, first-order approach to specifying and correctly designing recursive programs, known as 'predicative programming' [Heh04b, Heh90, Heh84], with Edsger Dijkstra's fine and elaborate methodology [Hoa83, Dij75, Dij76] for verifying and deriving sequential iterative programs from their specifications. Based on first order calculational predicate logic, we present a very simple and logically elementary programming theory combining these two complementary approaches.

Correspondence and offprint requests to: J. A. Bohórquez V, E-mail: jaime.bohorquez@escuelaing.edu.co
More precisely, we extend Dijkstra's guarded command language [Dij75] to allow recursively defined functions in the context of Hehner's predicative programming theory [Heh84] through DHL (the Dijkstra-Hehner algorithmic language), define the concept of Hoare triples, give correctness conditions for general recursive programs and prove, as theorems of this theory, the correctness axioms for all commands of this extension of Dijkstra's language. The end result is a nice and simple theory of programming providing a smooth derivation method for recursive as well as iterative sequential programs.

Basic notational conventions

Throughout this text, a one-argument function application over a variable or a constant is denoted by an infix dot '.'. By true and false we denote, besides the boolean values, the obvious 0-ary constant predicates. The following logical connectives are listed in order of decreasing binding power (those listed as a pair have the same precedence): ¬ denotes negation, = denotes equality, ∨ and ∧ denote disjunction and conjunction respectively, ⇒ and ⇐ denote implication and consequence respectively, and ≡ and ≢ denote equivalence and discrepancy respectively. As usual, the symbols ∀ and ∃ denote the universal and the existential quantifier respectively; their scope is delineated by a pair of angle brackets.

Generally, a lower case greek letter such as σ denotes a finite sequence of program variable identifiers, for instance, σ = [x, y, z, w]. This way, σ′ denotes the same sequence as σ, except for the fact that each of its identifiers is decorated with an additional single quote to its right, that is, σ′ = [x′, y′, z′, w′]. We make similar conventions for other superscripts (for instance, σ″, σᵃ, σ̂).

If S is a formula, the expression S(x, y, z / x′, y′, z′) is obtained from S by respectively replacing every occurrence of the variables x, y, z with the values x′, y′, z′. Similarly, the notation [S] stands for S universally quantified over its free variables (bear in mind that in manipulations of [S] we may be employing the properties of universal quantifiers). If σ = [x, y, z] and τ = [w, p, q] are two finite sequences of values of the same length, the equality σ = τ abbreviates writing x = w ∧ y = p ∧ z = q. Similarly, if S denotes an expression possibly mentioning variables occurring in σ or σ′, then a (meta-)expression such as ⟨∀ σ :: ⟨∃ σ′ :: S⟩⟩ is actually an abbreviation of ⟨∀ x, y, z :: ⟨∃ x′, y′, z′ :: S⟩⟩. Likewise, the expression S(σ/σ′) is an abbreviation of S(x, y, z / x′, y′, z′).

The set of variables of a program determines its associated state space; its elements are called states, and they are characterized by all the possible values of such variables. With the previous conventions applied to the same state space, lower case greek letters may be interpreted as different states of the same computer behavior.

In the rest of this note, we present the predicative programming theory and define a recursive version of the guarded command language on it (Sect. 2), and interpret (Sect. 3) the concepts of Hoare triples, conditional and total correctness with respect to pre- and postconditions on that theory, besides defining and proving correctness conditions for Dijkstra's iterative command. Additionally, we present and justify two methods for proving total correctness of general recursive programs.
Finally (Sect. 4), we illustrate the practice of program derivation in DHL by showing a derivation scheme to evaluate a family of (non-tail linear) recursive functions in terms of a do-loop construct.

2. Hehner's predicative programming theory

We present and work with this very simple model of computation [Heh04b]. A specification is viewed as a boolean expression (logical formula) whose variables represent quantities of interest. A state of the execution of a program is a function defined on its variables, mapping each of them to a valid value according to its type. Given a computer behavior, the set of all states it can access is called its associated state space. When an initial state is provided as input, an execution of a program computes a final state as output.

To satisfy a specification, a computation must deliver a satisfactory final state. In other words, the given initial state and the computed final state must make the specification true. In this formalism, specifications as well as programs are identified with logical predicates describing the input-output relations defined by the initial and final values of their respective variables. Verifications of specifications and programs correspond to simple proofs about such predicates. Specifications, as well as programs, describe computer behavior, or just executions. A program is interpreted by a computer by means of executions that transform the values of its variables. We have an implementation when the specification describes (is true of) each of its computations.
To talk about computer behaviors described by a program, the following convention is adopted: if x denotes a program variable taking values in a given state space, x also (generically) denotes its initial value, and x′ denotes, in general, its final value if it exists. When we refer to more than one program execution on the same state space, we could use additional notations to (generically) denote the initial and final values of x corresponding to those executions, by adding some other type of 'decoration' to the symbols denoting x, such as x̂, x″, xᵢ, x_f. Usually, no confusion arises when distinguishing a variable from its initial value.

2.1. Specifications

We look at specifications of computer behaviors as predicates on the initial values x, w, ... and final values x′, w′, ... of some variables x, w, ... in a given state space. For example, suppose that x and w are two variables, each of integer type; then,

  x′ = x + 1 ∧ w′ = w

specifies the behavior of a computer increasing by 1 the value of x and leaving w unchanged.

If we provide the input σ to an implementation of a specification S, the computer provides an output σ′ to satisfy S. Therefore, for a specification to be implementable, there must be at least one satisfactory output for each input: a specification S is called implementable if and only if ⟨∀ σ :: ⟨∃ σ′ :: S⟩⟩.

In the same variables, here is a second specification: x′ > x. This specification is satisfied by a computation that increases x by any amount; it may leave w unchanged or may change it to any integer. The first specification is deterministic since there is just one answer for each initial state, and the second is nondeterministic since there are several possible outputs for some initial states.

At one end, we have the specification true; it is the easiest specification to implement because every implementation satisfies it. At the other end is the specification false, which is not satisfied by any implementation. But false is not the only unimplementable specification. Here is another:

  x ≥ 0 ∧ w′ = 0.

If the initial value of x is nonnegative, the specification can be satisfied by setting variable w to 0. But if the initial value of x is negative, there is no way to satisfy the specification. Perhaps the specifier has no intention of providing a negative input, but to the programmer, every input is a possibility. The specification should have been

  x ≥ 0 ⇒ w′ = 0.      (∗)

For a nonnegative initial x, this specification still requires variable w to be assigned 0. If there is no intention of providing a negative value for x, then what would happen if we did provide it has no importance whatsoever. That is precisely what this specification says: for negative x any result is satisfactory.

Special specification notations

Given a logical specification language L, if S, S0, S1, R are specifications in L, b0, b1 are boolean expressions, and x and e are, respectively, sequences (of equal length) of variables and expressions correspondingly taking values of the same type, we define the following special notations for classes of specifications, inspired by some of Dijkstra's [Dij75] guarded commands:

  Immanence             ok                          ≡  (σ′ = σ)
  Parallel Assignment   x := e                      ≡  ok(x/e)
  Composition           S ; R                       ≡  ⟨∃ σ″ :: S(σ′/σ″) ∧ R(σ/σ″)⟩
  Selection             if b0 → S0 [] b1 → S1 fi    ≡  b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)

In order to specify immanence, or 'no change', we use ok to denote the identity relation between the initial and final state: it specifies that the final values of all variables equal the corresponding initial values. It is satisfied by a machine that does nothing.
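To make these notations concrete, the combinators can be modelled directly as boolean-valued functions on pairs of states. The following Python sketch is ours, not part of the paper's formalism: the helper names (ok, assign, implementable), the two-variable state space and the finite domain used for brute-force checking are all assumptions made for illustration only.

from itertools import product

VARS = ('x', 'w')            # a tiny state space with two integer variables
DOM = range(-2, 3)           # finite fragment used for the brute-force checks

def states():
    """All states over VARS with values drawn from DOM."""
    return [dict(zip(VARS, vs)) for vs in product(DOM, repeat=len(VARS))]

# A specification is a boolean-valued function of (initial state s, final state t).
def ok(s, t):
    """Immanence: every final value equals the corresponding initial value."""
    return all(t[v] == s[v] for v in VARS)

def assign(var, expr):
    """var := expr, i.e. ok with `var` replaced by the value of `expr` in the initial state."""
    return lambda s, t: t[var] == expr(s) and all(t[v] == s[v] for v in VARS if v != var)

def implementable(spec):
    """Every initial state has at least one satisfactory final state (on the finite fragment)."""
    return all(any(spec(s, t) for t in states()) for s in states())

# The two specifications discussed above.
unguarded = lambda s, t: s['x'] >= 0 and t['w'] == 0          # x >= 0  and  w' = 0
guarded   = lambda s, t: not (s['x'] >= 0) or t['w'] == 0     # x >= 0  implies  w' = 0

print(implementable(unguarded))   # False: no final state will do when x < 0
print(implementable(guarded))     # True : choose w' = 0 when x >= 0, anything otherwise
s, t = {'x': 1, 'w': 5}, {'x': 2, 'w': 5}
print(assign('x', lambda u: u['x'] + 1)(s, t))   # True : this pair satisfies x := x + 1
print(ok(s, t))                                  # False: ok changes nothing

The brute-force check is of course only an illustration on a finite fragment of the state space; the paper's definitions quantify over all integer states.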
In the assignment notation, x and e are respectively (type compatible) sequences of variables and expressions of the same length. For example, x, y := x + w, x, which is equivalent to

  x′ = x + w ∧ y′ = x ∧ w′ = w,
specifies that the corresponding final values of x and y should be the sum of the initial values of x and w, and the initial value of x; the value of w should be unchanged.

The if...fi and semicolon notations combine specifications to make a new specification. They apply to all specifications, not just implementable specifications. They are just logical connectives, like ∧ and ∨. But due to the requirement that at least one of the conditions must hold, they have the nice property that if their operands are implementable, so is the result. The specification if b1 → S1 [] ··· [] bn → Sn fi can be implemented by a computer that behaves according to any one of the Si (i : 1 ≤ i ≤ n) whose condition bi is true. The specification S ; R corresponds to the relational composition of S with R. It can be implemented by a computer that first behaves according to S, then behaves according to R, with the final values from S serving as initial values for R. It is therefore a sequential composition.

Refinements

If R, S are specifications, we say that S is refined by R (or that R refines S), and denote it with the expression S ⊑ R, if every computer behavior satisfying R also satisfies S. Formally, we define it like this:

  S ⊑ R ≡ [S ⇐ R]

The brackets notation [P], where P is a predicate, denotes the application of Dijkstra's everywhere operator (universal closure) to P. The use of this operator in an expression like [Q ⇒ R(x/e)] avoids having to refer to the universal quantification on (the states) σ and σ′. The above refinement simply means finding another specification that is everywhere equal or stronger. In practice, in order to prove that S is refined by R, due to the (predicate logic) generalization theorem, it is possible to calculate inside the square brackets and simply prove S ⇐ R. Here are two examples:

  x′ > x  ⊑  (x′ = x + 1 ∧ w′ = w)
  (x′ = x + 1 ∧ y′ = y)  ⊑  x := x + 1

In each case, the left hand side is implied by the right hand side for all initial and final values of all variables.

2.2. An algorithmic scheme for Dijkstra's guarded commands

A program is a specification of computer behavior; it is therefore a predicate in the initial and final state. Not every specification is a program. A program is an "implemented" specification, one that the computer can execute. To be so, it must be written in a restricted notation.

With respect to algorithmic or programming languages we define the following concepts related with functions and expressions. We will call an expression primitive if it is a term obtained by the composition of functions and constants predefined in the given programming language. The concept of refinement allows programs to define new functions.

Definitions (User-defined functions and implemented expressions) In the context of a programming language, an ordinary (partial, possibly multi-valued) mathematical function f defined on a certain state space is called user-defined if there is a program P (defined on the same state space) refining an assignment of the form x := f.s, where s is a sequence holding the arguments (expressions) for an evaluation of f, and x a variable (or sequence of variables) holding the result (value or sequence of values) of such an evaluation. Recursive user-defined functions are allowed; that is, program P above can include terms corresponding to evaluations of f. An expression is called implemented if it is a composition of primitive and user-defined functions admitted by the given programming language.
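Since refinement is just universally quantified reverse implication, the two example refinements above can also be sampled mechanically on a finite fragment of the state space. The sketch below is our own illustration only (the finite domain, the two-variable state and the helper name refines are assumptions, and w plays the role of the second variable in both examples).

from itertools import product

VARS = ('x', 'w')
DOM = range(-3, 4)   # finite fragment; the check is only over this fragment

def states():
    return [dict(zip(VARS, vs)) for vs in product(DOM, repeat=len(VARS))]

def refines(S, R):
    """S is refined by R: every (initial, final) pair satisfying R also satisfies S, i.e. [S <= R]."""
    return all(S(s, t) for s in states() for t in states() if R(s, t))

x_grows      = lambda s, t: t['x'] > s['x']                              # x' > x
inc_and_keep = lambda s, t: t['x'] == s['x'] + 1 and t['w'] == s['w']    # x' = x+1 and w' = w
assign_x     = inc_and_keep   # in a two-variable space, x := x+1 is exactly x' = x+1 and w' = w

print(refines(x_grows, inc_and_keep))   # True : the first example refinement holds
print(refines(inc_and_keep, assign_x))  # True : the spec is refined by the assignment
print(refines(inc_and_keep, x_grows))   # False: the converse refinement fails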
We proceed to define an algorithmic language DHL expressive enough to include or define all of Dijkstra's guarded commands. We will not make explicit the primitive functions allowed by DHL.

Definition 1 A program in the algorithmic language DHL is defined according to the following expressions:
1. Immanence: ok is a program in DHL; it is also known as skip.
2. Assignment: If x is any variable (or sequence of variables) and e is an implemented expression (or a sequence of implemented expressions of equal length and equal corresponding types to x) in DHL, then x := e is a program in DHL.
3. Composition: If P, Q are programs in DHL then P ; Q is a program in DHL.
4. Selection: If b0, ..., bn−1 are boolean expressions implemented in DHL, and P0, ..., Pn−1 are programs in DHL, then if b0 → P0 [] ··· [] bn−1 → Pn−1 fi is also a program in DHL.
5. Specified module: An implementable specification that is refined by a program in DHL is itself a program in DHL.

As we already explained, in (2) and (4) it is not stated which expressions are implemented in DHL; that set may vary from one implementation to another. Part (2) includes the case in which the assignment has the form x := f.s, where f is a user-defined function admitted by DHL. To execute this assignment, we just execute the program P (in DHL) refining it. The refinement acts as a procedure declaration; x := f.s acts as a procedure name, and P as the procedure body; in this case, the use of the assignment x := f.s acts as a call. Recursion is allowed; we may use assignments involving evaluations of f in order to obtain program x := f.s. Part (5) states that any implementable specification S is a program in DHL if there is a program P in DHL such that S ⇐ P is a theorem. To execute S, we just execute the program P (in DHL) refining it. Again, the refinement acts as a procedure declaration; S acts as a procedure name (or module label), and P as the procedure body; in this case, use of the name of (or label denoting) S acts as a call; since mentions of the name or label for S may occur within P, recursion is also allowed.

Example 1 (x = 0 ⇒ x′ > 0) ⊑ x := x + 1.
Proof.
    (x = 0 ⇒ x′ > 0) ⊑ x := x + 1
≡      { definition }
    [ x := x + 1 ⇒ (x = 0 ⇒ x′ > 0) ]
As we already pointed out, it is enough to calculate:
    x := x + 1 ⇒ (x = 0 ⇒ x′ > 0)
≡      { propositional logic }
    x := x + 1 ∧ x = 0 ⇒ x′ > 0
⇐      { definition of "x := x + 1" }
    x′ = x + 1 ∧ x = 0 ⇒ x′ > 0
≡      { arithmetic }
    true

Here is an example involving a recursive program expressed in DHL. Let x be an integer variable; the specification x′ = 0 says that the final value of variable x is zero. It becomes a program by refining it, which can be done and written in many ways. This is one:

Example 2 (A Recursive Program)

  x′ = 0  ⊑  if x = 0 → ok [] x ≠ 0 → x := x − 1 ; x′ = 0 fi

In standard predicate notation, this refinement is equivalent to

  ⟨∀ x, x′ :: x′ = 0 ⇐ ((x = 0 ∧ x′ = x) ∨ (x ≠ 0 ∧ ⟨∃ x″ :: x″ = x − 1 ∧ x′ = 0⟩))⟩

which is easily proven. Observe that there is no guarantee of termination (in the case of a negative input) for this program.
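Operationally, the refinement in Example 2 behaves like the following Python sketch (ours, for illustration only): the recursive use of the specification's name becomes a recursive call, and, exactly as observed above, nothing guarantees termination when the initial value is negative.

def zero(x):
    """Mirrors Example 2: if x = 0 -> ok [] x != 0 -> x := x - 1 ; x' = 0 fi."""
    if x == 0:
        return x            # ok: the final value is already 0
    else:
        return zero(x - 1)  # x := x - 1 ; then re-establish x' = 0 by the recursive call

print(zero(7))   # 0
# zero(-1) would call zero(-2), zero(-3), ... for ever; in Python it eventually raises
# RecursionError, illustrating the missing termination guarantee for negative inputs.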
3. Hoare and Dijkstra's theory

If a specification (or program) prg refines the one given in (∗), the expression (x ≥ 0 ⇒ w′ = 0) ⊑ prg coincides with the following Hoare triple [Hoa83]

  x ≥ 0 {prg} w = 0

expressing the conditional correctness of the execution prescribed by prg with respect to precondition x ≥ 0 and postcondition w = 0.

3.1. Interpreting Hoare triples

In general,

  Q {S} R

where S is a specification (or program in DHL), Q is a precondition and R a description of the result of its execution, may be interpreted as: "If assertion Q is true before initiation of a program S (at any state whatsoever), then the assertion R will be true on its completion (if it does happen)". Observe that Q {S} R must be an absolute (or constant) predicate, that is, it is true in every state or otherwise false everywhere.

Example 3 The triple x = 0 {x := x + 1} x > 0 is therefore interpreted as: "x := x + 1 must fulfill specification x = 0 ⇒ x′ > 0". (See Example 1.)

Generally, Q {S} R translates into (Q ⇒ R′) ⊑ S, where R′ is an expression obtained from R by decorating each of its variables with an apostrophe.

Notice that the implementability condition only ensures the existence of a final value for each initial one, but nothing is said about reaching this final value after a finite number of steps. Similarly, the conditions defining Hoare notation are only demanded if the program execution halts.

Proposition 1 If E is an expression defined in every state satisfying Q, then Q {x := E} R is equivalent to [Q ⇒ R(x/E)].
Proof. If α stands for the sequence of variables different from x of the associated state space, the following calculation will do:
    Q {x := E} R
≡      { interpretation of Hoare triple }
    (Q ⇒ R′) ⊑ x := E
≡      { definition of refinement }
    [ x := E ⇒ (Q ⇒ R′) ]
≡      { definition of x := E }
    [ ok(x/E) ⇒ (Q ⇒ R′) ]
≡      { predicate logic (one point rule) }
    [ (Q ⇒ R′)(x′, α′ / E, α) ]
≡      { neither x′ nor α′ are free in Q }
    [ Q ⇒ R′(x′, α′ / E, α) ]
≡      { substitution }
    [ Q ⇒ R(x/E) ]

Proposition 2 If E and F are expressions defined in every state, then the expressions Q {x := E ; y := F} R and [Q ⇒ R(y/F)(x/E)] are equivalent.
Proof. By Proposition 1, and assuming that x := E ; y := F and x, y := E, F(x/E) are equivalent specifications, it is enough to show the following:
    Q {x := E ; y := F} R
≡      { assumption }
    Q {x, y := E, F(x/E)} R
≡      { Proposition 1 }
    [ Q ⇒ R(x, y / E, F(x/E)) ]
≡      { textual substitution property }
    [ Q ⇒ R(y/F)(x/E) ]
We now prove our assumption. If γ denotes the sequence of variables different from x and y of the associated state space, we have
    x := E ; y := F
≡      { definition of assignment }
    ok(x/E) ; ok(y/F)
≡      { definition of ok and substitution }
    (x′ = E ∧ y′ = y ∧ γ′ = γ) ; (x′ = x ∧ y′ = F ∧ γ′ = γ)
≡      { definition of composition }
    ⟨∃ x0, y0, γ0 :: x0 = E ∧ y0 = y ∧ γ0 = γ ∧ x′ = x0 ∧ y′ = F(x, y, γ / x0, y0, γ0) ∧ γ′ = γ0⟩
≡      { Leibniz equality theorems }
    ⟨∃ x0, y0, γ0 :: x0 = E ∧ y0 = y ∧ γ0 = γ ∧ x′ = E ∧ y′ = F(x/E) ∧ γ′ = γ⟩
≡      { predicate calculus }
    ⟨∃ x0, y0, γ0 :: x0 = E ∧ y0 = y ∧ γ0 = γ⟩ ∧ x′ = E ∧ y′ = F(x/E) ∧ γ′ = γ
≡      { predicate logic (one point rule) }
    x′ = E ∧ y′ = F(x/E) ∧ γ′ = γ
≡      { definition of ok }
    ok(x, y / E, F(x/E))
≡      { definition of assignment }
    x, y := E, F(x/E)

Example 4 We show true {if x ≤ y → skip [] x > y → x, y := y, x fi} x ≤ y.
Proof. This statement denotes the refinement

  x′ ≤ y′  ⊑  if x ≤ y → skip [] x > y → x, y := y, x fi

hence, the following calculation should suffice:
    if x ≤ y → skip [] x > y → x, y := y, x fi ⇒ x′ ≤ y′
≡      { definition of if...fi ; trichotomy of ≤ }
    (x ≤ y ∧ ok) ∨ (x > y ∧ x, y := y, x) ⇒ x′ ≤ y′
≡      { propositional calculus: case separation }
    (x ≤ y ∧ ok ⇒ x′ ≤ y′) ∧ (x > y ∧ x, y := y, x ⇒ x′ ≤ y′)
⇐      { definitions of 'ok' and ':=' }
    (x ≤ y ∧ ok ⇒ x ≤ y) ∧ (x > y ∧ x′ = y ∧ y′ = x ⇒ x′ ≤ y′)
≡      { arithmetic }
    true
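Example 4 can also be checked mechanically by interpreting the Hoare triple as the refinement (true ⇒ x′ ≤ y′) ⊑ if...fi and enumerating a finite fragment of the state space. The sketch below is our own illustration; the helper names and the finite domain are assumptions.

from itertools import product

DOM = range(-3, 4)   # finite fragment of the integer state space for variables x and y

def if_fi(s, t):
    """if x <= y -> skip [] x > y -> x, y := y, x fi, as a predicate on (initial, final) states."""
    x, y = s
    x1, y1 = t
    return ((x <= y and (x1, y1) == (x, y)) or   # first guard: skip
            (x > y and (x1, y1) == (y, x)))      # second guard: swap

def triple_holds(pre, prog, post):
    """Conditional correctness Q {S} R: whenever Q holds initially and S relates s to t, R holds in t."""
    return all(post(t)
               for s in product(DOM, repeat=2) if pre(s)
               for t in product(DOM, repeat=2) if prog(s, t))

print(triple_holds(lambda s: True, if_fi, lambda t: t[0] <= t[1]))   # True, as proved in Example 4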
Proposition 1 gives a necessary and sufficient condition for the conditional correctness of an assignment. Now, for the case of the conditional command, we give a sufficient condition.
    [Q ⇒ b0 ∨ b1] ∧ (Q ∧ b0){S0}R ∧ (Q ∧ b1){S1}R
≡      { definition of refinement and Hoare triples }
    [Q ⇒ b0 ∨ b1] ∧ [(Q ∧ b0 ⇒ R′) ⇐ S0] ∧ [(Q ∧ b1 ⇒ R′) ⇐ S1]
≡      { propositional logic }
    [Q ⇒ b0 ∨ b1] ∧ [Q ∧ b0 ∧ S0 ⇒ R′] ∧ [Q ∧ b1 ∧ S1 ⇒ R′]
≡      { predicate calculus }
    [(Q ⇒ b0 ∨ b1) ∧ (Q ∧ ((b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ R′)]
⇒      { strengthening of antecedent }
    [(Q ⇒ b0 ∨ b1) ∧ (Q ∧ (b0 ∨ b1) ∧ ((b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ R′)]
≡      { predicate calculus }
    [(Q ⇒ b0 ∨ b1) ∧ (Q ∧ (b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ R′)]
⇒      { propositional logic }
    [(b0 ∨ b1 ⇒ (b0 ∧ S0) ∨ (b1 ∧ S1)) ⇒ (Q ⇒ R′)]
≡      { definition of if...fi }
    [(Q ⇒ R′) ⇐ if b0 → S0 [] b1 → S1 fi]
≡      { refinement notation }
    (Q ⇒ R′) ⊑ if b0 → S0 [] b1 → S1 fi
≡      { interpreting as a Hoare triple }
    Q {if b0 → S0 [] b1 → S1 fi} R
We have proved the following proposition:

Proposition 3 If b0, b1 are conditions (boolean expressions) defined in every state satisfying Q, then Q {if b0 → S0 [] b1 → S1 fi} R whenever [Q ⇒ b0 ∨ b1] ∧ (Q ∧ b0){S0}R ∧ (Q ∧ b1){S1}R.

Termination

So far, we have talked only about the result of a computation, not about how long it takes. Actually, among the basic expressions defining a program in DHL, there is no problem in practice with the termination of its executions, except (see Example 2) for the ones making recursive calls, since it is not possible to rule out executions producing infinite chains of such calls.

Consider a partial recursively defined function g on a given state space. Let us say

  g.σ  =  p.σ          if a.σ
       =  h(g, Y.σ)    if b.σ                (0)

where function p and conditions a, b are implementable in DHL and σ symbolizes the input values (initial state) of g. On initial state σ, conditions a and b correspond respectively to all non-recursive and all recursive cases of its domain; p.σ (depending on σ) is the final value given by g in the non-recursive cases, and h(g, Y.σ) is an expression implemented in DHL involving at least one recursive invocation of g on a set of expressions Y.σ, depending on the initial state σ. The set Y.σ contains all values on which g recurs, including the values of nested-inside recursive calls of g. This set might be just a singleton; of course, it is empty when condition b above is identically false as a predicate (b ≡ false).

Definition 2 We call a function g implemented in DHL well defined on a predicate C (on the associated state space) when one of the following conditions holds:
1. g is a primitive function of DHL defined on C,
2. g is a function as described above in (0), with functions p, a, b and h well defined on C, and either condition b is identically false, or there exists a well founded relation¹ ≺ on the domain of g such that²
   (a) C ⇒ dom.g
   (b) dom.g ≡ a ∨ b
   (c) a ⇒ dom.p
   (d) a ∧ b ≡ false
   (e) [ b.σ ∧ ζ ∈ Y.σ ⇒ ζ ≺ σ ].
That is, g is defined on every state fulfilling C; a and b are disjoint conditions that jointly cover all states belonging to the domain of g; furthermore, every evaluation of g on a state σ (for which condition b holds) recurs on states (on the domain of g) that are ≺-smaller than σ. We say that an expression is well defined if it is a composition of well defined functions.

¹ A pair (A, ≺) is called a well founded set if ≺ is a well founded relation on the set A. A binary relation ≺ on a set A is well founded if every nonempty subset of A has a ≺-minimal element.
² We will use the unary predicate dom.f to represent, in predicate form, state membership to the domain of a function f.

Given a program S in DHL and a predicate C, we say that S halts on (initial condition) C if every execution of S initiating in a state satisfying C must terminate. The formal definition of this concept follows.

Definition 3 A program S in DHL halts on condition C if one of the following cases holds:
 (i) S ≡ skip
 (ii) S ≡ (x := e), where x is any variable (or sequence of variables) and e is an implemented and well defined expression on C (or a sequence of such expressions, type compatible and of equal length to x).
 (iii) S ≡ if b0 → P0 [] ... [] bn−1 → Pn−1 fi, where b0, ..., bn−1 are implemented and well defined boolean expressions on C, condition C implies that at least one of the bi's holds (0 ≤ i < n), and programs P0, ..., Pn−1 terminate on C.
 (iv) S ≡ P ; Q, and programs P and Q terminate, respectively, on C and on C ; P,
 (v) S is both logically equivalent to x := g.σ (where g is a function given by scheme (0) and well defined on C), and also refined by the program if a.σ → P [] b.σ → H(S) fi, where P is a program refining x := p.σ, and H(S) a program refining x := h(g, Y.σ).

In order to ensure the termination of every possible execution of a recursive program prog in DHL (which basically would be given in terms of an assignment of the form x := g.σ such as in Definition 3(ii)), we will associate a function (called 'size' for the time being) with it. Function 'size' would be defined on the state space associated to prog, taking values on a well founded set, and its value in any state of an execution of prog would decrease, in terms of the associated well founded relation, every time a recursive call is made.

Total correctness

We say that a program (or specification) S is totally correct with respect to a precondition Q and a postcondition R, written

  {Q} S {R},

if, besides being conditionally correct, every execution initiating in a state satisfying Q halts.

Remark Definition 3 guarantees that Propositions 1, 2 and 3 continue being valid if we replace conditional correctness by total correctness in their statements about Hoare triples. For instance, the statement of Proposition 3 becomes {Q} if b0 → S0 [] b1 → S1 fi {R} whenever [Q ⇒ b0 ∨ b1] ∧ {Q ∧ b0}S0{R} ∧ {Q ∧ b1}S1{R}.

3.2. Verifying general recursive programs

As pointed out before, a specification S with S ≡ (x := g.σ), where g is a well defined function given by scheme (0), becomes a program in DHL, and g a user-defined function, through a refinement such as

  S ⊑ if a.σ → P [] b.σ → H(S) fi                (1)

where P is a program refining x := p.σ, and H(S) a program refining x := h(g, Y.σ).

The next proposition tacitly makes use of an inductive theorem on the correctness of general recursive programs that we developed in [Boh07].

Proposition 4 Consider a program S ≡ (x := g.σ), with g a function well defined by a user of DHL through scheme (0) and refinement (1). Then, in order to verify

  {Q} x := g.σ {R},

since S's termination depends essentially on g being well defined, it is enough
(a) to prove [Q ⇒ R(x/g.σ)] by Proposition 1, which can be done by structural induction on the definition of g; or,
(b) applying Proposition 3 and (1), to show by structural induction that
   (i) [Q ⇒ a ∨ b]
   (ii) {Q ∧ a} P {R}    (correctness of basic cases)
   (iii) {Q ∧ b} H(S) {R}    (correctness of recursive cases)
using as an induction hypothesis the correctness of all recursive calls, that is, {Q(σ/ζ)} x := g.ζ {R(σ/ζ)} for all ζ ∈ Y.σ.

If the (recursive) mathematical definition of function g is directly available, the first option is the easier and more practical one; otherwise, if one only has an indirect definition of g given through an implementation like the one in (1), the second choice is the only viable option. The next example informally introduces syntax for declaring user-defined functions in DHL.
Example 5 Consider the recursive program pf given by the following code:

|[ fun pf (x : int) ret r : int
     if x ≥ 0 → r := x
     [] x < 0 → x := x + 1 ; r := pf (x)
     fi
     {R : (x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}
     ret r
]|

This program implements the recursive function f : Z → Z defined as

  f.x  =  x           if x ≥ 0
       =  f(x + 1)    if x < 0

f is well defined since its defining cases are disjoint and cover all possibilities for an integer x; besides this, x + 1 ≺ x for x < 0, if the relation x ≺ y ≡ (y < x < 0) ∨ (x ≥ 0 ∧ y < 0) is defined for integers x and y. Relation ≺ is well founded. The function 'size' mentioned previously to ensure termination is, in this case, the identity function on Z. It is easy to check that specification r := f.x is refined by program pf.

We verify {true} r := f.x {R}, i.e.

  {true} r := f.x {(x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}.

To do it, we show by induction on the definition of f,

  ((x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0))(r/f.x)

that is,

  (x ≥ 0 ∧ f.x = x) ∨ (x < 0 ∧ f.x = 0).

Let us see:
• Case x ≥ 0:
    x ≥ 0 ∧ f.x = x
  ≡      { hypothesis: x ≥ 0 ; definition of f }
    x = x
• Case x < 0:
    x < 0 ∧ f.x = 0
  ⇐      { definition of f }
    (x = −1 ∧ f(x + 1) = 0) ∨ (x < −1 ∧ f(x + 1) = 0)
  ⇐      { definition of f }
    (x + 1 ≥ 0 ∧ f(x + 1) = x + 1) ∨ (x + 1 < 0 ∧ f(x + 1) = 0)
  ≡      { substitution }
    ((x ≥ 0 ∧ f.x = x) ∨ (x < 0 ∧ f.x = 0))(x/x + 1)
  ≡      { induction hypothesis: x + 1 ≺ x }
    true

Option (b) will allow us to use program pf, implementing function f, to prove {true} r := f.x {R}. Thus, it suffices to show
 (i) [x ≥ 0 ∨ x < 0]
 (ii) {x ≥ 0} r := x {(x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}
 (iii) {x < 0} x := x + 1 ; r := pf(x) {(x ≥ 0 ∧ r = x) ∨ (x < 0 ∧ r = 0)}
As induction hypothesis, we use: {true} r := pf(x + 1) {(x + 1 ≥ 0 ∧ r = x + 1) ∨ (x + 1 < 0 ∧ r = 0)}.
Proof. Firstly, (i) is a simple theorem of arithmetic. To prove (ii), it will do to show x ≥ 0 ⇒ (x ≥ 0 ∧ x = x), a trivial result. The proof of (iii) reduces to proving x < 0 ⇒ pf(x + 1) = 0 under the hypothesis

  (x + 1 ≥ 0 ∧ pf(x + 1) = x + 1) ∨ (x + 1 < 0 ∧ pf(x + 1) = 0),

which is evident if x < 0 is separated into two cases: x = −1 and x < −1.
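A directly executable rendering of Example 5 (our own Python transcription, not DHL syntax) makes the proof obligations concrete: the function below mirrors pf, and the assertion samples the postcondition R over a range of inputs. The recursion terminates for negative x because each call replaces x by x + 1, which is ≺-smaller under the well-founded relation of the example.

def pf(x):
    """Transcription of pf: if x >= 0 -> r := x [] x < 0 -> x := x + 1 ; r := pf(x) fi."""
    if x >= 0:
        return x            # basic case: r := x
    else:
        return pf(x + 1)    # recursive case: x := x + 1 ; r := pf(x)

def R(x, r):
    """Postcondition R: (x >= 0 and r = x) or (x < 0 and r = 0)."""
    return (x >= 0 and r == x) or (x < 0 and r == 0)

# {true} r := f.x {R}, checked pointwise on a sample of the domain.
assert all(R(x, pf(x)) for x in range(-50, 51))
print("R holds on the sampled inputs")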
3.3. Iterations

In the context of this theory, given a boolean expression b and a program Q, the iterative program

  do b → Q od

is seen as the implementation (cic, for short) of the evaluation of a recursively defined partial function f. In order to explain this, suppose for a moment that Q is a deterministic program; hence, to every initial state σ of a terminating execution of Q, we can associate its corresponding final value σ*. In this way, it is possible to regard program Q as a transformation function σ → σ*. Therefore, program cic is realized as follows:

  cic ⊑ if ¬b → ok [] b → Q ; cic fi,                (2)

corresponding to an implementation of specification σ := f.σ, where f is a partial function defined on its domain by

  f.σ  =  σ        if ¬b.σ
       =  f.σ*     if b.σ                (3)

That is,

  σ := f.σ  ⊑  cic.                (4)

As we have pointed out before, in order for f to be well defined, the existence of a well founded relation ≺ on its domain is needed, in such a way that for every state σ satisfying property b, necessarily σ* ≺ σ. With this last condition in mind, it is possible to suppress the assumption about program Q being deterministic if we think of a fixed execution of cic, since it will be impossible for it to access the same state more than once. In other words, each execution of cic determines its corresponding (well defined) function f.

The fact that cic implements specification σ := f.σ causes {P} cic {R} to be translated into

  (P ⇒ R(σ/f.σ)) ⊑ cic                (5)

that is to say, the corresponding final state coincides with f.σ. Here, σ represents the complete collection of variables shaping an initial state of any execution of cic.

The following proposition gives sufficient conditions to ensure total correctness of an iteration.

Proposition 5 {P} do b → Q od {R} whenever
 (i) [P ∧ ¬b ⇒ R]
 (ii) {P ∧ b} Q {P}, and
 (iii) there exists a well founded relation ≺ on the state space such that {P ∧ b ∧ σ = K} Q {σ ≺ K}, where K represents a constant state value.

Proof. Given a fixed execution of cic ≡ do b → Q od, we have seen that there is a partial function f recursively defined according to (3), in such a way that σ := f.σ ⊑ cic; besides this, cic answers to the description given in (2), and Q corresponds to the assignment σ := σ*. As a consequence of what we have just said, condition (iii) translates into the condition {P ∧ b ∧ σ = K} σ := σ* {σ ≺ K}, but then,
    {P ∧ b ∧ σ = K} σ := σ* {σ ≺ K}
≡      { Proposition 1 }
    P ∧ b ∧ σ = K ⇒ σ* ≺ K
≡      { Leibniz's principle }
    P ∧ b ∧ σ = K ⇒ σ* ≺ σ,
and this ensures that f is a well defined function whose domain is the set of states fulfilling P; due to (4) and (3), we have that every execution of cic starting in a state satisfying P halts.
It only remains to prove the conditional correctness of cic. To do this we define R̂ ≡ R(σ/f.σ); due to the argument given in (5), the following calculation suffices:
    (P ⇒ R̂) ⇐ (if ¬b → ok [] b → Q ; cic fi)
≡      { definition of selection ; propositional calculus }
    (P ⇒ R̂) ⇐ (¬b ∧ ok) ∨ (b ∧ (Q ; cic))
≡      { propositional calculus }
    (P ∧ ¬b ∧ ok ⇒ R̂) ∧ (P ∧ b ∧ (Q ; cic) ⇒ R̂)
≡      { definition of ok, (σ = f.σ), Leibniz's principle ; definition of composition }
    (P ∧ ¬b ⇒ R) ∧ (P ∧ b ∧ Q ∧ cic(σ/σ*) ⇒ R̂)
⇐      { (i) and (ii) ; (ii) equivalent to (P ∧ b ⇒ P(σ/σ*)) ⊑ Q }
    P ∧ b ∧ (P ∧ b ⇒ P(σ/σ*)) ∧ cic(σ/σ*) ⇒ R̂
⇐      { induction hypothesis: (P ⇒ R̂)(σ/σ*) ⊑ cic(σ/σ*) }
    P ∧ b ∧ P(σ/σ*) ∧ (P(σ/σ*) ⇒ R̂(σ/σ*)) ⇒ R̂
⇐      { propositional calculus }
    b ∧ R̂(σ/σ*) ⇒ R̂
≡      { R̂ ≡ R(σ/f.σ) }
    b ∧ R(σ/f.σ)(σ/σ*) ⇒ R(σ/f.σ)
≡      { substitution }
    b ∧ R(σ/f.σ*) ⇒ R(σ/f.σ)
≡      { definition of f: b ⇒ f.σ = f.σ* }
    true

As we have just seen, in this theory, iteration is defined as a particular type of recursion, tail recursion³ to be precise.

³ Tail recursion is a recursive scheme having just one direct recursive call (such as the one in (3)).
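Proposition 5 is the familiar invariant-plus-bound argument for do-loops, and its three conditions can be monitored at run time. The sketch below is our own illustration (the summation loop, the invariant and the variant n − i are assumptions chosen for the example, not taken from the paper): it computes 0 + 1 + ... + (n − 1) with invariant P : r = i(i − 1)/2 ∧ i ≤ n, and the natural number n − i decreases on every iteration.

def summation(n):
    """do i != n -> r, i := r + i, i + 1 od, started from r, i := 0, 0, for n >= 0."""
    assert n >= 0
    r, i = 0, 0
    while i != n:                               # guard b
        assert r == i * (i - 1) // 2 and i <= n  # invariant P holds before the body, cf. (ii)
        bound = n - i                            # value of the variant in the current state (K in (iii))
        r, i = r + i, i + 1                      # loop body Q
        assert n - i < bound                     # (iii): the variant strictly decreases (and i <= n keeps it >= 0)
    assert r == n * (n - 1) // 2                 # (i): P and not b establish the postcondition
    return r

print(summation(10))   # 45 = 0 + 1 + ... + 9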
To conclude this section, we show one last example illustrating that our verification method encompasses not just linear recursion (at most one recursive call per case), but any general recursive scheme.

Example 6 (McCarthy's 91 function) This function is defined for integer x by

  g.x  =  x − 10          if x > 100
       =  g(g(x + 11))    if x ≤ 100                (6)

Except for condition (2.e) in Definition 2, it is easy to check that g fulfills all conditions for being well defined. In order to prove this remaining condition, we define the following partial order ≺ on Z:

  x ≺ y ≡ y ≤ 100 ∧ y < x,   for all integers x and y.

Observe then that ≺ is a well founded relation such that if x > 100 then x is ≺-minimal, x + 11 ≺ x if x ≤ 100, and g(x + 11) ≺ x if x ≤ 100. This last fact may be proven by ordinary mathematical induction via its equivalence with the proposition: g(x + 11) > x if x ≤ 100.

Now, function g and specification r := g.x become, respectively, a user-defined function and a program in DHL, through the refinement

  r := g.x ⊑ if x > 100 → r := x − 10 [] x ≤ 100 → r := g(g(x + 11)) fi;

besides this, the fact that g is well defined guarantees the termination of program r := g.x for any initial state in which x takes an integer value. We want to verify the following correctness statement:

  {true} r := g.x {r = f.x}

where f.x ≡ if x > 101 → x − 10 [] x ≤ 101 → 91 fi.

Observe that
    {true} r := g.x {r = f.x}
⇐      { option (a) in Proposition 4 ; g well defined }
    [g.x = f.x];
thus, it is sufficient to prove the equality of f and g as functions defined on Z. Since g.101 = 101 − 10 = 91 = f.101 by the definitions of f and g, it only remains to show that g.x = f.x for x ≤ 100.
    g.x
=      { definition of g ; x ≤ 100 }
    g(g(x + 11))
=      { x + 11 ≺ x ; inductive hypothesis }
    g(f(x + 11))
=      { definition of f ; x ≤ 100 }
    g(x + 1)   if 100 < x + 11 ≤ 111
    g(91)      if x < 90
=      { x < 90 ⇒ 91 ≺ x ; x ≤ 100 ⇒ x + 1 ≺ x ; inductive hypothesis }
    f(x + 1)   if 90 < x + 1 ≤ 101
    f(91)      if x < 90
=      { definition of f }
    91
=      { definition of f ; x ≤ 100 }
    f.x
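The equality g.x = f.x established in Example 6 can be sampled directly. The following sketch is ours: it implements the nested recursion (6) together with the closed form f and checks that they agree on a range of inputs; the recursion terminates because, as argued above, both x + 11 and g(x + 11) are ≺-smaller than x whenever x ≤ 100.

def g(x):
    """McCarthy's 91 function, definition (6)."""
    if x > 100:
        return x - 10
    return g(g(x + 11))        # nested (non-linear) recursion

def f(x):
    """The closed form used as postcondition: x - 10 if x > 101, else 91."""
    return x - 10 if x > 101 else 91

assert all(g(x) == f(x) for x in range(-300, 301))
print(g(99), f(99))   # 91 91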
4. Program derivation in DHL

For the practical aspects of program derivation in DHL, we can take advantage of both the methods of Dijkstra's school of derivation [DF84, Kal90, Bac03] and Hehner's specified blocks approach [Heh05] to obtain a smooth and unified derivation style based on the calculational approach to formal reasoning [Dij94, DS90, GS93, Heh04a, Heh07]. The following example shows a derivation scheme to code a family of non-tail linear recursive functions in terms of tail recursive (or do-loop) programs.

Example 7 (A Derivation Scheme) Suppose we want to design a program to calculate, given data N, the value F.N, where F is a well defined function given by the following recursive definition:

  F.x  =  m.x              if b.x
       =  h.x ⊕ F(g.x)     if ¬b.x                (7)

where the functions m, h and g, the condition b, and a binary, associative and commutative operation ⊕ with identity element e are implementable in DHL, and there is also a well founded relation ≺ defined on the domain of F such that
1. x is ≺-minimal ⇒ b.x, and
2. ¬b.x ⇒ g.x ≺ x.
We may specify this problem (using the label Spc) as follows:

  Spc : r′ = F.N      or, equivalently,      {true} Spc {r = F.N}

Inspired by the 'tail invariant' method [Kal90] and the shape of (7), we propose to refine Spc in terms of sequencing two new specifications, Inic and Conc:

  Spc ⊑ Inic ; Conc

where

  Conc : r′ = r0 ⊕ F.x0      and      Inic : r0 = e ∧ x0 = N.

We decorate with subindex 0 those values of variables in the final state of Inic and the initial state of Conc.

Clearly, Inic ∧ Conc ⇒ Spc. We now proceed to analyze the two cases for the value of condition b in (7). For the case b.x0 we have
    Conc ∧ b.x0
≡      { definition }
    r′ = r0 ⊕ F.x0 ∧ b.x0
⇒      { definition of F }
    r′ = r0 ⊕ m.x0
≡      { let r1 = r0 ⊕ m.x0 }
    r′ = r1.
Suppose now ¬b.x0:
    Conc
≡      { definition }
    r′ = r0 ⊕ F.x0
≡      { definition of F and assumption }
    r′ = (r0 ⊕ h.x0) ⊕ F(g.x0)
≡      { let r1 = r0 ⊕ h.x0 and x1 = g.x0 }
    r′ = r1 ⊕ F.x1
≡      { notation }
    Conc1
Conc1 is obtained from Conc by redecorating its free variables having subindex 0 with subindex 1. We can easily refine Inic as follows:

  Inic ⊑ x, r := N, e

The previous calculations allow us to refine Conc like this:

  Conc ⊑ if b.x → r := r ⊕ m.x [] ¬b.x → x, r := g.x, r ⊕ h.x ; Conc fi

which in terms of a do-od cycle is equivalent to

  Conc ⊑ do ¬b.x → x, r := g.x, r ⊕ h.x od ; r := r ⊕ m.x
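To see the scheme of Example 7 at work, one possible instantiation (our own choice, not part of the example) takes ⊕ as integer addition with identity e = 0, b.x as x = 0, m.x = 0, h.x = x and g.x = x − 1, so that F.N is the sum N + (N − 1) + ... + 1. The sketch below runs both the recursive definition (7) and the derived do-od program and compares them.

# Instantiation: ⊕ = +, e = 0, b.x = (x == 0), m.x = 0, h.x = x, g.x = x - 1,
# so F.N = N + (N-1) + ... + 1 (a non-tail linear recursion).

def F(x):
    """Recursive definition (7) for this instantiation."""
    if x == 0:                 # b.x
        return 0               # m.x
    return x + F(x - 1)        # h.x ⊕ F(g.x)

def derived(N):
    """The derived program: Inic ; do ¬b.x -> x, r := g.x, r ⊕ h.x od ; r := r ⊕ m.x."""
    x, r = N, 0                # Inic:  x, r := N, e
    while x != 0:              # do ¬b.x ->
        x, r = x - 1, r + x    #     x, r := g.x, r ⊕ h.x
    return r + 0               # od ; r := r ⊕ m.x

assert all(F(n) == derived(n) for n in range(200))
print(derived(100))   # 5050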
5. Previous and related work

The idea of describing programs and algorithms with boolean expressions (initially called predicate assertions), and of proving properties about programs, begins with the seminal work of R. W. Floyd and C. A. R. Hoare. In his paper [Flo67], Floyd attached logical assertions to the arcs of flowcharts, the common early way of expressing algorithms through a graph-like notation, in order to reason about program correctness with respect to specifications. Hoare was the first to explore the logical foundations of computer programming [Hoa69] by proposing sets of axioms and rules of inference especially devised to prove properties of computer programs.

Following Floyd's contribution, Z. Manna [Man74, Man80] applied this new concept of program correctness and logical relations between program flowcharts and their specifications mainly to the problem of program verification, which is concerned with proving (or disproving) the correctness of intended algorithms with respect to a formal specification.

E. W. Dijkstra [Dij68, Dij75] subsequently extended and refined Hoare's proposal through the concept of 'predicate transformers'. Based on this theory, he developed a formal calculus [Dij76] for the derivation of programs written in a very simple language that gave linear form (in contrast with flowcharts) to non-deterministic sequential programs by statements encapsulating 'conditional branching', which he expressed through his so-called guarded commands. This calculus, together with his 'calculational style' of formally proving theorems [DS90, GS93], rendered an elegant and practical program derivation methodology that attracted many adherents and practitioners, among them W. Feijen, A. J. M. van Gasteren [vG90, FvG96], David Gries [Gri81], R. Backhouse [Bac03], A. Bijlsma [vGB98], J. L. A. van de Snepscheut [vdS93], A. J. Martin, M. Rem [MR84], A. Kaldewaij [Kal90], M. Fokkinga [BF01], and D. Michaelis [BM06].

Soon after Dijkstra's publication of [Dij76], Eric Hehner proposed to "stop thinking of programs as mere text, and start thinking of them as mathematical expressions in their own right. ... like any mathematical expression, a program can stand for its meaning all by itself" [Heh76, Heh06]. Although Niklaus Wirth [Wir71] was the first to propose refining programs from specifications as an orderly way of designing them, a formal approach to stepwise refinement for program construction from specifications appears for the first time in the form of two 'schools of program refinement' that we could respectively name the predicative and the transformative approach. Formal program refinement views programming as a process of transforming a specification into a program according to mathematical laws ensuring that correctness is preserved at each step.

The transformative approach to formal program refinement begins with Back's [Bac78, Bac80] application of Hehner's proposal to develop his "Refinement Calculus", which enlarges Dijkstra's notation of guarded commands, generalising his concept of 'predicate transformers' to admit specifications. C. Morgan [Mor90a] and J. M. Morris [Mor90b] independently made further contributions to the development of this approach to program refinement.

What we call the predicative approach to formal program refinement corresponds to the use by Hehner and Hoare of the phrase "programs are predicates" as a motto [Hoa84, Hoa92, Heh89]. Hehner's ideas about thinking of programs as logical formulas inspired two parallel but very similar theories of programming: predicative programming (Hehner's version of the 'programs are predicates' theory), and unifying theories of programming, due to Hoare and He Jifeng.

Predicative programming [Heh04b] is a very simple and practical theory based on first order logic that helps with the practical aspects of program specification, design and correctness. Its basic assumptions avoid many complications that come from other approaches based on more complex logics.

Hoare's version of the "programs are predicates" theory of programming [Hoa84, HJ87] was developed in collaboration with He Jifeng and based on Hehner's approach [Heh84], in the context of a wide-ranging scientific theory: Unifying theories of programming (UTP for short) [HJ98], which shows how denotational semantics, operational semantics and algebraic semantics can be combined in a unified framework for the formal specification, design and implementation of programs and computer systems. This framework, aimed at exposing the mathematical laws underlying a general theory of programming, is based on the calculus of Tarski's theory of relations enriched with his fixed point theory and applied to Dijkstra's non-deterministic programming language.

Using the fixed point theory of the UTP framework, we proved [Boh07] that simple regularity conditions on recursively defined functions allow one to prove correctness of general recursive programs by induction on the values of their domain (applying induction hypotheses exactly on the values where the functions recur). However, this kind of structural induction (on syntax) is a common metalogical technique in first order logic that fits well with predicative programming theory.
More generally, predicative programming is an ample framework for expressing programming theories that favors the use of specifications rather than assertions (logical formulas that are intended to be true whenever execution passes the point in the program where they are located). In fact, in an analogous way to how we expressed Dijkstra's programming formalism, it would be possible to embed Manna's correctness theory for program flowcharts [Man80] in the predicative programming theory [Heh05] in terms of 'guarded assignments' and unstructured goto's (representing the arcs of the flowchart) joining cut points (the nodes of the flowchart) and, similarly, prove its corresponding correctness theorems.

6. Conclusions

We have shown that it is possible to develop a simple and practical theory of sequential imperative program correctness unifying both iterative and recursive commands. Our assertion about the simplicity of this theory is justified by the fact that it is expressed in first order logic, in contrast with the usual approaches requiring higher order logic. Actually, structural induction, usual in the study of first order logic, is the only metalogical device we use. The proof power of Dijkstra's calculational logic allows us to call this approach practical. It remains for future work to compare the proposed methodology with existing software-based program verification methods.
Acknowledgements

The author thanks Eric Hehner, Lex Bijlsma, and the anonymous referee for their suggestions to improve the presentation of this paper. He also thanks Jorge Villalobos for his advice about writing and stylistic matters concerning this note.

References

[Bac78] Back RJR (1978) On the correctness of refinement steps in program development. PhD thesis, University of Helsinki. Also available as report A-1978-5
[Bac80] Back RJR (1980) Correctness preserving program refinements: proof theory and applications, volume 131 of Mathematical Center Tracts. Mathematical Centre, Amsterdam
[Bac03] Backhouse R (2003) Program construction: calculating implementations from specifications. Wiley, New York
[BF01] Backhouse R, Fokkinga M (2001) The associativity of equivalence and the towers of Hanoi problem. Inf Process Lett 77(2–4):71–76
[BM06] Backhouse R, Michaelis D (2006) Exercises in quantifier manipulation. In: Uustalu T (ed) MPC, volume 4014 of Lecture Notes in Computer Science, pp 69–81. Springer, Berlin
[Boh07] Bohórquez JA (2007) An inductive theorem on the correctness of general recursive programs. Logic Journal of the IGPL 15(5–6):373–399
[Boh08] Bohórquez JA (2008) Intuitionistic logic according to Dijkstra's calculus of equational deduction. Notre Dame J Form Log 49(4):361–384
[DF84] Dijkstra EW, Feijen WHJ (1984) Een methode van programmeren. Academic Service, Den Haag. Also available as A method of programming. Addison-Wesley, Reading, 1988
[Dij68] Dijkstra EW (1968) Go to statement considered harmful. Commun ACM 11(3):147–148
[Dij75] Dijkstra EW (1975) Guarded commands, nondeterminacy and formal derivation of programs. Commun ACM 18(8):453–457
[Dij76] Dijkstra EW (1976) A discipline of programming. Prentice-Hall, Englewood Cliffs. With a foreword by C.A.R. Hoare, Prentice-Hall Series in Automatic Computation
[Dij94] Dijkstra EW (1994) How computing science created a new mathematical style. EWD 1073 in The writings of Edsger W. Dijkstra, 2000. http://www.cs.utexas.edu/users/EWD
[DS90] Dijkstra EW, Scholten CS (1990) Predicate calculus and program semantics. Springer, Berlin
[Dro82] Dromey RG (1982) How to solve it by computer. Prentice Hall, Englewood Cliffs
[Flo67] Floyd RW (1967) Assigning meanings to programs. In: Proceedings of the symposium on applied mathematics, American Mathematical Society XIX:19–32
[FvG96] Feijen WHJ, van Gasteren AJM (1996) Programming, proving, and calculation. In: Dean CN, Hinchey MG (eds) Teaching and learning formal methods. Academic Press, New York
[Gri81] Gries D (1981) The science of programming. Springer, Berlin
[GS93] Gries D, Schneider FB (1993) A logical approach to discrete math. Texts and Monographs in Computer Science. Springer, Berlin
[Heh76] Hehner ECR (1976) DO considered OD: a contribution to the programming calculus. Technical Report CSRG-75, University of Toronto, Computer Systems Research Group, Toronto
[Heh84] Hehner ECR (1984) Predicative programming. I, II. Commun ACM 27(2):134–143, 144–151
[Heh89] Hehner ECR (1989) Termination is timing. In: MPC: International conference on mathematics of program construction. LNCS, Springer, Berlin
[Heh90] Hehner ECR (1990) A practical theory of programming. Sci Comput Program 14(2–3):133–158
[Heh04a] Hehner ECR (2004) From boolean algebra to unified algebra. The Mathematical Intelligencer 26
[Heh04b] Hehner ECR (2004) A practical theory of programming, 2nd edn. Springer, New York
[Heh05] Hehner ECR (2005) Specified blocks. In: Meyer B, Woodcock J (eds) VSTTE, volume 4171 of Lecture Notes in Computer Science, pp 384–391. Springer, Berlin
[Heh06] Hehner ECR (2006) Retrospective and prospective for unifying theories of programming. In: Dunne S, Stoddart B (eds) UTP, volume 4010 of Lecture Notes in Computer Science, pp 1–17. Springer, Berlin
[Heh07] Hehner ECR (2007) Unified algebra. Int J Math Sci (WASET) 1(1):20–37 (electronic)
[HJ87] Hoare CAR, Jifeng H (1987) The weakest prespecification. Inf Process Lett 24(2):127–132
[HJ98] Hoare CAR, Jifeng H (1998) Unifying theories of programming. Prentice Hall, London
[Hoa69] Hoare CAR (1969) An axiomatic basis for computer programming. Commun ACM 12(10):576–583
[Hoa83] Hoare CAR (1983) An axiomatic basis for computer programming (reprint). Commun ACM 26(1):53–56
[Hoa84] Hoare CAR (1984) Programs are predicates. Philos Trans Roy Soc Lond Ser A 312(1522):475–489
[Hoa92] Hoare CAR (1992) Programs are predicates. In: Proceedings of the international conference on fifth generation computer systems, pp 211–218, ICOT, Japan. Association for Computing Machinery
[Kal90] Kaldewaij A (1990) Programming: the derivation of algorithms. International Series in Computer Science. Prentice-Hall, Englewood Cliffs
[Lif01] Lifschitz V (2001) On calculational proofs. Ann Pure Appl Logic 113(1–3):207–224
[Man74] Manna Z (1974) Mathematical theory of computation. McGraw-Hill, New York
[Man80] Manna Z (1980) Lectures on the logic of computer programming. Philadelphia, PA. With contributions by N. Dershowitz and R. Waldinger
[Mor90a] Morgan C (1990) Programming from specifications. Prentice Hall, Englewood Cliffs
[Mor90b] Morris JM (1990) Programs from specifications. In: Dijkstra EW (ed) Formal development of programs and proofs. Addison-Wesley, Reading
[MR84] Martin AJ, Rem M (1984) A presentation of the Fibonacci algorithm. Inf Process Lett 19
[vdS93] van de Snepscheut JLA (1993) What computing is all about. Texts and Monographs in Computer Science. Springer, New York
[vG90] van Gasteren AJM (1990) On the shape of mathematical arguments, volume 445 of Lecture Notes in Computer Science. Springer, Berlin
[vGB98] van Gasteren AJM, Bijlsma A (1998) An extension of the program derivation format. In: Gries D, de Roever WP (eds) PROCOMET, volume 125 of IFIP Conference Proceedings, pp 167–185. Chapman & Hall, London
[Wir71] Wirth N (1971) Program development by stepwise refinement. Commun ACM 14:221–227

Received 25 July 2007
Accepted in revised form 3 October 2009 by He Jifeng and Jim Woodcock
