
# Program Derivation of Matrix Operations in GF


The original version of my undergraduate research presentation that I was graded on (I got an A, but this version is certainly inferior to the later version of the presentation, by which time I also had better insight into my results).


### Program Derivation of Matrix Operations in GF

1. Program Derivation of Matrix Operations in GF
   Charles Southerland, Dr. Anita Walker, East Central University
2. The Galois Field
   ● A finite field is a finite set and two operators (analogous to addition and multiplication) over which certain properties hold.
   ● An important finding in Abstract Algebra is that all finite fields of the same order are isomorphic.
   ● A finite field is also called a Galois Field in honor of Évariste Galois, a significant French mathematician in the area of Abstract Algebra who died at age 20.
3. History of Program Derivation
   ● Hoare's 1969 paper *An Axiomatic Basis for Computer Programming* essentially created the field of Formal Methods in CS.
   ● Dijkstra's paper *Guarded Commands, Nondeterminacy and Formal Derivation of Programs* introduced the idea of program derivation.
   ● Gries' book *The Science of Programming* brings Dijkstra's paper to a level undergrad CS and Math majors can understand.
4. Guarded Command Language
   This is part of the language that Dijkstra defined:
   ● S1; S2 – Perform S1, and then perform S2.
   ● x := e – Assign the value of e to the variable x.
   ● if [b1 → S1] [b2 → S2] … fi – Execute exactly one of the guarded commands (i.e. S1, S2, …) whose corresponding guard (i.e. b1, b2, …) is true, if any.
   ● do [b1 → S1] [b2 → S2] … od – Execute the command if [b1 → S1] [b2 → S2] … fi repeatedly until none of the guards are true.
5. The Weakest Precondition Predicate Transformer wp
   ● Consider the mapping wp: P × L → L, where P is the set of all finite-length programs and L is the set of all logical statements about the state of a computer.
   ● For S ∈ P and R ∈ L, wp(S, R) yields the “weakest” Q ∈ L such that execution of S from within any state satisfying Q yields a state satisfying R.
   ● With regard to this definition, we say a statement A is “weaker” than a statement B if and only if the set of states satisfying B is a subset of the set of states satisfying A (i.e. B implies A).
6. Some Notable Properties of wp
   ● wp([S1; S2], R) = wp(S1, wp(S2, R))
   ● wp([x := e], R) = R with e substituted for x
   ● wp([if [b1 → S1] [b2 → S2] … fi], R) = (b1 ∨ b2 ∨ …) ∧ (b1 ⇒ wp(S1, R)) ∧ (b2 ⇒ wp(S2, R)) ∧ …
   ● wp([do [b1 → S1] [b2 → S2] … od], R) = (R ∧ ¬b1 ∧ ¬b2 ∧ …) ∨ wp(IF, R) ∨ wp(IF, wp(IF, R)) ∨ wp(IF, wp(IF, wp(IF, R))) ∨ … (for finitely many recursions), where IF abbreviates if [b1 → S1] [b2 → S2] … fi
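As a worked illustration of the first two rules (this example is mine, not on the original slides), the assignment rule is pure substitution, and the composition rule works backwards through a sequence:

```latex
% Assignment rule: substitute the expression for the variable.
wp(x := x + 1,\ x > 0) \;=\; (x + 1 > 0) \;\equiv\; (x \ge 0)
  \quad\text{(over the integers)}

% Composition rule: compute wp of the second statement first.
wp(x := x + 1;\ x := 2x,\ x > 2)
  \;=\; wp(x := x + 1,\ wp(x := 2x,\ x > 2))
  \;=\; wp(x := x + 1,\ 2x > 2)
  \;=\; (2(x + 1) > 2) \;\equiv\; (x > 0)
```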
7. The Program Derivation Process
   For precondition Q ∈ L and postcondition R ∈ L, find S ∈ P such that Q ⇒ wp(S, R).
   ● Gather as much information as possible about the precondition and postcondition.
   ● Reduce the problem to previously solved ones whenever possible.
   ● Look for a loop invariant that gives clues on how to implement the program.
   ● If you are stuck, consider alternative representations of the data.
8. Conditions and Background for the Multiplicative Inverse in GF
   ● The precondition is that a and b be coprime natural numbers.
   ● The postcondition is that x be the multiplicative inverse of a modulo b.
   ● Since the greatest common divisor of a and b is 1, Bézout's Identity yields ax + by = 1, where x is the multiplicative inverse of a.
   ● Recall that gcd(a, b) = gcd(a - b, b) = gcd(a, b - a).
9. Analyzing Properties of the Multiplicative Inverse in GF
   ● Combining Bézout's Identity and the given property of gcd, we get
     ax + by = gcd(a, b)
             = gcd(a, b - a)
             = au + (b - a)v
             = au + bv - av
             = a(u - v) + bv
   ● Since ax differs from a(u - v) by a constant multiple of b, we get x ≡ (u - v) mod b.
   ● Solving for u, we see u ≡ (x + v) mod b, which leads us to wonder if u and v may be linear combinations of x and y.
10. Towards a Loop Invariant for the Multiplicative Inverse in GF
   ● Rewriting Bézout's Identity using this, we get
     ax + by = a(1x + 0y) + b(0x + 1y)
             = a((1x + 0y) + y - y) + b(0x + 1y)
             = a(x + y - y) + by
             = a(x + y) - ay + by
             = a(x + y) + (b - a)y
             = au + (b - a)y (so we deduce that v = y)
   ● Note that assigning c := b - a and z := x + y would yield ax + by = az + cy.
11. Finding the Loop Invariant for the Multiplicative Inverse in GF
   ● Remembering that u and v are linear combinations of x and y, we see that reducing the values of a and b as in the Euclidean Algorithm gives
     a1·u1 + b1·v1 = a1(ca1x·x + ca1y·y) + b1(cb1x·x + cb1y·y)
                   = a2(ca1x·x + ca1y·y) + b1((cb1x - ca1x)·x + (cb1y - ca1y)·y)
                   = …
   ● After the completion of the Euclidean Algorithm, we will have gcd(a, b)·(cxf·x + cyf·y) = 1, where cxf and cyf are the final coefficients.
12. Algorithm for the Multiplicative Inverse in GF

        multinv(a, b) {
            x := 1; y := 0
            do [a > b → a := a - b; x := x + y]
               [b > a → b := b - a; y := y + x]
            od
            return x
        }
13. C Implementation of the Multiplicative Inverse in GF
14. Conditions and Background of the Matrix Product in GF
   ● The precondition is that the number of columns of A and the number of rows of B are equal.
   ● The postcondition is that C is the matrix product of A and B.
   ● The definition of the matrix product allows the elements of C to be built one at a time, which seems to be a particularly straightforward approach to the problem.
15. Loop Invariant of the Matrix Product in GF
   ● A good loop invariant would be that all elements of C which either have a row index less than i, or else have a row index equal to i and a column index less than or equal to j, have the correct value.
   ● The loop clearly heads toward termination given that C is filled from left to right, from top to bottom (which will occur if the value of j is increased modulo the number of columns after every calculation, increasing i by 1 every time j returns to 0).
16. C Implementation of the Matrix Product in GF
17. Conditions and Background of the Determinant of a Matrix in GF
   ● The precondition is that the number of rows and the number of columns of A are equal.
   ● The postcondition is that d is the determinant of A.
   ● The naive approach to the problem is not very efficient, but it is much easier to explain and produces cleaner code.
18. The Loop Invariant of the Determinant of a Matrix in GF
   ● The loop invariant of the naive determinant algorithm is that d is equal to the sum, over all k < j, of (-1)^(k+1) times A1k times the determinant of the matrix formed by all the elements of A except those in the first row and kth column (the alternating sign is required by the cofactor expansion).
   ● The loop progresses toward termination so long as the difference between the number of columns and j decreases.
19. Conditions and Background of the Cofactor Matrix in GF
   ● The precondition is that the number of rows and the number of columns of A are equal.
   ● The postcondition is that C is the cofactor matrix of A.
   ● Like the matrix product, the cofactor matrix can readily be generated one element at a time.
20. The Loop Invariant of the Cofactor Matrix in GF
   ● The loop invariant of the cofactor matrix algorithm is, like that of the matrix multiplication algorithm, that all entries of C whose row index is less than i, or whose row index is equal to i and whose column index is less than j, have the correct value.
21. Conditions and Background of the Matrix Inverse in GF
   ● The precondition is that the number of rows and the number of columns of A are equal, and that the determinant of A be coprime to the order of GF.
   ● The postcondition is that B is the matrix inverse of A.
   ● Like the matrix product and the cofactor matrix, the matrix inverse can readily be generated one element at a time.
22. Loop Invariant of the Matrix Inverse in GF
   ● The loop invariant of the matrix inverse algorithm is, like that of the matrix multiplication algorithm, that all entries of B whose row index is less than i, or whose row index is equal to i and whose column index is less than j, have the correct value.
23. Applications
   ● Matrices over GF have many applications within Information Theory, including Compression, Digital Signal Processing, and Cryptography.
   ● The classic Hill cipher is a well-known example of a use of matrix operations over GF.
   ● Most modern block ciphers also use arithmetic over GF: the S-box of Rijndael (a.k.a. AES) is built from the multiplicative inverse in GF(2^8), and its MixColumns step is a matrix multiplication over GF(2^8).