# Program Derivation of Matrix Operations in GF

The original version of my undergraduate research presentation that I was graded on (I got an A, but this version is certainly inferior to the later version of the presentation, by which time I also had better insight into my results).

Published in: Technology

### Transcript

• 1. Program Derivation of Matrix Operations in GF
  Charles Southerland, Dr. Anita Walker, East Central University
• 2. The Galois Field
  ● A finite field is a finite set and two operators (analogous to addition and multiplication) over which certain properties hold.
  ● An important finding in Abstract Algebra is that all finite fields of the same order are isomorphic.
  ● A finite field is also called a Galois Field in honor of Évariste Galois, a significant French mathematician in the area of Abstract Algebra who died at age 20.
• 3. History of Program Derivation
  ● Hoare's 1969 paper An Axiomatic Basis for Computer Programming essentially created the field of Formal Methods in CS.
  ● Dijkstra's paper Guarded Commands, Nondeterminacy and Formal Derivation of Programs introduced the idea of program derivation.
  ● Gries' book The Science of Programming brings Dijkstra's paper to a level undergrad CS and Math majors can understand.
• 4. Guarded Command Language
  This is part of the language that Dijkstra defined:
  ● S1; S2 – Perform S1, and then perform S2.
  ● x := e – Assign the value of e to the variable x.
  ● if [b1 → S1] [b2 → S2] … fi – Execute exactly one of the guarded commands (i.e. S1, S2, …) whose corresponding guard (i.e. b1, b2, …) is true, if any.
  ● do [b1 → S1] [b2 → S2] … od – Execute the command if [b1 → S1] [b2 → S2] … fi until none of the guards are true.
• 5. The Weakest Precondition Predicate Transformer wp
  ● Consider the mapping wp: P ⨯ L → L, where P is the set of all finite-length programs and L is the set of all logical statements about the state of a computer.
  ● For S ∊ P and R ∊ L, wp(S, R) yields the "weakest" Q ∊ L such that execution of S from within any state satisfying Q yields a state satisfying R.
  ● With regard to this definition, we say a statement A is "weaker" than a statement B if and only if the set of states satisfying B is a proper subset of the set of states satisfying A.
• 6. Some Notable Properties of wp
  ● wp([S1; S2], R) = wp(S1, wp(S2, R))
  ● wp([x := e], R) = R, substituting e for x
  ● wp([if [b1 → S1] [b2 → S2] … fi], R) = (b1 ∨ b2 ∨ …) ∧ (b1 → wp(S1, R)) ∧ (b2 → wp(S2, R)) ∧ …
  ● wp([do [b1 → S1] [b2 → S2] … od], R) = (R ∧ ~b1 ∧ ~b2 ∧ …) ∨ wp([if [b1 → S1] [b2 → S2] … fi], R) ∨ wp([if …], wp([if …], R)) ∨ wp([if …], wp([if …], wp([if …], R))) ∨ … (for finitely many recursions)
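As a small worked example of the first two properties (added here, not from the slides), applying the substitution rule through a sequential composition:

```latex
\begin{align*}
wp(x := x+1,\; x > 5) &= (x+1 > 5) = (x > 4)\\
wp([x := x+1;\; x := 2x],\; x < 10)
  &= wp(x := x+1,\; wp(x := 2x,\; x < 10))\\
  &= wp(x := x+1,\; 2x < 10)\\
  &= (2(x+1) < 10) = (x < 4)
\end{align*}
```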
• 7. The Program Derivation Process
  For precondition Q ∊ L and postcondition R ∊ L, find S ∊ P such that Q = wp(S, R).
  ● Gather as much information as possible about the precondition and postcondition.
  ● Reduce the problem to previously solved ones whenever possible.
  ● Look for a loop invariant that gives clues on how to implement the program.
  ● If you are stuck, consider alternative representations of the data.
• 8. Conditions and Background for the Multiplicative Inverse in GF
  ● The precondition is that a and b be coprime natural numbers.
  ● The postcondition is that x be the multiplicative inverse of a modulo b.
  ● Since the greatest common divisor of a and b is 1, Bézout's Identity yields ax + by = 1, where x is the multiplicative inverse of a.
  ● Recall that gcd(a, b) = gcd(a−b, b) = gcd(a, b−a).
• 9. Analyzing Properties of the Multiplicative Inverse in GF
  ● Combining Bézout's Identity and the given property of gcd, we get
      ax + by = gcd(a, b)
              = gcd(a, b−a)
              = au + (b−a)v
              = au + bv − av
              = a(u−v) + bv
  ● Since ax differs from a(u−v) by a constant multiple of b, we get x ≡ (u−v) mod b.
  ● Solving for u, we see u ≡ (x+v) mod b, which leads us to wonder whether u and v may be linear combinations of x and y.
• 10. Towards a Loop Invariant for the Multiplicative Inverse in GF
  ● Rewriting Bézout's Identity using this, we get
      ax + by = a(1x + 0y) + b(0x + 1y)
              = a((1x + 0y) + y − y) + b(0x + 1y)
              = a(x + y − y) + by
              = a(x + y) − ay + by
              = a(x + y) + (b−a)y
              = au + (b−a)y    (so we deduce that v = y)
  ● Note that assigning c := b−a and z := x+y would yield ax + by = az + cy.
• 11. Finding the Loop Invariant for the Multiplicative Inverse in GF
  ● Remembering that u and v are linear combinations of x and y, we see that reducing the values of a and b as in the Euclidean Algorithm gives
      a1·u1 + b1·v1 = a1(c_a1x·x + c_a1y·y) + b1(c_b1x·x + c_b1y·y)
                    = a2(c_a1x·x + c_a1y·y) + b1((c_b1x − c_a1x)x + (c_b1y − c_a1y)y)
                    = …
  ● After the completion of the Euclidean Algorithm, we will have gcd(a, b)·(c_xf·x + c_yf·y) = 1.
• 12. Algorithm for the Multiplicative Inverse in GF
  multinv(a, b) {
      x := 1; y := 0
      do [a > b → a := a − b; x := x + y]
         [b > a → b := b − a; y := y + x]
      od
      return x
  }
• 13. C Implementation of the Multiplicative Inverse in GF
• 14. Conditions and Background of the Matrix Product in GF
  ● The precondition is that the number of columns of A and the number of rows of B are equal.
  ● The postcondition is that C is the matrix product of A and B.
  ● The definition of the matrix product allows the elements of C to be built one at a time, which seems to be a particularly straightforward approach to the problem.
• 15. Loop Invariant of the Matrix Product in GF
  ● A good loop invariant would be that all elements of C which either have a row index less than i, or else have a row index equal to i and a column index less than or equal to j, have the correct value.
  ● The loop clearly heads toward termination given that C is filled from left to right and top to bottom (which will occur if the value of j is increased modulo the number of columns after every calculation, increasing i by 1 every time j returns to 0).
• 16. C Implementation of the Matrix Product in GF
• 17. Conditions and Background of the Determinant of a Matrix in GF
  ● The precondition is that the number of rows and the number of columns of A are equal.
  ● The postcondition is that d is the determinant of A.
  ● The naive approach to the problem is not very efficient, but it is much easier to explain and produces cleaner code.
• 18. The Loop Invariant of the Determinant of a Matrix in GF
  ● The loop invariant of the naive determinant algorithm is that d is equal to the sum, for all k < j, of the product of A_1k and the determinant of the matrix formed by all the elements of A except those in the first row and kth column.
  ● The loop progresses toward termination so long as the difference between the number of columns and j decreases.
• 19. Conditions and Background of the Cofactor Matrix in GF
  ● The precondition is that the number of rows and the number of columns of A are equal.
  ● The postcondition is that C is the cofactor matrix of A.
  ● Like the matrix product, the cofactor matrix can readily be generated one element at a time.
• 20. The Loop Invariant of the Cofactor Matrix in GF
  ● The loop invariant of the cofactor matrix algorithm is, like that of the matrix multiplication algorithm, that all entries of C whose row index is less than i, or whose row index equals i and whose column index is less than j, hold the correct cofactor value.
• 21. Conditions and Background of the Matrix Inverse in GF
  ● The precondition is that the number of rows and the number of columns of A are equal, and that the determinant of A be coprime to the order of GF.
  ● The postcondition is that B is the matrix inverse of A.
  ● Like the matrix product and the cofactor matrix, the matrix inverse can readily be generated one element at a time.
• 22. Loop Invariant of the Matrix Inverse in GF
  ● The loop invariant of the matrix inverse algorithm is, like that of the matrix multiplication algorithm, that all entries of B whose row index is less than i, or whose row index equals i and whose column index is less than j, hold the correct value.
• 23. Applications
  ● Matrices over GF have many applications within Information Theory, including Compression, Digital Signal Processing, and Cryptography.
  ● The classic Hill cipher is a well-known example of a use of matrix operations over GF.
  ● Most modern block ciphers also use matrices over GF, specifically in the S-boxes of ciphers like Rijndael (a.k.a. AES).