
- 1. Security Procedures. Y.C. Stamatiou, Department of Mathematics, University of Ioannina, and Research and Academic Computer Technology Institute. Master Program in Web Science, Veroia, March 2010
- 2. Cryptography! It is all about the following simple, but highly important, scenario:
- 3. Cryptanalysis
- 4. What is used in Cryptology?
  - Cryptography: linear algebra, abstract algebra, number theory
  - Cryptanalysis: probability, statistics, combinatorics, computing
  - But the foundations lie in Complexity Theory!
  - In essence, cryptology resulted from a “collaboration” between Number Theory and Complexity Theory!
- 5. Turing machine: the mathematical model of the computer! (Alan Turing)
  - Infinite tape divided into cells (memory)
  - Each cell can hold one input/output symbol, usually a bit (0 or 1), or the blank (#)
  - A head that can read/write a cell and move about on the tape
  - A “decision making” mechanism (state transitions)
- 6. An algorithm! The “program” below computes the difference between two positive integers m and n (only if m > n, otherwise it “returns” 0), given in the form 0^m 1 0^n on the tape of the Turing machine (isn’t it, a bit, reminiscent of good, old Assembly?). Each entry is (next state, symbol written, head move), with R = move right and L = move left:

          q0          q1          q2          q3          q4          q5          q6
  0   (q1,#,R)    (q1,0,R)    (q3,1,L)    (q3,0,L)    (q4,0,L)    (q5,#,R)    - (stops)
  1   (q5,#,R)    (q2,1,R)    (q2,1,R)    (q3,1,L)    (q4,#,L)    (q5,#,R)    - (stops)
  #   - (hangs)   - (hangs)   (q4,#,L)    (q0,#,R)    (q6,0,R)    (q6,#,R)    - (stops)
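To make the table concrete, here is a minimal simulator for this machine, written as a sketch: the transitions follow the standard textbook subtraction ("monus") machine that the slide's table transcribes, with the slide's Greek move letters rendered as R/L.

```python
# A minimal Turing-machine simulator running the slide's subtraction
# program: on input 0^m 1 0^n it leaves 0^(m-n) on the tape (all blanks
# when m <= n). q6 is the halting state; '#' is the blank symbol.

# Transition table: (state, symbol) -> (new state, written symbol, move)
DELTA = {
    ('q0', '0'): ('q1', '#', 'R'), ('q0', '1'): ('q5', '#', 'R'),
    ('q1', '0'): ('q1', '0', 'R'), ('q1', '1'): ('q2', '1', 'R'),
    ('q2', '0'): ('q3', '1', 'L'), ('q2', '1'): ('q2', '1', 'R'),
    ('q2', '#'): ('q4', '#', 'L'),
    ('q3', '0'): ('q3', '0', 'L'), ('q3', '1'): ('q3', '1', 'L'),
    ('q3', '#'): ('q0', '#', 'R'),
    ('q4', '0'): ('q4', '0', 'L'), ('q4', '1'): ('q4', '#', 'L'),
    ('q4', '#'): ('q6', '0', 'R'),
    ('q5', '0'): ('q5', '#', 'R'), ('q5', '1'): ('q5', '#', 'R'),
    ('q5', '#'): ('q6', '#', 'R'),
}

def run(m, n):
    tape = dict(enumerate('0' * m + '1' + '0' * n))  # sparse tape
    state, pos = 'q0', 0
    while state != 'q6':
        state, sym, move = DELTA[(state, tape.get(pos, '#'))]
        tape[pos] = sym
        pos += 1 if move == 'R' else -1
    return sum(1 for s in tape.values() if s == '0')  # count 0's left

print(run(5, 3))  # 2
print(run(2, 4))  # 0
```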
- 7. Computation resources
  - Memory (number of tape cells/memory locations used)
  - Time (number of movements of the read/write head)
  - Time/space complexity functions t(n), s(n), where n is the size of the input
  - It is important not to have combinatorial explosion for these functions, so as to avoid exponential increase in time/space requirements as the input size increases
  - The complexity functions that avoid combinatorial explosion are called polynomial
  - An important note! The size of, e.g., an array or a list of numbers is roughly equal to the number of elements, but the size of an integer n is not n: it is log n (the base is immaterial)!
- 8. Observe how the functions that are bounded from above by a polynomial have a “reasonable” rate of increase!
- 9. Two important time complexity classes of problems
  - P: problems for which there exists a polynomial-time deterministic Turing machine (algorithm) that solves them
  - NP: problems for which no polynomial-time deterministic Turing machine that solves them has been discovered yet, but for which a polynomial-time non-deterministic Turing machine exists!
- 10. Integers! “God made the integers; all else is the work of man.” Leopold Kronecker (1823–1891)
- 11. Primes: the building blocks of integers!
  - Prime numbers are integers greater than 1 whose only divisors are 1 and themselves
    - i.e., they cannot be written as a product of smaller integers
  - e.g. 2, 3, 5, 7 are prime, but 4, 6, 8, 9, 10 are not
  - Prime numbers are central to number theory
  - The list of primes less than 200 is:
    - 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199
  - The set of primes is infinite (Euclid)
  (From Wolfram Demonstration Projects)
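The list above can be regenerated with the classic sieve of Eratosthenes, sketched here:

```python
# Sieve of Eratosthenes: recover the slide's list of primes below 200.
def primes_below(limit):
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # every multiple of p from p*p upward is composite
            for multiple in range(p * p, limit, p):
                is_prime[multiple] = False
    return [p for p in range(limit) if is_prime[p]]

ps = primes_below(200)
print(len(ps))         # 46
print(ps[:8], ps[-1])  # [2, 3, 5, 7, 11, 13, 17, 19] 199
```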
- 12. Prime Factorisation
  - To factor an integer n is to write it as a product of other numbers greater than 1
  - The prime factorisation of an integer n is its decomposition into a product of primes
    - e.g. 91 = 7 x 13, 3600 = 2^4 x 3^2 x 5^2
  - Important! Factoring an integer is hard, compared to the ease of multiplying the factors together to generate the integer!
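A naive trial-division factorisation makes the examples checkable (and its slowness on large n is exactly the hardness the slide points at):

```python
# Trial-division prime factorisation: returns {prime: exponent}.
# Feasible only for small n; that asymmetry with multiplication is
# what RSA-style cryptography relies on.
def factorise(n):
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                 # whatever is left over is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorise(91))    # {7: 1, 13: 1}
print(factorise(3600))  # {2: 4, 3: 2, 5: 2}
```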
- 13. Relatively Prime Numbers & GCD
  - Two integers a and b are relatively prime if they have no common divisors other than 1
    - e.g. 8 & 15 are relatively prime: the divisors of 8 are 1, 2, 4, 8 and those of 15 are 1, 3, 5, 15, so 1 is the only common divisor
  - Conversely, we can determine the Greatest Common Divisor (GCD) by comparing prime factorizations and taking the least powers
    - e.g. 300 = 2^2 x 3^1 x 5^2 and 18 = 2^1 x 3^2, hence gcd(18, 300) = 2^1 x 3^1 x 5^0 = 6
  - Of course, GCDs are computed much faster with Euclid’s algorithm!
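Euclid's algorithm, mentioned in the last bullet, is a few lines: replace the pair (a, b) by (b, a mod b) until the remainder vanishes.

```python
# Euclid's algorithm: gcd via repeated remainders, no factorisation needed.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd(18, 300))  # 6, matching the least-powers computation
print(gcd(8, 15))    # 1: 8 and 15 are relatively prime
```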
- 14. Fermat's Little Theorem (FLT)
  - The following holds: a^(p-1) = 1 (mod p)
    - where p is prime and gcd(a, p) = 1 (i.e. a, p are coprime)
  - Also: a^p = a (mod p), for every integer a
  - A useful result in public key cryptography and primality testing
- 15. Euler Totient Function φ(n)
  - When doing arithmetic (addition/multiplication) modulo n:
    - the complete set of residues is 0..n-1 (i.e. the set of remainders when an integer is divided by n)
    - the reduced set of residues is those numbers (residues) which are relatively prime to n
  - e.g. for n = 10:
    - the complete set of residues is {0,1,2,3,4,5,6,7,8,9}
    - the reduced set of residues is {1,3,7,9}
  - The number of elements in the reduced set of residues is called the Euler Totient Function φ(n)
- 16. Euler Totient Function φ(n)
  - To compute φ(n) we need to count the number of residues to be excluded
  - In general this needs the prime factorization, but:
    - for p prime, φ(p) = p-1
    - for p, q distinct primes, φ(pq) = (p-1) x (q-1)
  - e.g.
    - φ(37) = 36
    - φ(21) = (3-1) x (7-1) = 2 x 6 = 12
- 17. Euler's Theorem
  - A generalisation of Fermat's Little Theorem
  - a^φ(n) = 1 (mod n)
    - for any a, n with gcd(a, n) = 1
  - e.g.
    - a = 3, n = 10, φ(10) = 4; hence 3^4 = 81 = 1 mod 10
    - a = 2, n = 11, φ(11) = 10; hence 2^10 = 1024 = 1 mod 11
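Both slides can be checked numerically: compute φ(n) by directly counting residues coprime to n (fine for small n; real systems use the factorisation formulas instead), then verify Euler's theorem on the slide's examples.

```python
# Euler's totient by direct count of residues coprime to n.
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(10), phi(21), phi(37))  # 4 12 36

# Euler's theorem: a^phi(n) = 1 (mod n) whenever gcd(a, n) = 1
print(pow(3, phi(10), 10))        # 1
print(pow(2, phi(11), 11))        # 1
```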
- 18. Primality Testing
  - We often need to find large prime numbers
  - Traditionally, sieve using trial division
    - i.e. divide by all numbers (primes) in turn less than the square root of the number
    - only works for small numbers
  - Alternatively, can use statistical primality tests based on properties of primes
    - all prime numbers satisfy the property
    - but some composite numbers, called pseudo-primes, also satisfy it
  - Can also use a slower deterministic primality test
- 19. The Miller-Rabin Test
  - A primality test based on Fermat’s Little Theorem (observe, however, that this theorem is not an “if and only if” theorem!)
  - This gives the Miller-Rabin primality test
  - A probabilistic, polynomial-time algorithm
  - The AKS primality test: a deterministic, polynomial-time algorithm
- 20. Algorithm: Miller-Rabin probabilistic primality test
  MILLER-RABIN(n, t)
  INPUT: an odd integer n ≥ 3 and security parameter t ≥ 1.
  OUTPUT: an answer “prime” or “composite”.
  1. Write n - 1 = 2^s r such that r is odd.
  2. For i from 1 to t do the following:
     2.1 Choose a random integer a, 2 ≤ a ≤ n - 2.
     2.2 Compute y = a^r mod n.
     2.3 If y ≠ 1 and y ≠ n - 1 then do the following:
         j ← 1.
         While j ≤ s - 1 and y ≠ n - 1 do the following:
            Compute y ← y^2 mod n.
            If y = 1 then return (“composite”).
            j ← j + 1.
         If y ≠ n - 1 then return (“composite”).
  3. Return (“prime”).
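A direct transcription of this pseudocode into Python, as a sketch:

```python
# The Miller-Rabin test, following the pseudocode on the slide.
import random

def miller_rabin(n, t=10):
    """Return 'composite' or 'prime' (meaning: probably prime)."""
    s, r = 0, n - 1
    while r % 2 == 0:                 # write n - 1 = 2^s * r, r odd
        s, r = s + 1, r // 2
    for _ in range(t):
        a = random.randint(2, n - 2)  # random base for this round
        y = pow(a, r, n)
        if y != 1 and y != n - 1:
            j = 1
            while j <= s - 1 and y != n - 1:
                y = pow(y, 2, n)
                if y == 1:            # nontrivial square root of 1 found
                    return "composite"
                j += 1
            if y != n - 1:
                return "composite"
    return "prime"

print(miller_rabin(97), miller_rabin(561))  # prime composite
```

561 = 3 · 11 · 17 is a Carmichael number, which fools the plain Fermat test but not Miller-Rabin.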
- 21. Probabilistic Considerations
  - If Miller-Rabin returns “composite”, the number is definitely not prime
  - Otherwise it is a prime or a pseudo-prime
  - The chance that a single test fails to detect a pseudo-prime is < 1/4
  - Hence, repeating the test with different random a, the confidence that n is prime after t tests is:
    - Pr(n prime after t tests) ≥ 1 - (1/4)^t
    - This converges exponentially fast to 1
    - e.g. for t = 10 this probability is > 0.99999
- 22. Prime Number Distribution
  - The prime number theorem states that primes near n occur roughly every ln(n) integers, thus prime numbers abound!
  - However, even numbers can be ignored immediately
  - Thus, in practice one needs to test only about 0.5 ln(n) numbers of size n to locate a prime
    - note this is only the “average”
    - sometimes primes are close together, and other times quite far apart
- 23. Chinese Remainder Theorem
  - Used to speed up modulo computations when working modulo a product of numbers
    - e.g. mod M = m_1 m_2 ... m_k
  - The Chinese Remainder Theorem lets us work in each modulus m_i separately
  - Since computational cost is proportional to size, this is faster than working in the full modulus M
- 24. Chinese Remainder Theorem
  - Can implement CRT in several ways
  - To compute A (mod M):
    - first compute all a_i = A mod m_i separately
    - determine the constants c_i = M_i x (M_i^(-1) mod m_i), where M_i = M/m_i
    - then combine the results to get the answer using: A = (Σ_i a_i c_i) (mod M)
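The combination step above can be sketched directly (the residues and moduli here are illustrative):

```python
# Chinese remaindering: combine residues a_i mod m_i into A mod M
# using A = sum(a_i * c_i) mod M, c_i = M_i * (M_i^{-1} mod m_i).
from math import prod

def crt(residues, moduli):
    M = prod(moduli)          # moduli must be pairwise coprime
    A = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        c_i = M_i * pow(M_i, -1, m_i)   # pow(x, -1, m): modular inverse
        A += a_i * c_i
    return A % M

print(crt([2, 3, 2], [3, 5, 7]))  # 23: the unique solution mod 105
```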
- 25. Primitive Roots
  - From Euler’s theorem we have a^φ(n) mod n = 1
  - Consider a^m = 1 (mod n), gcd(a, n) = 1
    - such an m must exist for m = φ(n), but may be smaller
    - once powers reach m, the cycle will repeat
  - If the smallest such m is m = φ(n), then a is called a primitive root
  - If p is prime, then the successive powers of a primitive root a “generate” the group mod p
  - These are useful but relatively hard to find
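For a small prime this can be made concrete: a is a primitive root mod p exactly when its multiplicative order equals φ(p) = p - 1. A brute-force sketch:

```python
# Brute-force search for primitive roots mod a prime p.
def order(a, p):
    """Multiplicative order of a mod p (assumes gcd(a, p) = 1)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def primitive_roots(p):
    return [a for a in range(2, p) if order(a, p) == p - 1]

print(primitive_roots(11))  # [2, 6, 7, 8]
```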
- 26. Discrete Logarithms
  - The inverse problem to exponentiation is to find the discrete logarithm of a number modulo p
  - That is, to find x such that y = g^x (mod p)
  - This is written as x = log_g y (mod p)
  - If g is a primitive root then the logarithm always exists, otherwise it may not, e.g.
    - x = log_3 4 mod 13 does not exist
    - x = log_2 3 mod 13 = 4 (e.g. by trying successive powers)
  - Whilst exponentiation is relatively easy, finding discrete logarithms is generally a computationally hard problem, much like the factoring problem.
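"Trying successive powers" is itself an algorithm, feasible only for tiny p; that infeasibility at cryptographic sizes is exactly the point. A sketch:

```python
# Discrete log by exhaustive search over the powers of g mod p.
def dlog(g, y, p):
    x, value = 1, g % p
    while x < p:
        if value == y % p:
            return x
        value = (value * g) % p
        x += 1
    return None               # y is not a power of g mod p

print(dlog(2, 3, 13))  # 4, since 2^4 = 16 = 3 (mod 13)
print(dlog(3, 4, 13))  # None: log_3 4 mod 13 does not exist
```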
- 27. One-Way Functions: Number Theory meets Complexity Theory!
  - A function f: D → R is called one-way if:
    - computing f(x) is “easy” (i.e. possible in polynomial time)
    - computing f^(-1)(y) for almost all images y is “hard”
  - e.g. (under the Discrete Logarithm assumption):
    - take a prime p and a generator g of Z_p*
    - f(x) = g^x (mod p)
- 28. Public key cryptography
- 29. Public key cryptography
  - Factoring related:
    - RSA, Rabin
  - Discrete-log related:
    - Diffie-Hellman (ElGamal)
    - Elliptic curves
  - Modern, lattice based:
    - Ajtai-Dwork: the only one for which a worst-case to average-case hardness reduction is known
    - Goldreich-Goldwasser-Halevi
- 30. RSA
  - Invented by Rivest, Shamir and Adleman in 1978
  - Based on the difficulty of factoring
  - Used to “hide” the size of the group Z_n*, since computing φ(n) = |Z_n*| requires the factorization of n
  - Factoring has not been reduced to RSA:
    - an algorithm that generates m from c does not give an efficient algorithm for factoring
  - On the other hand, factoring has been reduced to finding the private key:
    - there is an efficient algorithm for factoring given one that can find the private key
- 31. RSA Public-key Cryptosystem
  - What we need:
    - p and q, primes of approximately the same size
    - n = pq and φ(n) = (p-1)(q-1)
    - e ∈ Z_φ(n)*
    - d = e^(-1) mod φ(n)
  - Public Key: (e, n)
  - Private Key: d
  - Encode: for m ∈ Z_n, E(m) = m^e mod n
  - Decode: D(c) = c^d mod n
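A toy instance makes the key setup concrete. This is textbook RSA with no padding, and the parameters (p = 61, q = 53, e = 17, the classic small worked example) are far too small for real use; they are chosen only to make the arithmetic visible.

```python
# Toy RSA key generation, encoding and decoding.
p, q = 61, 53
n = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # e in Z_phi^*: gcd(e, phi) = 1
d = pow(e, -1, phi)          # d = e^{-1} mod phi(n)

def E(m): return pow(m, e, n)   # encode
def D(c): return pow(c, d, n)   # decode

m = 65
c = E(m)
print(d, c, D(c))  # 2753 2790 65
```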
- 32. RSA continued
  - Why it works:
    - D(c) = c^d mod n = c^d mod pq = m^(ed) mod pq = m^(1 + k(p-1)(q-1)) mod pq
    - = m · (m^(p-1))^(k(q-1)) mod pq = m · (m^(q-1))^(k(p-1)) mod pq
  - Chinese Remainder Theorem: if p and q are relatively prime, and a = b mod p and a = b mod q, then a = b mod pq
  - By Fermat: m · (m^(p-1))^(k(q-1)) = m mod p and m · (m^(q-1))^(k(p-1)) = m mod q
  - Hence D(c) = m mod pq
- 33. RSA computations
  - To generate the keys, we need to:
    - find two primes p and q: generate candidates and use primality testing to filter them
    - find e^(-1) mod (p-1)(q-1): use Euclid’s algorithm; takes time log^2(n)
  - To encode and decode:
    - compute m^e or c^d: use the power method; takes time log(e) log^2(n) and log(d) log^2(n) respectively
  - In practice, e is selected to be small so that encoding is fast
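The "power method" (square-and-multiply) referred to above can be sketched in a few lines; it uses about log2(e) squarings by scanning the exponent bits.

```python
# Square-and-multiply modular exponentiation.
def power_mod(b, e, n):
    result = 1
    b %= n
    while e > 0:
        if e & 1:                 # current exponent bit is 1
            result = (result * b) % n
        b = (b * b) % n           # square for the next bit
        e >>= 1
    return result

print(power_mod(65, 17, 3233))                        # 2790
print(power_mod(65, 17, 3233) == pow(65, 17, 3233))   # True
```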
- 34. Security of RSA
  - Warning: do not use this or any other algorithm naively!
  - Possible security holes:
    - Need to use “safe” primes p and q; in particular, p-1 and q-1 should have large prime factors
    - p and q should not be too close in size: an attack can search outward from sqrt(n)
    - e cannot be too small
    - Don’t use the same n for different e’s
    - You should always “pad”
- 35. Algorithm to factor given d and e
  - If an attacker has an algorithm that generates d from e, then he/she can factor n in PPT. A variant of the Miller-Rabin primality test:
  - Function TryFactor(e, d, n)
    1. write ed - 1 as 2^s r, r odd
    2. choose w at random, w < n
    3. v = w^r mod n
    4. if v = 1 then return (fail)
    5. while v ≠ 1 mod n:
       v_0 = v
       v = v^2 mod n
    6. if v_0 = n - 1 then return (fail)
    7. return (pass, gcd(v_0 + 1, n))
  - A Las Vegas algorithm: the probability of pass is > .5, and a pass returns p or q. Try until you pass.
  - Why it works: w^(2^s r) = w^(ed-1) = w^(kφ) = 1 mod n, so v_0^2 = 1 mod n and (v_0 - 1)(v_0 + 1) = k’n
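A sketch of TryFactor against the toy key n = 3233, e = 17, d = 2753 (illustrative parameters), with one extra guard for the unlikely case that the random w already shares a factor with n:

```python
# TryFactor: given both RSA exponents e and d, factor n.
# Las Vegas style: each attempt passes with probability > 1/2, so retry.
import random
from math import gcd

def try_factor(e, d, n):
    s, r = 0, e * d - 1
    while r % 2 == 0:                  # write ed - 1 = 2^s * r, r odd
        s, r = s + 1, r // 2
    w = random.randrange(2, n)
    g = gcd(w, n)
    if g > 1:
        return g                       # lucky: w already shares a factor
    v = pow(w, r, n)
    if v == 1:
        return None                    # fail
    while v != 1:                      # square until we reach 1 ...
        v0, v = v, pow(v, 2, n)
    if v0 == n - 1:
        return None                    # trivial square root of 1: fail
    return gcd(v0 + 1, n)              # ... v0 is a nontrivial sqrt of 1

n, e, d = 3233, 17, 2753               # toy RSA key (n = 61 * 53)
factor = None
while factor is None:                  # "try until you pass"
    factor = try_factor(e, d, n)
print(sorted([factor, n // factor]))   # [53, 61]
```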
- 36. RSA in the “Real World”
  - Part of many standards: PKCS, ITU X.509, ANSI X9.31, IEEE P1363
  - Used by: SSL, PEM, PGP, Entrust, …
  - The standards specify many details of the implementation, e.g.
    - e should be selected to be small, but not too small
    - “multi-prime” versions make use of n = pqr…; this makes decoding cheaper, especially in parallel (uses the Chinese Remainder Theorem)
- 37. Factoring in the Real World
  - Quadratic Sieve (QS):
    - used in 1994 to factor a 129-digit (428-bit) number; 1600 machines, 8 months
  - Number Field Sieve (NFS):
    - used in 1999 to factor a 155-digit (512-bit) number; 35 CPU-years; at least 4x faster than QS
  - The RSA Challenge numbers
- 38. ElGamal
  - Based on the difficulty of the discrete log problem
  - Invented in 1985
  - Digital signature and key-exchange variants:
    - DSA, based on ElGamal, is the US NIST standard (DSS)
    - incorporated in SSL (as is RSA)
    - public key used by TRW (avoided the RSA patent)
  - Works over various groups:
    - Z_p,
    - the multiplicative group of GF(p^n),
    - elliptic curves
- 39. ElGamal Public-key Cryptosystem
  - (G, *) is a group
  - α a generator of G
  - a ∈ Z_|G|
  - β = α^a
  - G is selected so that it is hard to solve the discrete log problem
  - Public Key: (α, β) and some description of G
  - Private Key: a
  - Encode: pick a random k ∈ Z_|G|; E(m) = (y_1, y_2) = (α^k, m * β^k)
  - Decode: D(y) = y_2 * (y_1^a)^(-1) = (m * β^k) * (α^(ka))^(-1) = m * β^k * (β^k)^(-1) = m
  - You need to know a to easily decode y!
- 40. ElGamal: Example
  - G = Z_11*
  - α = 2
  - a = 8
  - β = 2^8 (mod 11) = 3
  - Public Key: (2, 3), Z_11*
  - Private Key: a = 8
  - Encode: m = 7; pick random k = 4; E(m) = (2^4, 7 * 3^4) = (5, 6)
  - Decode: (5, 6); D(y) = 6 * (5^8)^(-1) = 6 * 4^(-1) = 6 * 3 (mod 11) = 7
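The example can be replayed in code; the group Z_11* is of course a toy, since real use needs a large group where the discrete log is hard.

```python
# Toy ElGamal over Z_11^*, reproducing the slide's numbers.
p = 11
alpha = 2                        # a generator of Z_11^*
a = 8                            # private key
beta = pow(alpha, a, p)          # 2^8 mod 11 = 3; public key is (alpha, beta)

def encode(m, k):
    # ciphertext pair (alpha^k, m * beta^k)
    return (pow(alpha, k, p), (m * pow(beta, k, p)) % p)

def decode(y1, y2):
    # m = y2 * (y1^a)^{-1} mod p
    return (y2 * pow(pow(y1, a, p), -1, p)) % p

y1, y2 = encode(7, k=4)
print((y1, y2), decode(y1, y2))  # (5, 6) 7
```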
- 41. Probabilistic Encryption
  - For RSA, one message goes to one cipher word. This means we might gain information by running E_public(M).
  - Probabilistic encryption maps every M to many C randomly. Cryptanalysts can’t tell whether C = E_public(M).
  - ElGamal is an example (based on the random k), but it doubles the size of the message.
- 42. Digital Signatures
  - We focus on electronic signatures that use public-key cryptography.
  - E.g. (based on RSA):
    - A key generation algorithm
      - same as in RSA encryption
    - A signing algorithm
      - same as decryption of M ∈ Z_N*, by C = D(M) = M^d mod N
    - A verification algorithm
      - same as encryption of C ∈ Z_N*, by M = E(C) = C^e mod N
      - can be calculated and verified by anyone
  - Concept of Blind Signatures…
- 43. Secret Sharing
  - Based on the following problem: assuming that there are N players, how can a dealer share a secret so that any group of t (< N) or more players can recreate the secret, but any group of fewer than t players cannot?
  - Such schemes are called (t, N)-threshold secret sharing schemes.
- 44. Shamir Secret Sharing Scheme
  - The dealer selects t-1 random coefficients, which define a polynomial f(x) of degree t-1 such that f(0) = S.
  - The dealer calculates f(i) for each player i. These are their private shares.
  - Any group of t or more players can recreate the polynomial and S (using Lagrange interpolation).
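A sketch of the scheme over the field Z_p (the prime modulus and the parameters below are illustrative assumptions; p must exceed both the secret and the number of players):

```python
# Shamir (t, N) secret sharing over Z_p, p prime.
import random

P = 2087                       # illustrative prime modulus

def share(secret, t, n):
    # random polynomial of degree t-1 with f(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]   # share i is (i, f(i))

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = S
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num = den = 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = (num * -x_j) % P
                den = (den * (x_i - x_j)) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

shares = share(1234, t=3, n=5)
print(reconstruct(shares[:3]), reconstruct(shares[2:5]))  # 1234 1234
```

Any 3 of the 5 shares suffice; 2 shares leave the secret information-theoretically undetermined.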
- 45. Threshold Encryption
  - In threshold encryption we have N authorities, and we want to encrypt a message so that any t or more authorities can decrypt it. Again, any group of fewer than t authorities will not be able to do so.
  - No trusted dealer.
  - Solutions are similar to Shamir’s scheme [CGS, Pedersen].
- 46. Zero-knowledge Proofs
  - Interactive protocols between two players, Prover and Verifier, in which the prover proves to the verifier, with high probability, that some statement is true.
  - They do not leak any information besides the veracity of this statement.
  - In the case of an honest-verifier ZKP, we can make the protocol non-interactive.
- 47. Zero-knowledge Proof Example
  - Let g_1, g_2 be generators of Z_q*.
  - The Prover claims that log_g1 v = log_g2 w (= x) for publicly known v, w, g_1, g_2:
    - P chooses a random z ∈ [1..q] and sends a = g_1^z, b = g_2^z
    - V selects a random c ∈ [1..q] and sends it
    - P sends r = z + cx
    - V verifies that g_1^r = a v^c and g_2^r = b w^c
  - Can be turned into non-interactive:
    - c = Hash(a, b, v, w)
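One round of this protocol can be simulated with toy parameters (a sketch: the modulus 23 and generators 5 and 7 are illustrative assumptions, and the response r is kept as a plain integer so that both verification identities hold directly):

```python
# Equality-of-discrete-logs proof (one round), over Z_23^*.
import random

p = 23
g1, g2 = 5, 7                  # both are primitive roots mod 23
x = 9                          # prover's secret: log_g1(v) = log_g2(w) = x
v, w = pow(g1, x, p), pow(g2, x, p)

# Prover commits
z = random.randrange(1, p)
a, b = pow(g1, z, p), pow(g2, z, p)

# Verifier challenges
c = random.randrange(1, p)

# Prover responds
r = z + c * x

# Verifier checks both identities: g1^r = a * v^c and g2^r = b * w^c
ok = (pow(g1, r, p) == (a * pow(v, c, p)) % p and
      pow(g2, r, p) == (b * pow(w, c, p)) % p)
print(ok)  # True
```

The checks pass because g1^r = g1^z · (g1^x)^c = a · v^c, and likewise for g2; the transcript (a, b, c, r) reveals nothing about x beyond the claimed equality.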
- 48. The Woo-Lam Authentication Protocol
  - Alice tries to prove her identity to Bob, but she does not share a key with Bob, only with Trent.
  - The protocol goes as follows:
    - Step 1: Alice declares her identity
    - Step 2: Bob provides a nonce challenge
    - Step 3: Alice returns the challenge encrypted with K_AT
    - Step 4: Bob passes this encrypted information to Trent for translation
    - Step 5: Trent translates the nonce and returns it to Bob; then Bob verifies the nonce
- 49. A weakness…
  - There is a protocol failure in Woo-Lam that comes from the fact that the connection between Bob-to-Trent’s message and Trent-to-Bob’s message is not strong enough.
  - The only “connection” comes from the fact that message 4 and message 5 happen shortly one after the other.
  - This weak association can be used in an attack where Eve impersonates Alice:
    - Eve tries to authenticate herself to Bob (or Bob’s computer) at about the same time as Alice.
    - Trent will respond to each at roughly the same time.
    - Eve intercepts both responses and swaps them.
  - Let us see how in a step-by-step description.
- 50. Details of the impersonation attack
  - Step 1: Eve, acting as both herself and Alice, attempts to authenticate herself to Bob as both herself and Alice.
  - Step 2: Bob, as he should, replies with two nonce challenges. Eve gets her nonce but, at the same time, intercepts the nonce directed to Alice.
  - Step 3: Eve answers both challenges. Eve, naturally, can only send a wrong reply on behalf of Alice. She can, however, swap her response with Alice’s before contacting Bob.
  - Step 4: Bob receives both responses and contacts Trent for translation.
  - Step 5: Trent responds. One response consists, as expected, of garbage. The other response, for Alice, is of course correct. Bob correctly gets back the challenge he issued for Alice, and then authenticates Eve as Alice!
- 51. A way round this problem
  - The problem was (again) that the last message was not tied to the identity of whom it corresponded to.
  - One simple fix is to make message 5 include Alice’s identity: Trent tells Bob who the response corresponds to. Then Bob will be able to tell that message 5’ does not correspond to Eve’s nonce!
  - One problem remains: Trent does not know which host Alice is trying to log onto. Eve might get Alice to log onto Eve’s computer, then start a logon in Alice’s name to Bob’s machine and get Alice to answer Bob’s challenges to Eve…
- 52. The Needham-Schroeder Key Exchange Protocol
  - Step 1: Alice tells Trent what she is requesting.
  - Step 2: Trent gives Alice the session key, and gives Alice a package to deliver to Bob.
  - Step 3: Bob gets the session key, and the identity of who he is talking with (verified because it came from Trent).
  - Step 4: Bob sends Alice a challenge.
  - Step 5: Alice answers the challenge.
- 53. An attack on Needham-Schroeder
  - In 1981, Denning and Sacco showed that if the session key is compromised, then Eve can make Bob think that he is communicating with Alice.
  - Assume the NS protocol took place, that Eve has recorded the first 3 steps, and that Eve has obtained the session key.
  - The following steps subvert NS:
    - Step 1: Eve replays step 3 from NS as if she were Alice.
    - Step 2: Bob gets this message and issues a challenge to Alice in the form of a new nonce. This challenge is intercepted by Eve.
    - Step 3: Since Eve knows the session key, she can respond correctly to the challenge.
  - The basic problem: messages can be replayed once the session key is compromised!
- 54. The moral?
- 55. We will look into how theory and practice meet using two working systems: e-Lotteries! e-Voting!
- 56. A protocol for the support of large-scale national lotteries
  - A real nationwide electronic lottery:
    - a large number of drawings per day
    - strict drawing times
    - a large number of expected players
    - preclusion of any participation in the number generation and winner identification processes
- 57. Special System Characteristics
  - Cryptographic robustness
  - Protection against various (premature & future) manipulations
  - Extensive real-time auditing facilities
  - Performance (time constraint) requirements
  - Incorporation of security mechanisms
  - High-availability system
- 58. An overview of the system
  - [System diagram] The lottery organization computer hosts two generators (Gen1, Gen2) and a Verifier, connected in a high-availability configuration; agencies send the coupon file & audit information over telephone lines; a data-to-optical-signal converter and optical fibre carry the result to the TV station.
- 59. Operational Requirements
  - Uniformly distributed numbers
  - Unpredictable results
  - Prevention of internal/external interference with the drawing mechanism & with the choice of winners
  - Constant monitoring towards early detection of interference attempts
- 60. Security & Safety Requirements
  - Confidentiality
    - no leaks of information
    - encryption methods
    - secure random number sources
  - Integrity
    - authentication required for every step
    - use of hash and MAC functions
  - State Stamping
    - detection of any past or future modification (e.g. of the coupon file)
    - mainly through cryptographic tools (e.g. hash functions)
- 61. Security & Safety Requirements
  - Availability
    - service all the authorized requests
    - component and data path replication
  - Accountability
    - detection of any unauthorized access to or modification of the system
    - authentication schemes are necessary
    - use of mechanisms for signing and commitment
- 62. Design considerations
  - Randomness sources
  - Seed commitment & number reproduction
  - State stamping
  - Seed processing
  - Signing & authenticating
- 63. Design Considerations: Randomness Sources
  - Common pseudorandom number generators (e.g. as given by Java)
    - advantages: uniformly distributed numbers
    - disadvantages: the algorithm is susceptible to clever attacks
  - Cryptographically secure PRNGs
    - advantages: handle the disadvantage above
    - disadvantages: based on deterministic algorithms; in principle the output could be guessed, given the initial state (guessing is intractable, however!)
  - Truly random number generators
    - advantages: non-deterministic method, truly random output
    - disadvantages: physical processes often obey specific distribution laws; they depend on environmental parameters (e.g. temperature); it is hard to reproduce their output
- 64. Design Considerations: Seed Commitment & Reproduction of Received Numbers
  - Elimination of any modification of the seeds, from the time they are produced until the time they are used.
  - A bit-commitment protocol certifies the integrity and accountability of the connection between the Generator and the Verifier.
  - The Verifier reproduces the numbers, with additional information from the generator, as a final check.
- 65. Design Considerations: State Stamping
  - Prevention of post-betting
  - Elimination of any coupon file modification
  - Fingerprint (hash value) of the coupon file:
    - check whether the hash value is the same before and after the draw
    - if the check fails, the protocol terminates immediately and reports the modification with highest priority
    - hash function used: rmd160 (RIPEMD-160)
- 66. Design Considerations: Seed Processing
  - Input (1): Seed 1, produced by the physical generator
  - Input (2): the hash value of the coupon file
  - Both inputs are fed to the Naor-Reingold pseudorandom function, which produces the final Seed 2
  - The NR function is initially seeded with a strong random key
  - Seed 2 does not depend only on the (online drawn) physical bits
- 67. Seed Processing: the Naor-Reingold function
  - The NR function key is a tuple <P, Q, g, a>, where:
    - P is a large prime (1000 bits)
    - Q is a large prime divisor of P-1 (200 bits)
    - g is an element of order Q in Z_P*
    - a = <a_0, a_1, …, a_n> is a uniformly distributed sequence of n+1 elements of Z_Q
  - For every n-bit input x = x_1…x_n, the NR function is:
    - f(x) = g^(a_0 · Π_{i: x_i = 1} a_i) mod P
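A sketch of the NR function with toy parameters (the real system uses a ~1000-bit P and ~200-bit Q; the values below, and the fixed key a, are illustrative assumptions):

```python
# Naor-Reingold function f(x) = g^(a_0 * prod_{x_i = 1} a_i) mod P.
P, Q = 23, 11            # Q = 11 divides P - 1 = 22
g = 2                    # 2 has order 11 in Z_23^* (2^11 mod 23 = 1)
a = [3, 7, 5, 9, 2]      # key: n + 1 = 5 uniform elements of Z_Q

def naor_reingold(x_bits):
    exponent = a[0]
    for a_i, bit in zip(a[1:], x_bits):
        if bit:                          # multiply in a_i when x_i = 1
            exponent = (exponent * a_i) % Q
    return pow(g, exponent, P)

y = naor_reingold([1, 0, 1, 1])
print(y, pow(y, Q, P))   # 16 1: the output lies in the order-Q subgroup
```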
- 68. Design Considerations: Signing and Authenticating
  - To boost confidentiality and accountability: after number generation, the numbers & seeds go through the signing process and the encryption scheme before being sent to the Verifier.
- 69. A high-level description of the protocol
  - Setup: Gen1 and the Verifier exchange keys for encryption, and a private/public key pair for signatures; Gen1 sits idle until the drawing initiation signal.
  - Gen1 collects random bits from the TRNG (Seed 1) and the hash value of the coupon file, and sends a bit-commitment & signature on the seeds.
  - Seed 1 and the hash value pass through the NR function (combined with XOR) to produce Seed 2, and the numbers are generated from the PRNG.
  - Gen1 encrypts and signs the seeds & numbers and sends them to the Verifier.
  - The Verifier verifies and decrypts the seeds & numbers, verifies that Gen1 committed to the true seeds, regenerates the numbers from the retrieved seeds, and checks the numbers: SUCCESS, otherwise the system failed.
- 70. Time Table
  - 6 min before the draw time: the Verifier sends the draw initiation signal to Gen1.
  - 3 min later: if the Verifier hasn’t received the numbers, he sends the initiation signal to Gen2.
  - Gen2 produces the numbers in 3 minutes, on time, with the same processes as Gen1.
- 71. Software random number generators
  - 2 algebraic generators:
    - BBS (proposed by Blum, Blum and Shub), one of the most frequently used cryptographically strong PRNGs
    - the RSA/Rabin generator, based on the RSA function
  - 2 block-cipher-based generators:
    - DES and AES
- 72. Physical random number generators <ul><li>We combine three physical generators with XOR </li></ul><ul><ul><li>One based on the phase differences between the two motherboard clocks (the VonNeumannBytes function) </li></ul></ul><ul><ul><li>ZRANDOM hardware generator </li></ul></ul><ul><ul><li>SG100 hardware generator </li></ul></ul>
- 73. Output Processing <ul><li>Outputs combined with two shuffling algorithms: </li></ul><ul><ul><li>Algorithm M (proposed by MacLaren and Marsaglia): takes two input sequences X n and Y n , and shuffles the sequence X n , using elements of the sequence Y n as indexes into X n </li></ul></ul><ul><ul><li>Algorithm B (proposed by Bays and Durham): similar to M, but with a single input sequence; the output is a shuffled instance of the input </li></ul></ul>
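Algorithm M above is short enough to sketch. This is an illustrative Python rendering with an assumed buffer size k, not the deck's actual implementation:

```python
def algorithm_m(xs, ys, k=64):
    """MacLaren-Marsaglia shuffle: elements of ys pick which buffered
    element of xs is emitted next, destroying short-range structure."""
    xs, ys = iter(xs), iter(ys)
    table = [next(xs) for _ in range(k)]  # buffer k elements of X
    out = []
    for y in ys:
        j = y % k                         # Y chooses the slot to emit
        out.append(table[j])
        try:
            table[j] = next(xs)           # refill the slot from X
        except StopIteration:
            break
    return out
```

Algorithm B differs in that the sequence being shuffled also supplies the indexes, so only one input sequence is needed.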
- 74. Output Processing <ul><li>Combine the output with XOR operation </li></ul><ul><ul><li>The four generators are combined with bit-wise XOR </li></ul></ul><ul><ul><li>The protocol moves periodically to different combinations of the generators </li></ul></ul>
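The bit-wise XOR combination step can be sketched as follows; the byte strings stand in for the outputs of the individual generators:

```python
from functools import reduce

def xor_combine(*streams):
    """Bit-wise XOR of equal-length byte outputs from several generators.
    For independent sources, the result is no weaker than the strongest one."""
    assert len({len(s) for s in streams}) == 1, "equal lengths required"
    return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*streams))
```

Rotating which generators feed `xor_combine`, as the slide describes, only changes the argument list, not the combiner.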
- 75. Output Testing <ul><li>Statistical tests (the Diehard battery) are applied on: </li></ul><ul><ul><li>The produced random numbers </li></ul></ul><ul><ul><li>The hardware random number generators </li></ul></ul><ul><li>Online tests </li></ul>
- 76. Considerations <ul><li>Many factors should be considered for a robust protocol designed to support an electronic lottery </li></ul><ul><ul><li>The generation of sequences that are exceptionally difficult to guess </li></ul></ul><ul><ul><li>The measures against many possible attacks on the generation and on the entire system operation </li></ul></ul><ul><ul><li>Business management process </li></ul></ul>
- 77. The Issue of Trust <ul><li>Trust plays a major role in the way people view and use information systems. </li></ul><ul><li>Trust should be the first priority for eGovernment applications. </li></ul><ul><li>Trust is of great importance for the success of eVoting. </li></ul>
- 78. Our Goal <ul><li>Propose and apply a “trust preserving” approach for handling the increasingly difficult complexity issues of building eVoting systems and, in general, trust-critical eGovernment applications. </li></ul><ul><li>Design and implementation of a secure and efficient eVoting platform with a focus on trust establishment </li></ul>
- 79. <ul><li>Decomposition of eVoting into layers containing basic trust components </li></ul><ul><li> facilitate the management of trust in each component </li></ul><ul><li>Concrete notion of trust components should be taken into consideration by designers of security critical applications in general </li></ul>Our approach
- 80. Pragmatic Trust <ul><li>Pragmatic approach to security critical applications should be based on layering . </li></ul><ul><li>The layered approach to trust reflects the “trust engineering” phases by combining technology, policy and public awareness issues. </li></ul>
- 81. The trust-centered approach
- 82. <ul><li>Scientific Soundness: </li></ul><ul><li>Crypto-based justification of all components </li></ul><ul><li>(e.g. cryptographically secure random number generators, homomorphic functions) </li></ul>Layers of the architecture
- 83. <ul><li>Implementation Soundness: </li></ul><ul><li>Formal methodology for the verification of the implementation </li></ul><ul><li>(applied periodically ) </li></ul>Layers of the architecture
- 84. <ul><li>Internal Operational Soundness: </li></ul><ul><li>High availability and fault tolerance </li></ul><ul><li>(self-auditing, self-checking, self-recovery from malfunction) </li></ul>Layers of the architecture
- 85. <ul><li>Externally Visible Operational Soundness: </li></ul><ul><li>Impossible for someone to interfere with the system from the outside </li></ul><ul><li>(quickly detectable) </li></ul>Layers of the architecture
- 86. <ul><li>Convincing the Public: </li></ul><ul><li>Crucial for the success of the eVoting system </li></ul><ul><li>(details available to the public, organize campaigns etc) </li></ul>Layers of the architecture
- 87. <ul><li>Scientific Soundness: </li></ul><ul><li>Crypto-based justification of all components </li></ul><ul><li>(e.g. cryptographically secure random number generators, homomorphic functions) </li></ul>Layers of the architecture
- 88. <ul><li>Privacy: </li></ul><ul><ul><li>only the final result is made public, no additional information about votes will leak. </li></ul></ul><ul><li>Robustness: </li></ul><ul><ul><li>the result reflects all submitted and well-formed ballots correctly, even if some voters and/or possibly some of the entities running the election cheat. </li></ul></ul><ul><li>Universal verifiability: </li></ul><ul><ul><li>after the election, the result can be verified by anyone. </li></ul></ul>Some basic requirements for a general e-Voting scheme
- 89. How to meet these requirements? <ul><li>we obviously need cryptographic techniques </li></ul><ul><li>but tamper resistant devices as well </li></ul><ul><li>and we need to provide </li></ul><ul><ul><li>appropriate protocols and mechanisms to meet these requirements </li></ul></ul><ul><ul><ul><li>which we will be discussing </li></ul></ul></ul><ul><ul><li>digital signatures to identify voters </li></ul></ul><ul><ul><li>data correctness and integrity proofs etc. </li></ul></ul>
- 90. Mixnets <ul><li>Mixnets </li></ul><ul><li>A mechanism for destroying the relationship between a voter and his vote through the application of consecutive vote permutations </li></ul><ul><li>Permutations without fixed points – derangements </li></ul><ul><li>Random walks in permutation groups: how many steps until the uniform distribution appears ( random walk mixing time)? </li></ul><ul><li>Votes are fully decrypted in the last step, but their link to the voters has now disappeared </li></ul>Parallelizing the process efficiently is, we conjecture, P-complete (reduction from CVP): given n inputs in some particular order, is input i routed to output j after the application of all the permutation stages of the mixnet?
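The consecutive-permutation idea can be sketched as below. This toy omits the re-encryption that real mixes perform at each stage, so it only illustrates the shuffling structure, not the cryptography:

```python
import random

def mix_stage(ballots, rng):
    """One mix server applies its own secret permutation to the batch."""
    perm = list(range(len(ballots)))
    rng.shuffle(perm)
    return [ballots[i] for i in perm]

def mixnet(ballots, stages=3, seed=None):
    """Consecutive permutation stages; after the last stage the position of a
    ballot no longer reveals which voter submitted it."""
    rng = random.Random(seed)
    for _ in range(stages):
        ballots = mix_stage(ballots, rng)
    return ballots
```

As long as at least one stage keeps its permutation secret, the voter-to-vote link is hidden.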
- 91. Homomorphic functions <ul><li>Homomorphic functions </li></ul><ul><li>Another mechanism for destroying the relationship between a voter and his vote, based on homomorphic functions (e.g. ElGamal encryption!) </li></ul><ul><li>Based on the computational difficulty of inverting these functions </li></ul><ul><li>Votes are never decrypted, but they are added, homomorphically, in their encrypted form! </li></ul><ul><li>The vote outcome is in encrypted form too and needs to be decrypted (this is not hard since the number of voters is usually small and a brute force inversion suffices; also use of Pollard’s rho, baby-step giant-step etc.) </li></ul>Efficient parallelization:
- 92. Registering voters <ul><li>It is not imperative that we have an independent X.509 PKI system in place (if a PKI is available, that’s fine!) </li></ul><ul><li>But we will assume we have an existing registration scheme in place </li></ul><ul><li>Thus, we can simply send something out to a voter by mail, like a PIN-mailer </li></ul><ul><ul><li>which he may use for electronic registration </li></ul></ul><ul><ul><li>at which stage a public key pair is generated for his use, and the private key is stored securely in a central server </li></ul></ul><ul><ul><ul><li>all using HSMs </li></ul></ul></ul><ul><ul><ul><li>the private key never leaves the HSM controlled environment </li></ul></ul></ul>
- 93. <ul><li>This registration could take place </li></ul><ul><ul><li>at home from the voter’s own work station </li></ul></ul><ul><ul><li>or at a polling station </li></ul></ul><ul><ul><ul><li>where he presents a fairly traditional voting card received in the mail for proper identification and counting </li></ul></ul></ul><ul><ul><ul><li>and uses an additional small slip with a PIN or similar to vote, as in the vote home scenario </li></ul></ul></ul><ul><ul><li>using the PIN for identification </li></ul></ul>
- 94. Counting the votes <ul><li>Setting aside the issues of anonymity etc., </li></ul><ul><ul><li>adding up votes electronically could be virtually instant </li></ul></ul><ul><li>In order to meet some or all of our requirements, the following property would be extremely useful </li></ul><ul><ul><li>Given any two votes, m 1 and m 2 , and their encryptions, P(m 1 ), P(m 2 ), assume </li></ul></ul><ul><ul><li>P(m 1 )+P(m 2 ) =P(m 1 +m 2 ), </li></ul></ul><ul><ul><ul><li>even better, if we can “randomise” to anonymise using individual random numbers r i for each vote, and we have the property </li></ul></ul></ul><ul><ul><li>P(m 1 ,r 1 )+P(m 2 ,r 2 ) =P(m 1 +m 2 ,R) </li></ul></ul><ul><ul><li>for some number R (actually, R=r 1 +r 2 ), then </li></ul></ul>
- 95. Counting by exploiting the homomorphism property <ul><li>we call P(.,.) a homomorphic public-key encryption if: </li></ul><ul><li>for any set of votes, there always exists some R (which will vary with the votes) with </li></ul><ul><li>∑ P(x i ,r i ) = P(∑x i ,R) </li></ul><ul><li>Now we have it (assuming that such a function exists, of course!): </li></ul><ul><ul><li>the voter </li></ul></ul><ul><ul><ul><li>casts the electronic vote x </li></ul></ul></ul><ul><ul><li>the application </li></ul></ul><ul><ul><ul><li>chooses a random number r and calculates P(x,r) </li></ul></ul></ul><ul><ul><ul><li>signs and forwards S A (P(x,r)) </li></ul></ul></ul><ul><ul><li>the authenticating server </li></ul></ul><ul><ul><ul><li>verifies the signature and forwards P(x,r) for counting </li></ul></ul></ul><ul><ul><li>the counting server </li></ul></ul><ul><ul><ul><li>calculates ∑P(x i ,r i ) = P(∑x i ,R) and decrypts to recover ∑x i , while R is discarded </li></ul></ul></ul><ul><ul><li>the result is available less than 1 minute after the closing of the polling stations </li></ul></ul>
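The property ∑P(x i , r i ) = P(∑x i , R) can be demonstrated with "exponential" ElGamal, where the vote sits in the exponent. This is one possible instantiation (the deck names ElGamal on the homomorphic-functions slide), sketched with insecure toy parameters:

```python
# Toy exponential-ElGamal parameters (assumed; far too small for real use).
p = 467                     # prime modulus
g = 2                       # generator of Z_467*
x = 127                     # private key
h = pow(g, x, p)            # public key

def enc(m, r):
    """P(m, r) = (g^r, g^m * h^r): the vote m is encoded in the exponent."""
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def combine(c1, c2):
    """Component-wise product: enc(m1, r1) * enc(m2, r2) = enc(m1+m2, r1+r2)."""
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def dec_small(c, bound=1000):
    """Decrypt, then brute-force the small exponent (the vote total)."""
    a, b = c
    gm = (b * pow(a, p - 1 - x, p)) % p   # g^m = b / a^x (Fermat inverse)
    for m in range(bound):
        if pow(g, m, p) == gm:
            return m
```

Individual votes are never decrypted; only the product of all ciphertexts is, exactly as in the counting-server step.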
- 96. CGS97 - The Protocol
- 97. CGS97 - The Protocol <ul><li>Initialization </li></ul><ul><ul><li>All authorities publish </li></ul></ul><ul><ul><ul><li>Their shares. </li></ul></ul></ul><ul><ul><ul><li>A threshold public key S . </li></ul></ul></ul><ul><ul><ul><li>Another generator h of the multiplicative group </li></ul></ul></ul><ul><ul><ul><ul><li>The legal votes will be h -1 , h 1 . </li></ul></ul></ul></ul><ul><li>Voting </li></ul><ul><ul><li>A voter encrypts his vote b i using E(h bi ,S;r) and publishes it along with a non-interactive proof of validity of the vote on a public board. </li></ul></ul><ul><li>Verification </li></ul><ul><ul><li>All voter's non interactive proofs are verified (publicly) and invalid votes are deleted. </li></ul></ul>
- 98. <ul><li>Tallying </li></ul><ul><ul><li>After the election ends, t authorities calculate E(h Total ,S;r total ) = ∏ E(h b i ,S;r i ) and publicly decrypt it to get h Total . Now, anyone can find Total (using a linear-time exhaustive search), which is the difference between the numbers of votes for the two candidates. These calculations can also be verified using non-interactive zero-knowledge proofs of equality of discrete logarithms. </li></ul></ul>
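The linear-time exhaustive search for Total can be replaced by baby-step giant-step (mentioned earlier in the deck), which needs only about √bound group operations. A sketch with toy numbers:

```python
from math import isqrt

def bsgs(g, y, p, bound):
    """Find m < bound with g^m = y (mod p), or None; O(sqrt(bound)) work."""
    s = isqrt(bound) + 1
    baby = {pow(g, j, p): j for j in range(s)}       # baby steps: g^j
    g_inv_s = pow(g, (p - 1 - s) % (p - 1), p)       # g^(-s) via Fermat
    gamma = y
    for i in range(s):                               # giant steps: y * g^(-i*s)
        if gamma in baby:
            return i * s + baby[gamma]
        gamma = (gamma * g_inv_s) % p
    return None
```

Here `bound` would be the number of voters, so even national-scale elections need only a few thousand steps.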
- 99. More on Scientific Soundness: Randomness <ul><li>Cryptographically strong pseudorandom generators: </li></ul><ul><li>1. Generators based on number-theoretic problems (BBS, RSA/Rabin, Discrete Log) </li></ul><ul><li>2. Generators employing symmetric (block) ciphers or secure hash functions (DES, AES, SHA, MD5) </li></ul><ul><li>In order to confuse cryptanalysts, the generation process can periodically use different combinations of algorithms. </li></ul><ul><li>shuffling algorithms (algorithms M and B) </li></ul><ul><li>XOR operation </li></ul>
- 100. More on Scientific Soundness: Randomness <ul><li>Physical random number generators: </li></ul><ul><li>1. The seed of any software random number generator must be drawn from a source of true randomness. </li></ul><ul><li>2. Combine more than one such generator (for example, with XOR) to avoid problems if some of the generators fail. </li></ul><ul><li>3. Use a pseudorandom function (Naor-Reingold) for processing the combination of the seeds. </li></ul>
- 101. <ul><li>Implementation Soundness: </li></ul><ul><li>Formal methodology for the verification of the implementation </li></ul><ul><li>(applied periodically) </li></ul>Layers of the architecture
- 102. Implementation Soundness <ul><li>Theoretically established cryptographic security disappears if a simple error occurs in the implementation code. </li></ul><ul><li>Testing the implementation is a crucial step in building a secure and trustworthy eVoting system. </li></ul>
- 103. Implementation Soundness <ul><li>There are a number of verification methodologies and tools that can be applied, based on various statistical tests. </li></ul>
- 104. <ul><li>Methodology for security risk analysis </li></ul><ul><li>Customised language for threat and risk modelling (UML based) + extended documentation (diagrams, tables) </li></ul><ul><li>Provides detailed guidelines </li></ul><ul><ul><li>Context identification </li></ul></ul><ul><ul><li>Risk identification </li></ul></ul><ul><ul><li>Risk Analysis </li></ul></ul><ul><ul><li>Risk Evaluation </li></ul></ul><ul><ul><li>Risk Treatment </li></ul></ul><ul><li>Proposes different tools and techniques for each step </li></ul><ul><li>+ software tool to integrate tools and document results </li></ul><ul><li>http://coras.sourceforge.net/ </li></ul>The CORAS Methodology Risk Analysis and Management (2/11)
- 105. <ul><li>Context Identification </li></ul><ul><ul><ul><li>Application scenario, assets, data flows </li></ul></ul></ul><ul><ul><ul><li>UML modeling language </li></ul></ul></ul><ul><li>Risk Identification </li></ul><ul><ul><ul><li>Identification of threats </li></ul></ul></ul><ul><ul><ul><li>Threat Diagrams </li></ul></ul></ul><ul><ul><ul><li>HazOp Analysis </li></ul></ul></ul><ul><ul><ul><li>Fault Tree Analysis </li></ul></ul></ul><ul><li>Risk Analysis </li></ul><ul><ul><ul><li>Specification of Likelihood, Consequence and Risk levels </li></ul></ul></ul><ul><ul><ul><li>Assessment of risks ( Likelihood of occurrence and Consequence ) </li></ul></ul></ul><ul><ul><ul><li>- Qualitative </li></ul></ul></ul><ul><ul><ul><li>- Quantitative ( through Fault Tree Analysis ) </li></ul></ul></ul><ul><li>Risk Evaluation </li></ul><ul><ul><ul><li>Risk categorization matrix </li></ul></ul></ul><ul><li>Risk Treatment </li></ul><ul><ul><ul><li>Countermeasures for critical risks </li></ul></ul></ul>Risk Analysis and Management ( 2/11) Basic steps of CORAS
- 106. Step 1: Context Identification Risk Analysis and Management (3/11) Abstract Class Diagram Activity Diagram Use Case Diagram
- 107. Step 1 ( continues ) Risk Analysis and Management (4/11) Example of Time Sequence Diagram (Decryption and Calculation of Result)
- 108. Step 2: Risk Identification Risk Analysis and Management (5/11) Part of high-level risk table. Columns: who/what causes it? | what is the incident? | what makes it possible?
Keyholders | Disclosure of secret keys | Corrupted keyholders (software)
Voter | Disclosure of credentials (id, password, certificate) to another person | Malicious voter
EA | Vote alteration | Corrupted EA
EA | Vote disclosure | Corrupted EA
EA | Tallying error | Software error
EA | Result alteration | Corrupted EA
Coercer | Voter coercion | Lack of monitoring during remote vote casting
Hacker | Vote alteration | Insufficient security
Hacker | Final result alteration | Insufficient security
- 109. Step 2 ( continues ) Risk Analysis and Management (6/11) Part of HazOp table. Asset: Keys K i (step 1). Columns: guideword | threats | likelihood | consequence | countermeasures
Manipulation | Alteration of key generator operation by an authorized person | Small | Keys are not secret or are not random | Testing of the key generator before elections; restricted access to software
Disclosure | Disclosure of some K i by their holders | Medium | Corruption in elections is possible | Key sharing (k out of k): for the overall key to be disclosed, all keyholders need to disclose their keys
Programming Errors | Errors in generator software | Medium | The keys are not randomly generated (fake randomness); the keys do not satisfy the requirements (e.g. length) | Application of good programming practices; extensive testing and debugging; use of secure random number generators
- 110. Step 2 ( Continues ) Risk Analysis and Management ( 7/11) Fault Tree Diagram (ITEM Toolkit)
- 111. <ul><li>Assessment of likelihood of occurrence of unwanted incidents </li></ul>Step 3: Risk Analysis Risk Analysis and Management (8/11) Calculation of threat occurrence likelihood. Columns: event | description | likelihood
Disclosure by voter: 1 | Disclosure of vote by voter | 0.05; 2 | Voter software error | 0.1; 3 | Malicious software in voter’s PC | 0.1
Stolen while in transit: 4 | SSL failure | 0.1
Disclosure by vote manager: 5 | Malicious Election Authority (vote manager) | 0.05; 6 | Malicious software in Election Authority (vote manager) | 0.05
Threat 1 | Disclosure of vote | events 1-6 involved | likelihood 0.38 (Medium)
- 112. Step 3 ( Continues ) Risk Analysis and Management (9/11) Qualitative assessment of Consequence using FMEA. Columns: ID | function/entity | failure mode | local effect | system-wide effect | causes | consequences
1 | GenerateElGamalParameters (size) | Size parameter is not available in the system config file | The public parameters may not be created | System initialization is not possible | Config file is not properly updated by the system administrator; access to the config file/database is not possible | Voting process may not begin
2 | Publish(elGamalParameters) | Bulletin Board is not updated with the public parameters | Keyholders may not produce keys | System initialization is not possible | Connection to the database is not possible | Voting process may not begin
- 113. Step 4: Risk Assessment Risk Analysis and Management (10/11) Risk Categorization Matrix Consequence Value Likelihood Value Rare Unlikely Possible Likely Certain Insignificant Minor 4, 10, 12, 30, 31 29, 32, 34, 35, 36, 39, 40 14 Moderate 3 8, 22 Major 1, 9, 21, 23, 26, 27 7, 17 , 20, 24, 25, 28, 33, 37 13 Catastrophic 2, 5, 11, 47 6, 15, 16, 18, 19, 41, 43, 44, 45, 46 38, 48, 49 42
- 114. Step 5: Risk Treatment (taken into account in the design/implementation phases) Columns: risk ID | description | risk level. Risks with regard to partial key disclosure or non-availability:
2 | Disclosure of some of the K i by their keyholders | Extreme
5 | Some of the K i are not available | Extreme
Treatment options - measures: The disclosure of partial keys would be catastrophic, as it would allow the decryption of individual votes and of the final result by unauthorized parties (or even the EA). Threshold cryptography techniques are used as a countermeasure. Such techniques require at least t out of n keyholders to cooperate for the conduction of the elections. Moreover, colluding interests of the keyholders discourage potential alliances among them. For ultimate security, we suggest that t=n, which means that all keyholders need to cooperate.
- 115. Layers of the Architecture <ul><li>Internal Operational Soundness: </li></ul><ul><li>High availability and fault tolerance </li></ul><ul><li>(self-auditing, self-checking, self-recovery from malfunction) </li></ul>
- 116. Internal Operation Soundness <ul><li>One of the most important issues in an eVoting application is the ability to self-check its internal operation and give warnings when needed. </li></ul><ul><li>Self-checking reduces human intervention and increases the responsibility of the system in case of a non-normal operation. </li></ul><ul><li>Self-checking approaches include: Intrusion Detection Systems, hardware-based software bootloaders for secure start-up (embedded systems) </li></ul>
- 117. Internal Operation Soundness <ul><li>All the internal activity of the system must be supervised by authorized personnel. </li></ul><ul><li>A personnel security plan must be deployed so that every person involved in the eVoting process is responsible for a different action. </li></ul><ul><li>The computer room where the servers are kept must be isolated: </li></ul><ul><li>1. A biometric access control system is needed. </li></ul><ul><li>2. The access control system must use cameras and movement detectors. </li></ul>
- 118. Layers of the Architecture <ul><li>Externally Visible Operational Soundness: </li></ul><ul><li>Impossible for someone to interfere with the system from the outside </li></ul><ul><li>(quickly detectable) </li></ul>
- 119. Externally Visible Operational Soundness <ul><li>It should be possible to detect erratic behavior or ascertain that everything is as expected: </li></ul><ul><li>Detect frequent eVoting system failures and attacks as fast as possible. </li></ul><ul><li>Possible failures and attacks: </li></ul><ul><li>Failure of a random number generator </li></ul><ul><li>System database damage </li></ul><ul><li>Forging votes </li></ul><ul><li>“ Bogus” voting servers </li></ul>
- 120. <ul><li>Operational physical security: </li></ul><ul><li> system operators’ actions should be subject to monitoring and logging </li></ul><ul><li>visual monitoring of the system and strict access control </li></ul><ul><li>a strict maintenance process for modifications of any part of the system is needed </li></ul><ul><li>Forging votes: </li></ul><ul><li>not possible – no double or non-authenticated votes are accepted by the system </li></ul>Externally Visible Operational Soundness
- 121. <ul><li>“ Bogus” servers: </li></ul><ul><li>the system should be protected from intrusions </li></ul><ul><li>a third party is needed to operate as a firewall between the servers and the vote database </li></ul><ul><li>The third party (central Election Authority): </li></ul><ul><li>1. Responsible for monitoring the operation of the voting servers. </li></ul><ul><li>2. Re-tallying to make sure that local EAs have valid local tallies </li></ul><ul><li>3. Analyze IDS information </li></ul>Externally Visible Operational Soundness
- 122. <ul><li>Convincing the Public: </li></ul><ul><li>Crucial for the success of the eVoting system </li></ul><ul><li>(details available to the public, organize campaigns etc) </li></ul>Layers of the architecture
- 123. <ul><li>“ Reassure the public that all measures have been taken in order to produce an error-free, secure and useful application.” </li></ul><ul><li>Such measures include: </li></ul><ul><li>1. Trust by increasing awareness (educate the public about security and data protection issues in non technical terms). </li></ul><ul><li>2. Trust by continual evaluation and accreditation (continual evaluation and certification of system’s operation, results of the evaluation publicly available). </li></ul><ul><li>3. Trust by independence of evaluators (the system must be verified by experts outside the organization). </li></ul><ul><li>4. Trust by open challenges (call for hackers). </li></ul>Layers of the architecture
- 124. 5. Trust by extensive logging and auditing of system activities (logging and auditing activities are scheduled on a daily basis, results available for public scrutiny). 6. Trust by contingency planning (failures in systems that offer e-services are not acceptable; the contingency plan is publicly available). 7. Trust by regulation and laws (the system operator introduces suitable legislation for the protection of the public in case of mishaps). 8. Trust by reputation and past experience (the involvement of engineers and experts should be accompanied by credentials that prove their expertise). Convincing the public
- 125. <ul><li>Bouncy Castle Java crypto library </li></ul><ul><li>OpenCA </li></ul><ul><li>OpenVPN </li></ul><ul><li>Apache Tomcat </li></ul><ul><li>SSL </li></ul><ul><li>NTP for obtaining time </li></ul><ul><li>PostgreSQL </li></ul><ul><li>HELENA IDS </li></ul><ul><li>Hardware RNGs for seeding </li></ul><ul><li>ATMEL’s ATMega8 microcontroller for secure bootstrapping of parameters and startup code </li></ul>System and implementation related aspects
- 126. Application server: Apache Tomcat <ul><li>Application Tier of the Election Authorities (EAs) </li></ul><ul><li>Execution of Java servlets ( servlet container ) </li></ul><ul><li>Responsible for: </li></ul><ul><ul><li>The presentation of the web interfaces to voters who connect to the EA </li></ul></ul><ul><ul><li>The recognition of the web page for which a request for an http (or https) connection was made by a voter’s web browser (supported web browsers include: Internet Explorer, Mozilla Firefox, Netscape Navigator, Opera, and Safari) </li></ul></ul><ul><ul><li>The identification and activation of the requested page, including the activation of all Java scripts linked to it (Tomcat has an internal compiler that translates Java Server Pages into Java servlets for execution, so that dynamic pages can be served to a voter’s web browser) </li></ul></ul><ul><ul><li>The execution of the requests contained in the servlets (e.g. PostgreSQL queries) </li></ul></ul><ul><ul><li>The implementation of secure https connections through the activation of the SSL module ( mod_ssl ) </li></ul></ul><ul><ul><li>The activation of load balancing support ( JK native connector ) </li></ul></ul>
- 127. Intrusion Detection System: HELENA <ul><li>Developed by RACTI </li></ul><ul><li>Constantly gathers and analyzes incoming and outgoing traffic from a target network (the network with the central EAs in our case) </li></ul><ul><li>Local computer agent </li></ul><ul><li>Master console agent </li></ul><ul><li>“ Not-used” request database </li></ul><ul><li>Threshold values – updates: target network is modeled with a directed graph with connections (vertices: computers + ports, edges: connection requests ) </li></ul>
- 128. Voter authentication: OpenCA <ul><li>Used for the identification of legal voters </li></ul><ul><li>Was installed to operate with Linux Ubuntu 6.10 ( Edgy Eft ) </li></ul><ul><li>Implementation of a Certification and a Registration Authority (CA and RA) </li></ul><ul><li>The CA and RA operate on the same server and use a PostgreSQL database </li></ul><ul><li>The voter submits a request for the receipt of a certificate – if entitled to vote, the certificate is issued and the user installs it in the web browser. Then the voter is allowed to access the local EA </li></ul><ul><li>Apache Tomcat receives and validates the certificates using SSL-based authentication protocols </li></ul>
- 129. Ensuring privacy in the network: OpenVPN <ul><li>Installed at the Central EAs using the client – server model: </li></ul><ul><ul><li>The VPN server has a static IP address and is accessible from the Internet. If the VPN server is behind NAT (Network Address Translation), then the NAT router should be configured to route traffic directed to the connection port of OpenVPN (default 1194 udp) to the VPN server. </li></ul></ul><ul><ul><li>After the installation of OpenVPN, certificates are constructed that allow clients (i.e. Local EAs) to request VPN connections. </li></ul></ul><ul><ul><li>After installing their certificates, the clients can request and establish secure VPN connections to the VPN server </li></ul></ul>
- 130. High availability and fault tolerance: mon, heartbeat, and coda (1/2) <ul><li>The " mon ", " heartbeat ", and " coda " tools from Linux Virtual Server </li></ul><ul><li>Mon is a monitor of the state of the servers and the network, heartbeat sends frequent signals so as to signify the availability of the servers, and coda implements a fault tolerant distributed file storage system (actually implemented by Slony-I in our case – see below) </li></ul><ul><li>There is also fake , which is an IP take-over module that employs ARP spoofing </li></ul>
- 131. High availability and fault tolerance: mon, heartbeat, and coda (2/2)
- 132. Database replication: Slony-I (1/2) <ul><li>An asynchronous data replication platform (with periodic updates) for PostgreSQL that supports cascading and failover . </li></ul><ul><li>It creates a cluster of local databases (in our case, the local databases of votes in each Local EA and in the Central EAs) </li></ul><ul><li>It creates mirrors, at a master database, of databases kept at slave databases </li></ul>
- 133. Database replication: Slony-I (2/2)
- 134. Heartbeat and Slony-I: An architecture for high availability and fault tolerance
- 135. Secure EA bootstrapping: MCUs with protected memory <ul><li>Secure storage of keys, voting parameters and bootstrapping code </li></ul><ul><li>Secure code execution and authentication of external applications </li></ul><ul><li>Low cost and easy to develop solution (as opposed to TPM based ones) that easily fits legacy hardware and software </li></ul><ul><li>New version of code and new keys can be dispatched over any insecure communication means in encrypted form – decryption takes place within the MCU </li></ul>
- 136. Performance aspects/ System simulation Network architecture: Directed Acyclic Graph (DAG) Traffic: open Jackson network of M/M/1 queues (Poisson distributed arrival rate – exponentially distributed service rate – one server – unlimited queue size) Voters’ arrival behavior: Weibull distributed with a peak around noon Simulation tool: Uses the CSIM 19 (C and C++) simulation library
- 137. Performance aspects/ System simulation Shifted Weibull distribution with parameters a = 2.5, b = 5 and t 0 = 8
Time interval | λ i (arrival rate): [8:00,10:00) 5.67 | [10:00,12:00) 10.32 | [12:00,14:00) 6.70 | [14:00,16:00) 2 | [16:00,18:00) 0.26 | [18:00,20:00) 0.026
Time interval | s i (incoming vote rate): [8:00,10:00) 0.11 | [10:00,12:00) 0.20 | [12:00,14:00) 0.13 | [14:00,16:00) 0.039 | [16:00,18:00) 0.005 | [18:00,20:00) 0.0005
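The voters' arrival model above can be sampled by inverting the shifted Weibull CDF. A sketch; the parameterization t = t 0 + b·(-ln(1-U))^(1/a) is an assumption consistent with the stated a = 2.5, b = 5, t 0 = 8:

```python
import math
import random

def shifted_weibull(a=2.5, b=5.0, t0=8.0, rng=random):
    """One arrival time (hours): t0 + b * (-ln(1 - U))^(1/a), U uniform(0,1).
    With these parameters the density peaks shortly after noon."""
    u = rng.random()
    return t0 + b * (-math.log(1.0 - u)) ** (1.0 / a)
```

Feeding such draws into the queueing network gives the time-varying arrival intensities tabulated above.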
- 138. Performance aspects/ System simulation
- 139. Summary <ul><li>We have presented a general, trust-centered, layered approach towards trust building in eVoting and, more generally, eGovernment applications. </li></ul><ul><li>This approach is based on a design process that incorporates risk analysis/management methodologies for security critical systems (e.g. CORAS) </li></ul><ul><li>Large-scale simulation results to evaluate the architecture’s efficiency as a function of the voter population size </li></ul><ul><li>Evaluated during a mock-up election for the members of the Western Greece sector of the Technical Chamber of Greece – useful feedback that was incorporated in the current version of the eVoting platform </li></ul><ul><li>Project site: www.pnyx.cti.gr </li></ul>
- 140. Elliptic Curve Cryptography <ul><li>Based on groups which are defined on elliptic curves. </li></ul><ul><li>Elliptic Curve: </li></ul><ul><li>Defined over a prime ( F p ) or a binary field </li></ul><ul><li>EC over F p (E( F p )): the set of solutions ( x,y ) in F p to y 2 = x 3 + ax + b, </li></ul><ul><li>along with a special point denoted by О , called the point at infinity . </li></ul>
- 141. Example <ul><li> y 2 = x 3 - 4x + 3 , solutions ( x,y ) in F 23 </li></ul>
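The example's solution set can be enumerated directly. A small sketch over F 23 ; for instance (0, 7) is on the curve, since 7² = 49 ≡ 3 (mod 23):

```python
# Enumerate the affine points of y^2 = x^3 - 4x + 3 over F_23.
# The full group E(F_23) is these points plus the point at infinity O.
p = 23
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y) % p == (x ** 3 - 4 * x + 3) % p]
```

The points come in pairs (x, y), (x, -y), and by Hasse's theorem the group order lies within 2√p of p + 1.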
- 142. Generation of a key pair (private-public) <ul><li>Conventional Cryptosystems </li></ul><ul><li>based on F p </li></ul><ul><li>1. Choose at random a private key d ∈ {1, …, p -1} </li></ul><ul><li>2. Find a generator g of the field </li></ul><ul><li>3. Calculate the public key e = g d mod p </li></ul>Elliptic Curve Cryptosystems based on F p 1. Choose at random a private key d ∈ {1, …, m-1} 2. Find a random point G on the EC 3. Calculate the public key e = dG (point multiplication on the curve)
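Step 3 of the EC column (e = dG) is point multiplication; a double-and-add sketch for curves y² = x³ + ax + b over F p (toy code, with no side-channel protections):

```python
# None represents the point at infinity O.
def ec_add(P1, P2, a, p):
    """Add two points of y^2 = x^3 + a*x + b over F_p (b is implicit)."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                           # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(d, G, a, p):
    """Double-and-add: compute the public key e = dG from the private d."""
    R = None
    while d:
        if d & 1:
            R = ec_add(R, G, a, p)
        G = ec_add(G, G, a, p)
        d >>= 1
    return R
```

On the earlier example curve y² = x³ - 4x + 3 over F 23 (a = -4), the point G = (0, 7) gives 2G = (9, 12).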
- 143. EC Cryptosystems vs. Conventional Systems <ul><li> Same level of security: N ≈ M 1/3 (ln(M ln2)) 2/3 (an EC key of N bits offers security comparable to a conventional key of M bits) </li></ul>
- 144. Advantages of ECC <ul><li>More Efficient (smaller parameters) </li></ul><ul><li>Faster </li></ul><ul><li>Less Power and Computational Consumption </li></ul><ul><li>Cheaper Hardware (Less Silicon Area, Less Storage Memory) </li></ul>
- 145. Generation of secure ECs <ul><li>Cryptographic strength requires a suitable order m </li></ul><ul><li>Suitable order: </li></ul><ul><li>m = nq, where q is a prime > 2 160 </li></ul><ul><li>m ≠ p </li></ul><ul><li>p k ≢ 1 (mod m ) for all 1 ≤ k ≤ 20 </li></ul><ul><li>The above conditions guarantee resistance to all known attacks to solve the ECDLP </li></ul>
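The three order conditions above can be checked mechanically. A toy sketch; the trial-division factoring and the q_min parameter (so the 2^160 threshold can be lowered for tiny example values) are illustrative assumptions:

```python
def _is_prime(n):
    """Trial-division primality test; fine at toy sizes only."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def _largest_prime_factor(m):
    q, t, f = 1, m, 2
    while f * f <= t:
        while t % f == 0:
            q, t = f, t // f
        f += 1
    return t if t > 1 else q

def order_is_suitable(m, p, q_min=2**160, max_k=20):
    """m = n*q with q a prime > q_min; m != p; p^k != 1 (mod m) for k <= max_k."""
    q = _largest_prime_factor(m)
    if not (_is_prime(q) and q > q_min):
        return False
    if m == p:
        return False
    return all(pow(p, k, m) != 1 for k in range(1, max_k + 1))
```

The last condition rules out the MOV-style embedding attacks the slide alludes to.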
- 146. Generation of ECs <ul><li>The goal is to determine the defining parameters of an EC: </li></ul><ul><li>y 2 = x 3 +ax + b </li></ul><ul><li>The order p of the finite field F p . </li></ul><ul><li>The order m of the elliptic curve. </li></ul><ul><li>The coefficients a and b. </li></ul>
- 147. Generation of ECs - Known Methods <ul><li>Constructive Weil descent </li></ul><ul><li>Samples from a rather limited subset of ECs. </li></ul><ul><li>Point counting </li></ul><ul><li>Rather slow. </li></ul><ul><li>The Complex Multiplication method </li></ul><ul><li>Rather involved, but efficient for generating secure ECs. </li></ul>
- 148. The Complex Multiplication Method <ul><li>Input: an integer D </li></ul><ul><li>Choose a prime p = x 2 +Dy 2 and find integers (x,y) </li></ul><ul><li>Possible orders: m = p+1 ± 2x </li></ul><ul><li>Is one of them suitable? If NO, choose another prime p ; if YES, continue </li></ul><ul><li>Calculate the Hilbert polynomial H D (x) </li></ul><ul><li>Calculate the roots of the Hilbert polynomial </li></ul><ul><li>From every root generate a pair of ECs </li></ul><ul><li>Find the EC which has order m </li></ul>
- 149. Shortcomings of the CM method <ul><li>Time consuming construction of Hilbert polynomials as D increases – huge polynomial coefficients </li></ul><ul><li>Need for improvements, especially for hardware devices where memory and speed are limited resources </li></ul>
- 150. A practical approach <ul><li>A variant of the CM method </li></ul><ul><li>On-line computation (or precomputation) of Weber polynomials </li></ul><ul><li>Roots of these polynomials can be transformed into the roots of the corresponding Hilbert polynomials, but no Hilbert polynomial is actually constructed </li></ul><ul><li>But why use Weber polynomials? </li></ul>
- 151. Weber vs. Hilbert Polynomials <ul><li>The construction of both types of polynomials requires high-precision complex floating-point arithmetic. </li></ul><ul><li>Drawback of Hilbert polynomials: their coefficients grow rapidly with D , making construction time-consuming and hard to implement on limited-resource devices. </li></ul><ul><li>Weber polynomials, on the other hand, have much smaller coefficients. </li></ul>
- 152. An Example ( D = 292) <ul><li>W 292 ( x ) = x 4 - 5 x 3 - 10 x 2 - 5 x + 1 </li></ul><ul><li>H 292 ( x ) = x 4 - 2062877098042830460800 x 3 - </li></ul><ul><li>93693622511929038759497066112000000 x 2 + </li></ul><ul><li>45521551386379385369629968384000000000 x </li></ul><ul><li>380259461042512404779990642688000000000000 </li></ul>
- 153. Implementation <ul><li>Algorithms for the basic algebraic operations </li></ul><ul><li>Generation of secure ECs </li></ul><ul><li>EC Protocols </li></ul><ul><li>Implemented: </li></ul><ul><li>in ANSI C using the GNU Multiple Precision Library </li></ul>
- 154. Implementation Considerations <ul><li>Choice of prime fields: simplicity in number representation and in basic algebraic operations. </li></ul><ul><li>GNU MP had to be enhanced to include: </li></ul><ul><li>high-precision implementation of useful functions (factorization, primitive root location, etc.) </li></ul><ul><li>high-precision complex number arithmetic </li></ul><ul><li>high-precision floating-point implementations of various functions, e.g. cos(x), sin(x), exp(x), ln(x), arctan(x) </li></ul><ul><li>[Taylor series expansions, suitably truncated] </li></ul>
- 155. Architecture
- 156. Architecture
- 157. Attacks on ECC <ul><li>The security of ECC is based on the difficulty of solving the ECDLP (Elliptic Curve Discrete Logarithm Problem). </li></ul><ul><li>ECDLP: find m for which Q = mP , where Q,P are two known points on the EC. </li></ul><ul><li>An attack on ECC is an algorithm for solving the ECDLP; all known such algorithms require exponential time. </li></ul>
- 158. Signatures: from “syntax” to “semantics” <ul><li>A bit-sequence may be looked upon from two different aspects: </li></ul><ul><ul><li>Its pattern (i.e. its “syntax”): this is simply the sequence of 0s and 1s </li></ul></ul><ul><ul><li>Its content (i.e. its “semantics”): the string may represent some other object (e.g. a Boolean formula, a graph, or an automaton under a suitable encoding) </li></ul></ul><ul><li>We could use the knowledge of a property of the object represented by a bit-sequence in order to prove that we have created or own the sequence </li></ul><ul><li>If this knowledge is hard to come by or to deduce, then </li></ul><ul><li>Knowledge of the property of the object (bit-sequence) = Proof of identity </li></ul><ul><li>The tools are already here: Computational complexity & Threshold phenomena ! </li></ul>
- 159. The methodology <ul><li>Find a class of objects and identify some property of theirs such that </li></ul><ul><ul><li>It is hard to deduce or compute it if not known in advance </li></ul></ul><ul><ul><li>It is easy to construct an object having the property </li></ul></ul><ul><ul><li>TOOL: Combinatorial threshold phenomena </li></ul></ul><ul><li>Construct an “ownership proof” procedure with which you can prove knowledge of the property without divulging it </li></ul><ul><ul><li>TOOL: Zero Knowledge Interactive Proofs (ZKIPs) </li></ul></ul><ul><li>Use suitably produced objects encoded as bit-sequences as signatures! </li></ul>
- 160. The 3-coloring problem <ul><li>We are given an undirected graph </li></ul><ul><li>We are asked to color the vertices of the graph using at most 3 colors so that no two adjacent vertices are assigned the same color </li></ul> [figure: a 5-vertex graph, shown uncolored and with a valid 3-coloring]
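Note the asymmetry the scheme exploits: verifying a proposed 3-coloring takes time linear in the number of edges, while finding one is hard. A small checker in Python (the 5-cycle below is my own illustrative example, as the slide's figure did not survive extraction):

```python
# Checking a proposed 3-coloring is easy; *finding* one is NP-hard.
def is_valid_coloring(edges, coloring, colors=3):
    return (all(c in range(colors) for c in coloring.values()) and
            all(coloring[u] != coloring[v] for u, v in edges))

edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]   # a 5-cycle, 3-colorable
print(is_valid_coloring(edges, {1: 0, 2: 1, 3: 0, 4: 1, 5: 2}))  # True
print(is_valid_coloring(edges, {1: 0, 2: 0, 3: 1, 4: 2, 5: 1}))  # False: edge (1,2)
```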
- 161. The complexity of 3-coloring <ul><li>The founders of modern complexity theory: Cook (1971), Karp (1972), and Levin (1973) – Computational Complexity – SAT: the “ drosophila ” of complexity </li></ul><ul><li>3-Coloring, like SAT, is computationally intractable (technically, NP-complete ) – thousands of other problems share this property! </li></ul><ul><li>This means that if we are given a graph and asked to find a 3-coloring of its vertices, the number of steps required may be prohibitively large. Thus, 3-colorings of graphs are hard to find. </li></ul> IDEA: Use bit-sequences that represent graphs; proof of ownership is then equivalent to the ability to readily exhibit a 3-coloring of the graph
- 162. The “hard”-instance region for 3-coloring <ul><li>G: a graph with m edges and n vertices, with r the ratio m/n. </li></ul><ul><li>Cheeseman, Kanefsky, and Taylor [1991]: for values of r around 2.3, randomly generated graphs with rn edges were either almost all 3-colorable or almost none 3-colorable, depending on whether r < 2.3 or r > 2.3 respectively. </li></ul><ul><li>Thus, we have a transition from almost certain 3-colorability to almost certain non-3-colorability. </li></ul><ul><li>What is more, graphs with ratio r around the value r 0 = 2.3 were the most difficult to handle for the best algorithms! </li></ul><ul><li>This implies that one can use this regime to create graphs whose colorings are hard to find! </li></ul>
- 163. Threshold phenomena in other problems: 3-SAT Many combinatorial problems exhibit a threshold behavior: instances generated with their critical parameter (the clause/variable ratio in 3-SAT) around the value (about 4.2 in 3-SAT) that marks the transition from almost certain solubility (satisfiability in 3-SAT) to almost certain insolubility seem to be among the hardest to solve with the best algorithms available. PROBLEM: Proof of existence and calculation of the critical value
- 164. Producing random 3-colorable graphs <ul><li>Let p 1 , p 2 , and p 3 be real numbers such that p 1 + p 2 + p 3 = 1 and p 1 , p 2 , and p 3 > 0. </li></ul><ul><li>For each j = 1, …, n, vertex v j is assigned to color class C k with probability p k , k = 1, 2, 3. </li></ul><ul><li>For each pair u, v of vertices that do not belong to the same color class, introduce the undirected edge (u,v) with probability p. </li></ul><ul><li>The above algorithm is simple and very fast. It produces a random graph with a specified 3-coloring known only to the owner of the graph (i.e. the signature) </li></ul>
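The three steps above translate directly into code; a hedged Python sketch (the function name is mine):

```python
import random

# Sketch of the slide's generator: plant a random 3-coloring, then add edges
# only between different color classes, so the planted coloring remains
# valid by construction.
def planted_3col_graph(n, p_edge, class_probs=(1/3, 1/3, 1/3)):
    coloring = {v: random.choices((0, 1, 2), weights=class_probs)[0]
                for v in range(n)}
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if coloring[u] != coloring[v] and random.random() < p_edge]
    return edges, coloring

# Targeting the hard region via the next slide's formula r = p(p1p2+p1p3+p2p3)n:
# with equal classes, p = 3r/n for r around 2.3.
n, r = 60, 2.3
edges, coloring = planted_3col_graph(n, 3 * r / n)
print(len(edges), "edges; planted coloring valid:",
      all(coloring[u] != coloring[v] for u, v in edges))
```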
- 165. Targeting the “hard” instances region <ul><li>Set r = E[m]/n (expected number of edges/number of vertices) </li></ul><ul><li>This gives </li></ul><ul><li>r = p(p 1 p 2 + p 1 p 3 + p 2 p 3 )n </li></ul><ul><li>Set r ≈ 2.3 and p 1 = p 2 = p 3 = 1/3 (color classes of equal size give, in general, more difficult instances) </li></ul><ul><li>Then, solving for p, we obtain p = 3r/n ≈ 6.9/n </li></ul>
- 166. Zero Knowledge Interactive Proof Protocols (ZKIP) <ul><li>Introduced by Goldwasser et al . (1985) and Babai (1985) </li></ul><ul><li>Convince someone of a piece of (generally) hard to acquire knowledge without disclosing it! </li></ul><ul><li>A “graphical” description of a ZKIP for 3-coloring: </li></ul><ul><ul><li>Secretly permute, at random, the 3 colors </li></ul></ul><ul><ul><li>Spread the graph on the floor with vertices hidden </li></ul></ul><ul><ul><li>The other party chooses at random a pair of adjacent vertices </li></ul></ul><ul><ul><li>Expose their colors, showing that they are, indeed, different </li></ul></ul><ul><ul><li>The above procedure is repeated until the other party is convinced that we really know the 3-coloring </li></ul></ul>
- 167. The “gory” details … <ul><li>Setting: G = ( V , E ) where a P rover knows a 3-coloring of G and a V erifier needs a proof of this knowledge ( Goldreich et al . (1991)) </li></ul><ul><li>P does the following (“commitment”): </li></ul><ul><ul><li>Chooses a random permutation π of {1,2,3} </li></ul></ul><ul><ul><li>For each v in V , applies the color permutation π and expresses the result using two binary bits k v ,0 and k v ,1 </li></ul></ul><ul><ul><li>Chooses two random values r v ,0 , r v ,1 ≤ | V |/2 </li></ul></ul><ul><ul><li>Computes (“ << ” is the left-shift operator): </li></ul></ul><ul><ul><li>R v ,0 = RSA(( r v ,0 << 1) + k v, 0 ) and R v ,1 = RSA(( r v ,1 << 1) + k v ,1 ) </li></ul></ul><ul><ul><li>Sends to V {R v ,0 , R v ,1 for all v in V } </li></ul></ul>
- 168. <ul><li>Challenge by V : </li></ul><ul><ul><li>Selects an edge ( u , v ) at random and sends it to P </li></ul></ul><ul><li>Response by P : </li></ul><ul><ul><li>Sends the RSA decryption keys for u and v to V </li></ul></ul><ul><li>Checking by V : </li></ul><ul><ul><li>If the revealed colors are the same, V rejects. Otherwise, V accepts. </li></ul></ul> [figure: message flow; P sends the commitments R i ,0 , R i ,1 for every vertex i , V replies with a random edge ( u , v ), and P reveals the RSA keys for u and v ]
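One commit-challenge-response round can be simulated end to end. The sketch below substitutes SHA-256 commitments for the slides' RSA-based ones (my simplification, purely for brevity); the structure of the round is otherwise the same:

```python
import hashlib, os, random

# One round of the 3-coloring ZKIP, with hash commitments standing in for
# the slides' RSA commitments (an assumption made for this sketch).
def commit(value):
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([value])).digest(), nonce

def zkip_round(edges, coloring):
    perm = random.sample([0, 1, 2], 3)               # P permutes the colors
    permuted = {v: perm[c] for v, c in coloring.items()}
    commitments = {v: commit(c) for v, c in permuted.items()}
    u, v = random.choice(edges)                      # V's random challenge
    for w in (u, v):                                 # P opens the two endpoints
        digest, nonce = commitments[w]
        assert hashlib.sha256(nonce + bytes([permuted[w]])).digest() == digest
    return permuted[u] != permuted[v]                # V accepts iff distinct

edges = [(1, 2), (2, 3), (3, 1), (3, 4)]             # small 3-colorable graph
coloring = {1: 0, 2: 1, 3: 2, 4: 0}                  # a valid 3-coloring of it
print(all(zkip_round(edges, coloring) for _ in range(50)))  # honest P: True
```

Because P re-permutes the colors every round, V only ever learns that two revealed colors differ, never the coloring itself.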
- 169. Why does the ZKIP for 3-coloring work? <ul><li>If we did not really know a 3-coloring (i.e. we tried to impersonate the legal owner), then at each interrogation by the other party there is some fixed probability r that the chosen pair is not properly colored </li></ul><ul><li>The probability that over a sequence of n trials we manage to fool the other party is at most (1- r ) n , which tends to 0 exponentially since r is a constant with 0 < r < 1 </li></ul><ul><li>This means that we are doomed to get caught lying as the number of rounds grows! </li></ul>
- 170. <ul><li>Completeness: If G is indeed 3-colorable, P knows a 3-coloring, and both P and V follow the protocol, then V will be convinced that P knows a 3-coloring. </li></ul><ul><li>Soundness: If, however, P does not know a 3-coloring, then at least one edge (u,v) is colored illegally. V will pick such an edge with probability at least 1/|E| , and the overall probability of catching P can be brought arbitrarily close to 1 by repeating the protocol sufficiently many times </li></ul>More formally …
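The soundness bound gives a concrete way to size the protocol: repeat until (1 - 1/|E|) k drops below a target cheating probability ε. A small helper illustrating the arithmetic (the function name is mine, not from the slides):

```python
import math

# A cheating prover survives one round with probability at most 1 - 1/|E|.
# Number of rounds needed to push the cheating probability below eps:
def rounds_needed(num_edges, eps):
    return math.ceil(math.log(eps) / math.log(1 - 1 / num_edges))

print(rounds_needed(2, 0.25))          # (1/2)^2 = 0.25, so 2 rounds
print(rounds_needed(1000, 2 ** -40))   # 40-bit soundness for |E| = 1000
```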
- 171. Current research efforts <ul><li>How to produce graphs that, with high probability, have a small number of colorings, since solved 3-coloring instances (i.e. instances constructed to have a specific coloring) can have a very large number of additional colorings </li></ul><ul><li>Identify classes of hard 3-coloring instances </li></ul><ul><li>Give a partial effective characterization of hard instances – Instance Complexity, stemming from the work of Kolmogorov (1965), Solomonoff (1964), and Chaitin (1966), & Average Case complexity by Levin (1986) </li></ul><ul><li>Build an integrated smart card application that includes the ZKIP protocol for identity verification – do the same for the graph generation algorithm (i.e. the signature construction algorithm) </li></ul><ul><li>Arrive at a standard </li></ul>
