
INDEX

1     Introduction about RSA
1.1   Overview of encryption and decryption
1.2   Public-key Cryptography
1.3   Symmetric key cryptography
1.4   Digital signatures
1.5   One-way function
1.5.1 Significance
1.6   RSA Used for Authentication in Practice
1.7   Alternatives to RSA
2     System architecture
2.1   Modules
2.1.1 Main module
2.1.2 Module int ex_gcd (int a, int b, int n)
2.1.3 Module long en_de (int base, int exp, int n)
2.2   Basic block diagram
2.3   Output screen
3     Hardware and software requirements
3.1   Hardware
3.2   Hardware requirements
3.3   Software
3.4   Software requirements
4     Scope and future enhancement
4.1   Length of key in RSA
4.1.1 Factoring numbers

1. Introduction about RSA

RSA is a public-key cryptosystem for both encryption and authentication; Ron Rivest, Adi Shamir, and Leonard Adleman invented it in 1977. It works as follows: take two large primes, p and q, and find their product n = p*q; n is called the modulus. Choose a number e, less than n and relatively prime to (p-1)*(q-1), which means that e and (p-1)*(q-1) have no common factors except 1. Find another number d such that (ed - 1) is divisible by (p-1)*(q-1). The values e and d are called the public and private exponents, respectively. The public key is the pair (n, e); the private key is (n, d). The factors p and q may be kept with the private key, or destroyed.

It is difficult (presumably) to obtain the private key d from the public key (n, e). If one could factor n into p and q, however, then one could obtain the private key d. Thus the security of RSA is related to the assumption that factoring is difficult. An easy
factoring method or some other feasible attack would "break" RSA.

Here is how RSA can be used for privacy and authentication (in practice, the actual use is slightly different):

RSA privacy (encryption): Suppose Alice wants to send a message m to Bob. Alice creates the ciphertext c by exponentiating: c = m^e mod n, where e and n are Bob's public key. She sends c to Bob. To decrypt, Bob also exponentiates: m = c^d mod n; the relationship between e and d ensures that Bob correctly recovers m. Since only Bob knows d, only Bob can decrypt.

RSA authentication: Suppose Alice wants to send a message m to Bob in such a way that Bob is assured that the message is authentic and is from Alice. Alice creates a digital signature s by exponentiating: s = m^d mod n, where d and n are Alice's private key. She sends m and s to Bob. To verify the signature, Bob exponentiates and checks that the message m is recovered: m = s^e mod n, where e and n are Alice's public key.

Thus encryption and authentication take place without any sharing of private keys: each person uses only other people's public keys and his or her own private key. Anyone can send an encrypted message or verify a signed message, using only public keys, but only someone in possession of the correct private key can decrypt or sign a message.

1.1 Overview of encryption and decryption

Encryption is the process of turning a clear-text message (plaintext) into a data stream that looks like a meaningless and random sequence of bits (ciphertext). The process of turning ciphertext back into plaintext is called decryption. Cryptography deals with making communications secure. Cryptanalysis deals with breaking ciphertext, that is, recovering plaintext without knowing the key. Cryptology is a branch of mathematics which deals with both cryptography and cryptanalysis.
A cryptographic algorithm, also known as a cipher, is a mathematical function which takes plaintext as input and produces ciphertext as output, and vice versa. All modern ciphers use keys together with plaintext as the input to produce ciphertext. The same or a different key is supplied to the decryption function to recover plaintext from ciphertext. The details of a cryptographic algorithm are usually made public. The security of a modern cipher lies in the key, not in the details of the cipher.
Symmetric algorithms use the same key for encryption and decryption. These algorithms require that both the sender and receiver agree on a key before they can exchange messages securely. Some symmetric algorithms operate on 1 bit (or sometimes 1 byte) of plaintext at a time; they are called stream ciphers. Other algorithms operate on blocks of bits at a time; they are called block ciphers. Most modern block ciphers use a block size of 64 bits.

Public-key algorithms (also known as asymmetric algorithms) use two different keys (a key pair) for encryption and decryption. The keys in a key pair are mathematically related, but it is computationally infeasible to deduce one key from the other. These algorithms are called "public-key" because the encryption key can be made public. Anyone can use the public key to encrypt a message, but only the owner of the corresponding private key can decrypt it. Some public-key algorithms such as RSA allow the process to work in the opposite direction as well: a message can be encrypted with a private key and decrypted with the corresponding public key. If Alice (or anyone else) can decrypt a message with Bob's public key, she knows that the message must have come from Bob, because no one else has Bob's private key. Digital signatures work this way.

1.2 Public-key Cryptography

Over the centuries mankind has used cryptography to protect information in the process of its transmission and storage. By the end of the 19th century these methods became the object of mathematical research. The branch of mathematics dealing with information security is traditionally called cryptology and consists of cryptography, concerned with the creation of new algorithms and reasoning about their correctness, and cryptanalysis, the goal of which is the thorough study of existing methods, often aiming at the actual breaking of the adversary's secrets.
Cryptography and cryptanalysis are closely connected and linked to practical needs. They are developed simultaneously by governmental institutions of many countries and by the international scientific community. There exist thousands of cryptographic systems, implemented both in hardware and software. Some of them require that the cryptographic principles of their functioning be secret, as, for example, the Clipper microchip, proposed by the USA government as the telecommunication standard. Others are based on widely known algorithms and only
a certain (usually comparatively small) amount of information, called the (secret) key, must be kept secret. Most software-implemented systems intended for general use are of the latter type, and we shall consider only systems of this kind from now on.

The problem of breaking a system in question, i.e. obtaining or altering the protected information without knowledge of the key, is usually feasible provided that the breaking party has unlimited computational resources. From the mathematical point of view, the reliability of a cryptographic system is determined by the complexity of the breaking problem regarded with respect to the actual computational resources of a potential adversary. From the administrative point of view, the relation between the cost of potential breaking and the value of the protected information must be taken into account.

The mathematical study of the reliability of cryptographic systems is difficult because of the lack of a universal mathematical notion of complexity. For this reason the reliability of most cryptographic systems is at the moment impossible not only to prove, but even to formulate adequately. Application of a cryptographic system is based, as a rule, on the results of years of practical cryptanalysis of similar systems, confirmed to a certain extent by some mathematical justification. This justification might reduce the problem of breaking the system to a certain problem in number theory or combinatorics, the solution of which is considered impossible to obtain in practice.

1.3 Symmetric key cryptography

For a long time the symmetric, or dual-key, scheme was the traditional method of cryptography. This scheme makes use of one and the same key to encode and decode the information. The encoding procedure consists of a sequence of actions on the source data using the key; the decoding procedure uses the same key to perform the inverse actions on the encoded information.
Decoding the encoded information without the key is supposed to be practically infeasible. If the information encoded in this way is transmitted via an ordinary non-secure channel, both the sender and the receiver must have the same key, so it becomes necessary to employ an additional secure channel to transmit the key. The system becomes fragile and administrative complications arise.

The one-time pad method also belongs to the class of symmetric key algorithms. It consists of bitwise addition of the source information to a random stream of bits, the key. The length of the key must be equal to the length of the source information, and each part of the key should be used only once; otherwise the text might easily be subjected to unauthorized decoding. If the above requirements are satisfied, this algorithm is the only method which is theoretically impenetrable to cryptanalysis, even if the adversary has unlimited computational power. In spite of that, the one-time pad method is now
practically out of use because of the administrative complications related to the generation, distribution and storage of very long keys.

Another example of a symmetric key scheme is the DES (Data Encryption Standard) algorithm, adopted on November 23, 1976 as the USA official standard for the protection of unclassified information (see [S94], pp. 219-243). The standard included an obligation to re-certify the algorithm every five years. The last re-certification was carried out in 1992. The opinion of experts now is that the algorithm may fail to be re-certified for the next five-year period due to significant achievements in the cryptanalysis of DES and the emergence of some new symmetric key methods. Nevertheless, DES is still regarded as a cryptographically reliable algorithm and remains the most widespread symmetric key scheme.

The Russian standard for symmetric key cryptography is defined by GOST 28147-89, "Systems for information processing. Cryptographic protection. The algorithm of cryptographic transformation", which was adopted on July 1, 1990. As opposed to DES, the standard states that it does not restrict the degree of secrecy of the protected information. In general the GOST 28147 algorithm is similar to DES, but there are substantial differences, for example the length of the key and the interpretation of the contents of the substitution boxes. While the DES substitution boxes are optimized in terms of cryptographic reliability and explicitly defined by the standard, the contents of the GOST 28147 substitution boxes are a secret element to be supplied in the established order.
Taking into consideration that these contents are at the same time a long-standing key element common to a computer network, and that the established order of supply does not necessarily include cryptographic optimization, this part of the standard is one of its weak points, which impedes implementation and does not promote cryptographic reliability. However, provided that the substitution boxes are assigned optimized values, the cryptographic reliability of the algorithm is comparable with that of DES.

Traditional cryptography is based on the sender and receiver of a message knowing and using the same secret key: the sender uses the secret key to encrypt the message, and the receiver uses the same secret key to decrypt the message. This method is known as secret-key or symmetric cryptography. The main problem is getting the sender and receiver to agree on the secret key without anyone else finding out. If they are in separate physical locations, they must trust a courier, or a phone system, or some other transmission medium to prevent the disclosure of the secret key being communicated. Anyone who overhears or intercepts the key in transit can later read, modify, and forge all messages encrypted or authenticated using that key. The generation,
transmission and storage of keys is called key management; all cryptosystems must deal with key management issues. Because all keys in a secret-key cryptosystem must remain secret, secret-key cryptography often has difficulty providing secure key management, especially in open systems with a large number of users.

The concept of public-key cryptography was introduced in 1976 by Whitfield Diffie and Martin Hellman in order to solve the key management problem. In their concept, each person gets a pair of keys, one called the public key and the other called the private key. Each person's public key is published while the private key is kept secret. The need for the sender and receiver to share secret information is eliminated: all communications involve only public keys, and no private key is ever transmitted or shared. No longer is it necessary to trust some communications channel to be secure against eavesdropping or betrayal. The only requirement is that public keys are associated with their users in a trusted (authenticated) manner (for instance, in a trusted directory). Anyone can send a confidential message by just using public information, but the message can only be decrypted with a private key, which is in the sole possession of the intended recipient. Furthermore, public-key cryptography can be used not only for privacy (encryption), but also for authentication (digital signatures).

1.4 Digital signatures

To sign a message, Alice does a computation involving both her private key and the message itself; the output is called the digital signature and is attached to the message, which is then sent. Bob, to verify the signature, does some computation involving the message, the purported signature, and Alice's public key. If the result properly holds in a simple mathematical relation, the signature is verified as being genuine; otherwise, the signature may be fraudulent or the message might have been altered.
Advantages and Disadvantages of Public-Key Cryptography Compared with Secret-Key Cryptography

The primary advantage of public-key cryptography is increased security and convenience: private keys never need to be transmitted or revealed to anyone. In a secret-key system, by contrast, the secret keys must be transmitted (either manually or through a communication channel), and there may be a chance that an enemy can discover the secret keys during their transmission.
Another major advantage of public-key systems is that they can provide a method for digital signatures. Authentication via secret-key systems requires the sharing of some secret and sometimes requires trust of a third party as well. As a result, a sender can repudiate a previously authenticated message by claiming that the shared secret was somehow compromised by one of the parties sharing the secret. For example, the Kerberos secret-key authentication system involves a central database that keeps copies of the secret keys of all users; an attack on the database would allow widespread forgery. Public-key authentication, on the other hand, prevents this type of repudiation; each user has sole responsibility for protecting his or her private key. This property of public-key authentication is often called non-repudiation.

A disadvantage of using public-key cryptography for encryption is speed: there are popular secret-key encryption methods that are significantly faster than any currently available public-key encryption method. Nevertheless, public-key cryptography can be used with secret-key cryptography to get the best of both worlds. For encryption, the best solution is to combine public- and secret-key systems in order to get both the security advantages of public-key systems and the speed advantages of secret-key systems. The public-key system can be used to encrypt a secret key, which is used to encrypt the bulk of a file or message. Such a protocol is called a digital envelope in the case of RSA.

Public-key cryptography may be vulnerable to impersonation, however, even if users' private keys are not available. A successful attack on a certification authority will allow an adversary to impersonate whomever the adversary chooses by using a public-key certificate from the compromised authority to bind a key of the adversary's choice to the name of another user.
In some situations, public-key cryptography is not necessary and secret-key cryptography alone is sufficient. This includes environments where secure secret-key agreement can take place, for example by users meeting in private. It also includes environments where a single authority knows and manages all the keys, e.g., a closed banking system.
Since the authority knows everyone's keys already, there is not much advantage for some to be "public" and others "private." Also, public-key cryptography is usually not necessary in a single-user environment. For example, if you want to keep your personal files encrypted, you can do so with any secret-key encryption algorithm using, say, your personal password as the secret key. In general, public-key cryptography is best suited for an open multi-user environment.

Public-key cryptography is not meant to replace secret-key cryptography, but rather to supplement it, to make it more secure. The first use of public-key techniques was for secure key exchange in an otherwise secret-key system; this is still one of its primary functions. Secret-key cryptography remains extremely important and is the subject of much ongoing study and research. Some secret-key cryptosystems are discussed in the sections on block ciphers and stream ciphers.

Digital Signatures Helping Detect Altered Documents and Transmission Errors: A digital signature is superior to a handwritten signature in that it attests to the contents of a message as well as to the identity of the signer. As long as a secure hash function is used, there is no way to take someone's signature from one document and attach it to another, or to alter a signed message in any way. The slightest change in a signed document will cause the digital signature verification process to fail. Thus, public-key authentication allows people to check the integrity of signed documents. If a signature verification fails, however, it will generally be difficult to determine whether there was an attempted forgery or simply a transmission error.

1.5 One-way function
A one-way function is a mathematical function that is significantly easier to perform in one direction (the forward direction) than in the opposite direction (the inverse direction). It might be possible, for example, to compute the function in seconds, but computing its inverse could take months or years. A trap-door one-way function is a one-way function where the inverse direction is easy given a certain piece of information (the trap door), but difficult otherwise.

1.5.1 Significance of One-Way Functions for Cryptography

Public-key cryptosystems are based on (presumed) trap-door one-way functions. The public key gives information about the particular instance of the function; the private key gives information about the trap door. Whoever knows the trap door can perform the function easily in both directions, but anyone lacking the trap door can perform the function only in the forward direction. The forward direction is used for encryption and signature verification; the inverse direction is used for decryption and signature generation.

In almost all public-key systems, the size of the key corresponds to the size of the inputs to the one-way function; the larger the key, the greater the difference between the efforts necessary to compute the function in the forward and inverse directions (for someone lacking the trap door). For a digital signature to be secure for years, for example, it is necessary to use a trap-door one-way function with inputs large enough that someone without the trap door would need many years to compute the inverse function.

All practical public-key cryptosystems are based on functions that are believed to be one-way, but no function has been proven to be so. This means that it is theoretically possible that an algorithm will be discovered that can compute the inverse function easily without a trap door; this development would render any cryptosystem based on that one-way function insecure and useless.
On the other hand, further research in theoretical computer science may result in concrete lower bounds on the difficulty of inverting certain functions, and this would be a landmark event with significant positive ramifications for cryptography.

1.6 RSA Used for Authentication in Practice
RSA Digital Signatures

RSA is usually combined with a hash function to sign a message. Suppose Alice wishes to send a signed message to Bob. She applies a hash function to the message to create a message digest, which serves as a "digital fingerprint" of the message. She then encrypts the message digest with her RSA private key; this is the digital signature, which she sends to Bob along with the message itself. Bob, upon receiving the message and signature, decrypts the signature with Alice's public key to recover the message digest. He then hashes the message with the same hash function Alice used and compares the result to the message digest decrypted from the signature. If they are exactly equal, the signature has been successfully verified and he can be confident that the message did indeed come from Alice. If they are not equal, then the message either originated elsewhere or was altered after it was signed, and he rejects the message.

With the method just described, anybody can read the message and verify the signature. This may not be applicable to situations where Alice wishes to retain the secrecy of the document. In this case she may wish to sign the document and then encrypt it using Bob's public key. Bob will then need to decrypt using his private key and verify the signature on the recovered message using Alice's public key. A third party can also verify the signature at this stage.

In practice, the RSA public exponent is usually much smaller than the RSA private exponent; this means that the verification of a signature is faster than the signing. This is desirable because a message will be signed by an individual only once, but the signature may be verified many times. It must be infeasible for anyone to either find a message that hashes to a given value or to find two messages that hash to the same value. If either were feasible, an intruder could attach a false message onto Alice's signature.
Hash functions such as MD5 and SHA have been designed specifically to have the property that finding a match is infeasible, and are therefore considered suitable for use in cryptography. One or more certificates may accompany a digital signature. A certificate is a signed document that binds the public key to the identity of a party. Its purpose is to prevent someone from impersonating someone else. If a certificate is present, the recipient (or a
third party) can check that the public key belongs to a named party, assuming the certifier's public key is itself trusted.

1.7 Alternatives to RSA

Many other public-key cryptosystems have been proposed, as a look through the proceedings of the annual Crypto, Eurocrypt, and Asiacrypt conferences quickly reveals. A mathematical problem called the knapsack problem was the basis for several systems, but these have lost favor because several versions were broken. Another system, designed by ElGamal, is based on the discrete logarithm problem. The ElGamal system was, in part, the basis for several later signature methods, including one by Schnorr, which in turn was the basis for DSS, the Digital Signature Standard. The ElGamal system has been used successfully in applications; it is slower for encryption and verification than RSA and its signatures are larger than RSA signatures.

In 1976, before RSA, Diffie and Hellman proposed a system for key exchange only; it permits secure exchange of keys in an otherwise conventional secret-key system. This system is in use today.

Cryptosystems based on mathematical operations on elliptic curves have also been proposed, as have cryptosystems based on discrete exponentiation in the finite field GF(2^n). The latter are very fast in hardware. There are also some probabilistic encryption methods, which have the attraction of being resistant to a guessed-ciphertext attack, but with possible data expansion. For digital signatures, Rabin proposed a system which is provably equivalent to factoring; this is an advantage over RSA, where one may still have a lingering worry about an attack unrelated to factoring. Rabin's method is susceptible to a chosen message attack, however, in which the attacker tricks the user into signing messages of a special form. Another signature scheme, by Fiat and Shamir, is based on interactive zero-knowledge protocols, but can be adapted for signatures.
It is faster than RSA and is provably equivalent to factoring, but the signatures are much larger than RSA signatures. Other variations, however, lessen the necessary signature length; see for references. A system is "equivalent to factoring" if recovering the private key is provably as hard as factoring; forgery may be easier than factoring in some of the systems.
Advantages of RSA over other public-key cryptosystems include the fact that it can be used for both encryption and authentication, and that it has been around for many years and has successfully withstood much scrutiny.

2. System architecture

2.1 Modules

2.1.1 Main module

Execution of a C++ program always starts from the main() function, and there is always exactly one main() in every C++ program. It may or may not return a value.

[Flowchart (reconstructed from the original figure): accept the values of p and q; compute n = p*q and et = (p-1)*(q-1); for i = 2 to et, accept a public exponent e with ex_gcd(et, e, 1) == 1; compute temp = ex_gcd(et, e, 2) and d = et + temp; display "Public key = (e, n)" and "Private key = (d, n)"; accept the plaintext pt; compute the ciphertext ct = en_de(pt, e, n); display the ciphertext and the plaintext recovered by en_de(ct, d, n).]

2.1.2 Module int ex_gcd (int a, int b, int n)

This module is used to find the greatest common divisor of two numbers. It computes the GCD using the extended Euclid's method. The Euclidean algorithm is based on the following theorem: for any nonnegative integer a and any positive integer b, gcd(a, b) = gcd(b, a mod b).

[Flowchart (reconstructed): initialize x = 0, y = 1, lastx = 1, lasty = 0; while b != 0, set q = a/b, replace (a, b) by (b, a mod b), and update x = lastx - q*x and y = lasty - q*y, saving the old values in lastx and lasty; when the loop ends, return a (the gcd) if the flag n == 1, otherwise return lasty.]
2.1.3 Module long en_de (int base, int exp, int n)

This module is used to calculate (a^b mod n), i.e. modular exponentiation.

[Flowchart (reconstructed): extract the binary digits b[i] of exp; initialize the accumulator d; scan the bits from the most significant downward, squaring modulo n at each step (d = (d*d) % n) and, whenever b[i] == 1, multiplying in the base (d = (d*base) % n); return d.]
2.2 Basic block diagram

[Block diagram: a key generator produces a public key and a private key; the message is fed to Encrypt together with the public key, producing the encrypted message; Decrypt takes the encrypted message and the private key and recovers the message.]
2.3 Output screen

[Output screenshots omitted.]

3. Hardware and software requirements
3.1 Hardware

Hardware consists of the physical parts of the computer, which can be seen and counted. Microcomputer hardware consists of the system unit, I/O devices, secondary storage, and communications devices. The physical equipment falls into this category.

System unit: The system unit, also known as the system cabinet or chassis, is a container that houses most of the electronic components that make up a computer system. Two important components of the system unit are the microprocessor and memory. The microprocessor controls and manipulates data to produce information. Memory, also known as primary storage or RAM, holds data and program instructions for processing the data.

Input/output devices: Input devices translate data and programs that humans can understand into a form that the computer can process. Output devices translate the processed information from the computer into a form that humans can understand.

Secondary storage devices: Unlike memory, secondary storage devices hold data and programs even after electrical power to the computer has been turned off.

3.2 Hardware requirements

• RAM
• Hard disk drive
• Keyboard
• Mouse
3.3 Software

Software is a collection of programs, and programs are collections of instructions. Software is of two kinds: system software and application software.

System software: The user interacts primarily with the application software. System software enables the application software to interact with the computer hardware. System software is "background" software that helps the computer manage its own internal resources. The most important system software program is the operating system, which interacts with the application software and the computer. Examples: operating system, compiler, interpreter, assembler.

Application software: Application software might be described as "end-user" software. These programs are designed to address general-purpose and special-purpose applications. Examples: Microsoft Office, Java, C, C++, etc.

3.4 Software requirements

• Operating system: Windows 98 or above
• Turbo C++
4. Scope and future enhancement

4.1 Length of key in RSA

The best size for an RSA modulus depends on one's security needs. The larger the modulus, the greater the security, but also the slower the RSA operations. One should choose a modulus length upon consideration, first, of one's security needs, such as the value of the protected data and how long it needs to be protected, and, second, of how powerful one's potential enemies are.

Odlyzko's paper considers the security of RSA key sizes based on factoring techniques available in 1995 and the ability to tap large computational resources via computer networks. A specific assessment of the security of 512-bit RSA keys shows that one could be factored for less than $1,000,000 in cost and eight months of effort in 1997. It is believed that 512-bit keys no longer provide sufficient security with the advent of new factoring algorithms and distributed computing; such keys should not be used after 1997 or 1998. Recommended key sizes are now 768 bits for personal use, 1024 bits for corporate use, and 2048 bits for extremely valuable keys like the key pair of a certifying authority. A 768-bit key is expected to be secure until at least the year 2004.

The key of an individual user may expire after a certain time, say, two years. This gives an opportunity to change keys regularly and thus maintain a given level of security. Upon expiration, the user should generate a new key which is at least a few digits longer than the old key to reflect the speed increases of computers and factoring algorithms over the two years. Recommended key length schedules are published by RSA Laboratories on a regular basis.

Users should keep in mind that the estimated times to break RSA are averages only. A large factoring effort, attacking many thousands of RSA moduli, may succeed in
factoring at least one of them in a reasonable time. Although the security of any individual key is still strong, with some factoring methods there is always a small chance that the attacker may get lucky and factor some key quickly.

As for the slowdown caused by increasing the key size, doubling the modulus length will, on average, increase the time required for public-key operations (encryption and signature verification) by a factor of four, and the time taken by private-key operations (decryption and signing) by a factor of eight. (This assumes typical methods for RSA implementation, not "fast multiplication.") Public-key operations are affected less than private-key operations because the public exponent can remain fixed when the modulus is increased, whereas the private exponent grows proportionally with the modulus. Key generation time would increase by a factor of 16 upon doubling the modulus, but this is a relatively infrequent operation for most users. (The impact of key size increases other than doubling can be calculated similarly.)

4.1.1 Factoring numbers

The difficulty depends on the size of the numbers and on their form. Numbers in special forms, such as a^n - b for small b, are more readily factored by specialized techniques, so their difficulty is not necessarily related to the difficulty of factoring in general. Hence a factoring "breakthrough" for a special number form may have no practical relevance to particular instances (and the moduli generated for use in cryptographic systems are specifically filtered to resist such approaches).

The most important observation about factoring is that all known algorithms require an amount of time exponential in the size of the number (measured in bits, log2(n) where n is the number). Cryptographic algorithms built on the difficulty of factoring generally depend on this exponential-time property. (The distinction between exponential- and polynomial-time algorithms, or NP vs.
P, is a major area of active computational research, with insights very closely intertwined with cryptographic security.)
In October 1992, Arjen Lenstra and Dan Bernstein factored 2^523 - 1 into primes, using about three weeks of MasPar time. (The MasPar is a 16384-processor SIMD machine; each processor can add about 200000 integers per second.) The algorithm used is called the "number field sieve"; it is quite a bit faster for special numbers like 2^523 - 1 than for general numbers n, but in any case it takes time only exp(O((log n)^{1/3} (log log n)^{2/3})).

An older method, more popular for smaller numbers, is the "multiple polynomial quadratic sieve", which takes time exp(O((log n)^{1/2} (log log n)^{1/2})); this is faster than the number field sieve for small n, but slower for large n. The break-even point is somewhere between 100 and 150 digits, depending on the implementations.

Factorization is a fast-moving field; the state of the art just a few years ago was nowhere near as good as it is now. If no new methods are developed, then 2048-bit RSA keys will always be safe from factorization, but one cannot predict the future. (Before the number field sieve was found, many people conjectured that the quadratic sieve was asymptotically as fast as any factoring method could be.)

4.1.2 Length of Primes

The two primes p and q that compose the modulus should be of roughly equal length; this makes the modulus harder to factor than if one of the primes were very small. Thus, for a 768-bit modulus, each prime should be approximately 384 bits long. If the two primes are extremely close (identical except for, say, the last 100-200 bits), there is a potential security risk, but the probability that two randomly chosen primes are so close is negligible.
4.1.3 Distinct Primes for RSA

There are enough prime numbers that RSA users will never run out of them. The Prime Number Theorem states that the number of primes less than or equal to n is asymptotically n/log n. This means that the number of primes of length 512 bits or less is about 10^150, which is greater than the number of atoms in the known universe.

4.1.4 Checking for a Prime Number

It is generally recommended to use probabilistic primality testing, which is much quicker than actually proving that a number is prime. One can use a probabilistic test that determines whether a number is prime with an arbitrarily small probability of error, say, less than 2^-100.

4.2 Speed of RSA

An "RSA operation," whether for encrypting or decrypting, signing or verifying, is essentially a modular exponentiation, which can be performed by a series of modular multiplications. In practical applications, it is common to choose a small public exponent; in fact, entire groups of users can share the same public exponent, each with a different modulus. (There are some restrictions on the prime factors of the modulus when the public exponent is fixed.) This makes encryption faster than decryption and verification faster than signing. With typical modular exponentiation algorithms, public-key operations take O(k^2) steps, private-key operations take O(k^3) steps, and key generation takes O(k^4) steps, where k is the number of bits in the modulus. (O-notation refers to an upper bound on the asymptotic running time of an algorithm.) "Fast multiplication" techniques, such as
FFT-based methods, require asymptotically fewer steps, though in practice they are less common because of their great software complexity and because they may actually be slower for typical key sizes.

There are many commercially available software and hardware implementations of RSA, and there are frequent announcements of newer and faster chips. On a 90 MHz Pentium, RSA Data Security's cryptographic toolkit BSAFE 3.0 has a throughput for private-key operations of 21.6 Kbits per second with a 512-bit modulus and 7.4 Kbits per second with a 1024-bit modulus. The fastest RSA hardware has a throughput greater than 300 Kbits per second with a 512-bit modulus, implying that it performs over 500 RSA private-key operations per second. (There is room in that hardware to execute two 512-bit RSA operations in parallel, hence the reported 600 Kbits/s speed; for 970-bit keys, the throughput is 185 Kbits/s.) It is expected that RSA speeds will reach 1 Mbit/second within a year or so.

By comparison, DES is much faster than RSA. In software, DES is generally at least 100 times as fast as RSA. In hardware, DES is between 1,000 and 10,000 times as fast, depending on the implementation. Implementations of RSA will probably narrow the gap a bit in coming years as commercial markets grow, but DES will get faster as well.

4.3 Security of RSA

Nobody knows. An obvious attack on RSA is to factor pq into p and q (see Section 4.1.1 for how fast state-of-the-art factorization algorithms run). Unfortunately, nobody has the slightest idea how to prove that factorization, or any realistic problem at all for that matter, is inherently slow. It is easy to formalize what we mean by "RSA is/isn't strong"; but, as Hendrik W.
Lenstra, Jr., says, "Exact definitions appear to be necessary only when one wishes to prove that algorithms with certain properties do not exist, and theoretical computer science is notoriously lacking in such negative results."

Note that there may even be a "shortcut" to breaking RSA other than factoring: being able to factor is obviously sufficient to break RSA, but it has so far not been proved necessary. That is, the security of the system depends on two critical assumptions: (1) factoring is required to break the system, and (2) factoring is inherently computationally intractable; alternatively, "factoring is hard" and "any approach that can be used to break the system is at least as hard as factoring".
Historically, even professional cryptographers have made mistakes in estimating and depending on the intractability of various computational problems for secure cryptographic properties. For example, knapsack ciphers were in vogue in the literature for years, until it was demonstrated that the instances typically generated could be efficiently broken, and the whole area of research fell out of favor.

4.3.1 Strong Primes in RSA

In the literature pertaining to RSA, it has often been suggested that in choosing a key pair, one should use so-called "strong" primes p and q to generate the modulus n. Strong primes have certain properties that make the product n hard to factor by specific factoring methods; such properties have included, for example, the existence of a large prime factor of p-1 and a large prime factor of p+1. The reason for these concerns is that some factoring methods are especially suited to primes p such that p-1 or p+1 has only small factors; strong primes are resistant to these attacks.

However, advances in factoring over the last ten years appear to have obviated the advantage of strong primes; the elliptic curve factoring algorithm is one such advance. The new factoring methods have as good a chance of success on strong primes as on "weak" primes. Therefore, choosing traditional "strong" primes alone does not significantly increase security; choosing large enough primes is what matters. There is, however, no danger in using strong, large primes, though it may take slightly longer to generate a strong prime than an arbitrary one. It is possible that new factoring algorithms may be developed which once again target primes with certain properties; if so, choosing strong primes may once again help to increase security.

5. Advantages and limitations

5.1 Advantages

ClassicSys as a standard
Besides ciphering at high speed, two more advantages make ClassicSys a prime candidate for THE standard application in cryptography:

1. ClassicSys uses only one secret key to meet ALL the cryptographic needs of an end-user, such as:
 to authenticate himself
 to authenticate messages with a time reference
 to generate all the Session Keys he needs for email (as one possible application)
 to generate several keys for other applications: banking, electronic commerce, electronic voting, casino games at home, etc.

2. ClassicSys is designed in such a way that there is no valid reason to forbid its use in any country in the world. ClassicSys gives all the required guarantees to its users and their governments: secret keys need never be divulged, yet Security Services can always decipher suspect messages.
5.1.1 Advantages and benefits for the end-user

ClassicSys offers more than the known advantages of encryption solutions:

1) Very high speed of encryption (see below).
2) The chip contains the SED algorithm and all the other features of ClassicSys. One system covers all cryptographic needs, for all applications.
3) New applications can be added without updating the chip. ClassicSys is fully automated: requests to the TA are answered directly, without human intervention.
4) Private Keys are completely unknown to everybody, even the Trust Authority's manager. All keys are written into chips and are not accessible to humans or other machines. This guarantees the privacy of all end-users.
5) Once an end-user has received the information to generate his Application Keys, he does not need the intervention of the TA anymore.
In email, for example, users do not need the TA to exchange messages between themselves.
6) ClassicSys acts like a public-key cryptosystem: every end-user has one public ID number, which is used in a way similar to a public key. In email, for example, when somebody wants to communicate with another end-user, he sends the TA his own ID number and that of his correspondent. In return he receives information from the TA to generate their Session Keys.

5.1.2 Advantages and benefits for the Authority

ClassicSys enables the TA and the National Security Service (NSS) to act completely separately, under different authorities, as required by our democracies. Requests from the NSS are recorded encrypted by the TA (the TA does not learn the ID of Alice or Bob in a suspect message). This guarantees the confidentiality of the NSS's investigation; at the same time, the record provides an audit trail for any competent investigating authority. Optimally, the TA and the NSS should operate under different authorities, but every country can implement this as it sees fit.

ClassicSys enables the NSS to decrypt the content of suspect incoming and outgoing international messages without requiring users to deposit their private secret keys in the corresponding countries (as with RSA). Only the NSS is able to request from the TA the information necessary to investigate suspect messages. Each country remains independent regarding the deciphering of incoming and outgoing messages: each message contains the information necessary for it to be deciphered by the two National Security Services. Each Trust Authority has its own Private Key; consequently, it can only compute Private Keys for domestic users.
5.1.3 Technical advantages and benefits

ClassicSys is easy to implement in integrated circuits because:
 it uses only XOR and branching functions
 no carry bits are needed in the arithmetic
 programming can be done with a polynomial structure
 the key and data blocks have identical lengths of 128 bits (16 bytes)

Security of ClassicSys is enhanced compared to other systems because:
 deciphering is not the reverse of ciphering
 the ciphering and deciphering keys are different
 all the Private Keys (end-users, TAs, NSSs) are embedded in an IC and therefore not accessible

There is no known way to reconstruct the secret key by cryptanalysis from a cleartext and its corresponding encrypted message. Differential cryptanalysis is not applicable to the SED algorithm. On average, there is only one key corresponding to a cleartext and its associated ciphertext, and therefore each bit of the key has equal weight in the algorithm.

Only one 128-bit secret key is enough to meet all the cryptographic needs of an end-user, such as:
 to generate all the Session Keys he needs
 to authenticate himself
 to authenticate messages with a time reference
 to generate several keys for other applications (banking, electronic commerce, electronic voting, casino games at home, ...)

Unlike the RSA algorithm, where every key requires a determined space, the SED algorithm can use every block contained in the 2^128 key space. The SED algorithm is very fast for the following reasons:
 the length of the blocks (key and data) is small (128 bits, against more than 512 bits for RSA) but long enough to defeat any exhaustive cryptanalysis
 on average, it is possible to compute at 1/3 of the clock frequency (8 to 10 Mbytes/sec)

The SED algorithm is completely transparent. Due to the theory of multiplicative groups, we can confirm that there is no trojan horse in the SED algorithm. The SED algorithm permits chained-mode ciphering, allowing the authentication information to be reduced to one 128-bit block, whatever the length of the data to authenticate.

5.2 Limitations

 Brute-force attack: this involves trying all possible private keys.
 Mathematical attacks: there are several approaches, all equivalent in effort to factoring the product of two primes. The larger the key, the slower the system will run.
 The factoring problem: we can identify three approaches to attacking RSA mathematically:
    Factor n into its two prime factors; this enables calculation of phi(n) = (p-1)*(q-1), which in turn enables determination of d = e^(-1) mod phi(n).
    Determine phi(n) directly, without first determining p and q; this likewise enables determination of d.
    Determine d directly, without first determining phi(n).
 Chosen-ciphertext attacks: this type of attack exploits properties of the RSA algorithm.
 Timing attacks: if one needed yet another lesson about how difficult it is to assess the security of a cryptographic algorithm, the appearance of timing attacks provides a stunning one. A timing attack is somewhat analogous to a burglar guessing the combination of a safe by observing how long it takes the owner to turn the dial from number to number.

6. Conclusion

Cryptanalysis of RSA by factoring the product of the two primes consumes a very long time. RSA is among the most popular and most widely used algorithms owing to its stability and reliability, and it is one of the standard algorithms used in most encryption and decryption tasks. We therefore conclude that RSA is one of the best and most reliable algorithms in use.

The uses of RSA are practically ubiquitous today. It is currently found in a wide variety of products, platforms, and industries around the world; it appears in many commercial software products and is planned to be in many more. It is built into current operating systems by Microsoft, Apple, Sun, and Novell. In hardware, RSA can be found in secure telephones, on Ethernet network cards, and on smart
cards. In addition, RSA is incorporated into all of the major protocols for secure Internet communications, including SSL, S-HTTP, SEPP, S/MIME, S/WAN, STT and PCT. It is also used internally in many institutions, including branches of the U.S. government, major corporations, national laboratories, and universities.

6.1 RSA as an Official Standard

Today RSA is part of many official standards worldwide. The ISO (International Standards Organization) 9796 standard lists RSA as a compatible cryptographic algorithm, as does the ITU-T X.509 security standard. RSA is part of the Society for Worldwide Interbank Financial Telecommunication (SWIFT) standard, the French financial industry's ETEBAC 5 standard, and the ANSI X9.31 draft standard for the U.S. banking industry. The Australian key management standard, AS2805.6.5.3, also specifies RSA.

RSA is found in Internet standards and proposed protocols including PEM (Privacy Enhanced Mail), S/MIME, PEM-MIME, S-HTTP and SSL, as well as the PKCS standard for the software industry. The OSI Implementors' Workshop (OIW) has issued implementors' agreements referring to PKCS and PEM, each of which includes RSA. A number of other standards are currently being developed and will be announced over the next couple of years; many are expected to include RSA as either an endorsed or a recommended system for privacy and/or authentication.

6.2 RSA as a De Facto Standard

RSA is the most widely used public-key cryptosystem today and has often been called a de facto standard. Regardless of the official standards, the existence of a de facto standard is extremely important for the development of a digital economy. If one public-key system is used everywhere for authentication, then signed digital documents can be exchanged between users in different nations using different software on different
platforms; this interoperability is necessary for a true digital economy to develop. Adoption of RSA has grown to the extent that standards are being written to accommodate it. When the U.S. financial industry was developing standards for digital signatures, it first developed ANSI X9.30 to support the federal requirement of using the Digital Signature Standard. It then modified X9.30 into X9.31, with the emphasis on RSA digital signatures, to support the de facto standard of financial institutions.

The lack of secure authentication has been a major obstacle to achieving the promise that computers would replace paper; paper is still necessary almost everywhere for contracts, checks, official letters, legal documents, and identification. With this core of necessary paper transactions, it has not been feasible to evolve completely into a society based on electronic transactions. Digital signatures are exactly the tool needed to convert the most essential paper-based documents to digital electronic media. Digital signatures make it possible, for example, to have leases, wills, passports, college transcripts, checks, and voter registration forms that exist only in electronic form; any paper version would just be a "copy" of the electronic original. All of this is enabled by an accepted standard for digital signatures.

6.3 RSA Patented
RSA is patented under U.S. Patent 4,405,829, issued September 20, 1983 and held by RSA Data Security, Inc. of Redwood City, California; the patent expires 17 years after issue, in 2000. RSA Data Security has a standard, royalty-based licensing policy, which can be modified for special circumstances. The U.S. government can use RSA without a license because it was invented at MIT with partial government funding. In the U.S., a license is needed to "make, use or sell" RSA. However, RSA Data Security usually allows free non-commercial use of RSA, with written permission, for academic or university research purposes. Furthermore, RSA Laboratories has made available (in the U.S. and Canada), at no charge, a collection of cryptographic routines in source code, including the RSA algorithm; it can be used, improved and redistributed non-commercially.

6.4 RSA Exported from the United States

Export of RSA falls under the same U.S. laws as all other cryptographic products. RSA used for authentication is more easily exported than RSA used for privacy. In the former case, export is allowed regardless of key (modulus) size, although the exporter must demonstrate that the product cannot be easily converted to use for encryption. In the case of RSA used for privacy (encryption), the U.S. government generally does not allow export if the key size exceeds 512 bits. Export policy is currently a subject of debate, and the export status of RSA may well change in the next year or two. For example, a Commerce Jurisdiction (essentially a general export license through the Department of Commerce rather than Department of State approval) has been obtained by CyberCash for 768-bit RSA for financial transactions. Regardless of U.S. export policy, RSA is available abroad in non-U.S. products.
6.5 Comparison between DES, RSA and SED

The table below compares the important features of the DES, RSA and SED algorithms, as used within global cryptographic systems:

Feature                              | RSA                   | DES       | SED
-------------------------------------|-----------------------|-----------|----------
Speed                                | low                   | high      | high
Deposit of keys required             | yes                   | yes       | no
Data block length                    | minimum 512 bits      | 64 bits   | 128 bits
Key length                           | minimum 512 bits      | 56 bits   | 128 bits
Use of data space                    | variable, limited     | full      | full
Ciphering and deciphering keys       | different             | same      | different
Ciphering and deciphering algorithm  | same                  | different | different