An Unobservable Secure On-Demand Routing With D-Worm Detection In MANET (IJRES Journal)
A mobile ad hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices. In wireless communication, protecting the privacy of mobile ad hoc networks is more demanding than in wired networks because of their open nature and mobility. Many schemes have been proposed to protect privacy in ad hoc networks, but in these schemes data packets and control packets are not completely unlinkable and unobservable, so malicious users can distinguish them. The unobservable secure on-demand routing (USOR) protocol provides complete unlinkability and content unobservability for all types of packets using group signatures and ID-based encryption. However, USOR does not detect worms in packet content, particularly disguised worms. The proposed scheme detects disguised worms using a spectrum-based approach built on power spectral density and the spectral flatness measure; this method detects not only disguised worms but traditional worms as well.
This document presents research on compressing encrypted data. The researchers investigate reversing the traditional order of compressing data before encrypting it. They show that by using principles of coding with side information, it is possible to first encrypt data and then compress it without loss of optimal compression efficiency or security. They prove the theoretical feasibility of this approach and describe a system to implement compression of encrypted data. Computer simulations demonstrate the performance of the proposed system. The researchers identify connections to distributed source coding theory and demonstrate that in some scenarios, reversing the order of encryption and compression does not compromise effectiveness or security.
ENHANCED SECURE ALGORITHM FOR MESSAGE COMMUNICATION (IJNSA Journal)
This paper puts forward a secure mechanism of data transmission to tackle the security of information transmitted over the Internet. Encryption standards such as DES (Data Encryption Standard), AES (Advanced Encryption Standard), and EES (Escrowed Encryption Standard) are widely used to solve the problem of communication over an insecure channel. With advances in computer hardware and software, these standards no longer seem as secure and fast as one would like. In this paper we propose an encryption technique that secures both the message and the secret key, achieving confidentiality and authentication. The symmetric algorithm used has two advantages over traditional schemes. First, the encryption and decryption procedures are much simpler and, consequently, much faster. Second, the security level is higher owing to the inherent poly-alphabetic nature of the substitution mapping method used here, together with the translation and transposition operations performed in the algorithm. The asymmetric algorithm RSA is known worldwide for its high security. This paper presents a detailed report of the process and an analysis comparing the proposed technique with familiar techniques.
This document provides an overview of information security and cryptography. It discusses objectives of security like avoiding data threats. It also covers topics like password auditing, data security, authentication, encryption, decryption, public and private key cryptography, digital signatures, and the RSA algorithm. It demonstrates an example of encrypting a message using RSA and decrypting the cipher text. The conclusion emphasizes the importance of information security.
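The RSA encrypt/decrypt walkthrough mentioned above can be reproduced with a toy example. The primes below are the classic textbook values and far too small for real security; the modular-inverse form of `pow` needs Python 3.8+.

```python
# Toy RSA key generation, encryption, and decryption.
# The primes are illustrative textbook values -- never use sizes like this.

p, q = 61, 53              # two small primes
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: 2753

m = 65                     # message encoded as an integer, m < n
c = pow(m, e, n)           # encrypt: c = m^e mod n = 2790
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers the message
```

Any integer message smaller than the modulus round-trips this way; real systems add padding (e.g. OAEP) before exponentiation.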
This document provides an overview of cryptography. It defines cryptography as the science of securing messages from attacks. It discusses basic cryptography terms like plain text, cipher text, encryption, decryption, and keys. It describes symmetric key cryptography, where the same key is used for encryption and decryption, and asymmetric key cryptography, which uses different public and private keys. It also covers traditional cipher techniques like substitution and transposition ciphers. The document concludes by listing some applications of cryptography like e-commerce, secure data, and access control.
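The substitution and transposition families referred to above can each be sketched in a few lines. Both are classical techniques and trivially breakable; the snippet is only meant to make the distinction concrete.

```python
# A Caesar substitution cipher replaces each letter with a shifted letter;
# a columnar transposition rearranges letter positions without changing them.

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)          # leave punctuation and spaces alone
    return ''.join(out)

def transpose(text, cols):
    # Write the text row by row into `cols` columns, then read column by column.
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    return ''.join(''.join(row[c] for row in rows if c < len(row))
                   for c in range(cols))

print(caesar("ATTACKATDAWN", 3))    # -> "DWWDFNDWGDZQ"
print(transpose("ATTACKATDAWN", 4)) # -> "ACDTKATAWATN"
```

Decryption simply reverses each step: shift by the negative amount, or read the columns back into rows.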
Different Attacks on Selective Encryption in RSA based Singular Cubic Curve w... (IDES Editor)
In this paper, the security of Selective Encryption in RSA-based Singular Cubic Curve with Automatic Variable Key (AVK) is analysed against some well-known attacks. It is shown that this cryptosystem is more secure than the Koyama scheme from which the algorithm is derived. The proposed cryptographic algorithm makes justified use of the Koyama scheme, which is not semantically secure; the proposed scheme is an efficient and semantically secure public-key cryptosystem based on a singular cubic curve with AVK. Further, partially known plaintext attacks, linearly related plaintext attacks, isomorphism attacks, low-exponent attacks, Wiener's attack, and Hastad's attack are analysed against the proposed scheme. Selective Encryption in RSA-based Singular Cubic Curve with AVK for text-based documents is found to be robust enough to counter all these attacks.
This document proposes a new message authentication scheme for wireless sensor networks using modified El-Gamal signature on elliptic curves. It aims to overcome the problems of existing symmetric-key and public-key based authentication schemes, such as key management overhead, threshold problems, and lack of scalability. The proposed scheme provides message authentication between sensor nodes in a hop-by-hop manner and is resilient to node compromise attacks. It uses efficient elliptic curve cryptography to generate digital signatures for authenticating messages sent between nodes with less computational overhead compared to existing public-key schemes. The document describes the detailed algorithms and advantages of the proposed source anonymous message authentication approach based on modified El-Gamal signature on elliptic curves.
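The elliptic-curve arithmetic underlying schemes like the one above can be illustrated on a tiny curve. This is not the paper's modified El-Gamal signature, only the point-addition and scalar-multiplication primitive every ECC scheme builds on; the curve y² = x³ + 2x + 2 (mod 17) with generator G = (5, 1) is a standard textbook example, far too small for real use.

```python
# Point addition and double-and-add scalar multiplication on a toy curve.
# None represents the point at infinity (the group identity).

P, A = 17, 2            # field prime and curve coefficient a
G = (5, 1)              # generator point of order 19

def add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                      # inverses cancel
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def mul(k, point):
    result = None
    while k:                           # double-and-add, one bit at a time
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

print(mul(2, G))     # -> (6, 3)
print(mul(19, G))    # -> None: G has order 19, so 19*G is the identity
```

A private key is a scalar k and the public key is mul(k, G); signatures reduce to a handful of these operations, which is why ECC needs far smaller keys than RSA for comparable security.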
Provably Secure Three Party Authenticated Quantum Key Distribution Protocols (Avinash Varma Kalidindi)
This work presents quantum key distribution protocols (QKDPs) to safeguard security in large networks, drawing on new directions in both classical and quantum cryptography. Two three-party QKDPs, one with implicit user authentication and the other with explicit mutual authentication, are proposed to demonstrate the merits of the new combination, which include the following:
1) Security against such attacks as man-in-the-middle, eavesdropping and replay.
2) Efficiency is improved as the proposed protocols contain the fewest number of communication rounds among existing QKDPs.
3) Two parties can share and use a long-term secret (repeatedly). To prove the security of the proposed schemes, this work also presents a new primitive called the Unbiased-Chosen Basis (UCB) assumption.
Project Execution: https://vimeo.com/AvinashVarma/provably
A Performance Analysis of Generalized Key Scheme Block Cipher (GKSBC) Algorit... (INFOGAIN PUBLICATION)
Information is a commodity: it has economic value, and producing it incurs cost. Securing that information poses a considerable challenge, and cryptographic technology plays a leading role in securing the owner's rights over produced information. Continuous development of new encryption systems is necessitated by advancing security and efficiency needs. Cryptanalytic studies have demonstrated the superior capability of the recently developed Generalized Key Scheme Block Cipher (GKSBC) algorithm in terms of stability, execution time, and encryption quality compared to standard security algorithms. This paper evaluates the endurance of GKSBC against various cryptanalytic attacks, viz. brute-force attack, differential cryptanalysis, integral cryptanalysis, linear cryptanalysis, and the rectangle attack. None of the traditional attacks is designed to decrypt GKSBC encryption, as its use of the key scheme differs; it is therefore robust against conventional cryptanalytic attacks.
Introduction to and Survey of TLS Security (Aaron Zauner)
This document provides an introduction and survey of TLS security. It begins with an overview of motivation and background topics like information security, cryptography, and TLS. It then discusses TLS in more detail, including TLS records, the TLS handshake process, and cipher suites that combine cryptographic techniques. The document aims to cover the necessary basics to understand TLS security while recommending additional resources for deeper learning.
Selective Jamming Attack Prevention Based on Packet Hiding Methods and Wormholes (IJNSA Journal)
Because of the widespread use of wireless sensor networks (WSNs) in many applications, and because of the nature of these networks in terms of wireless communication, constrained node specifications, and deployment in difficult environments, the network is exposed to many types of external attacks. Protecting these networks from external attacks is therefore considered one of the most important research topics at this time. In this paper we investigate security in wireless sensor networks, the limitations of WSNs, and characteristic values for some types of attacks, and we provide a protection mechanism capable of detecting and protecting wireless sensor networks from a wide range of attacks.
An Enhanced IP Traceback Mechanism for Tracking the Attack Source Using Packe... (IAEME Publication)
The document discusses an enhanced IP traceback mechanism (EITM) to more efficiently trace the source of distributed denial of service (DDoS) attacks. EITM aims to reduce the number of packets required for traceback by improving existing linear and remainder packet marking schemes. It analyzes challenges in tracing attackers due to the stateless nature of the internet and proposes that an effective traceback scheme minimizes required packets. The main goal is a mechanism that needs a number of packets almost equal to the number of hops to reconstruct the attack path more efficiently.
Cryptography is the science of securing messages through encryption and decryption techniques to ensure confidentiality, integrity, and authentication. There are two main categories of cryptography - symmetric key cryptography where the same key is used by the sender and receiver, and asymmetric key cryptography where different public and private keys are used. Common techniques include substitution ciphers which replace letters with other letters or symbols, and transposition ciphers which rearrange the positions of letters in a message. The keys and algorithms used aim to protect data from unauthorized parties.
Cryptography is used to protect information by encrypting messages into an unreadable cipher text format. Modern cryptography uses either secret key cryptography, which uses a single key for encryption and decryption, or public key cryptography, which uses two mathematically related keys with one key to encrypt and the other to decrypt. The purpose of cryptography and security is to defend against hackers, industrial espionage, and to secure e-commerce, bank accounts, intellectual property, and avoid liability.
The document proposes two new autonomous system (AS) traceback techniques to identify the AS of the attacker launching a denial-of-service (DoS) attack. The first technique, called Prevent Overwriting AS Traceback (POAST), marks packets with a dynamic probability and protects marked packets from being overwritten. It encodes the attacking AS number instead of router IP addresses. The second technique, called Efficient AS Traceback (EAST), is also described but not in detail. Both are evaluated to have better performance than existing probabilistic packet marking techniques for traceback by reducing the number of packets and routers required.
A Precise Termination Condition of the Probabilistic Packet Marking Algorithm... (Mumbai Academisc)
This document summarizes a research project that proposes a precise termination condition for the probabilistic packet marking (PPM) algorithm. The PPM algorithm allows routers to encode path information onto packets during a denial of service (DoS) attack, enabling the victim to reconstruct the attack graph. However, the existing PPM algorithm lacks a well-defined termination condition, and cannot handle multiple attackers. The proposed project aims to define a termination condition to ensure the reconstructed graph accurately represents the actual attack paths. It also extends the algorithm to support tracing packets from multiple attackers.
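The marking-and-reconstruction idea behind PPM can be illustrated with a small simulation. This is the simplest node-sampling variant with a single mark field, not the paper's edge-sampling algorithm; the router names and parameters are made up for the sketch.

```python
import random

# Each router on the attack path overwrites the packet's mark field with
# probability p. Marks written far from the victim are overwritten more
# often, so the victim can order the path by mark frequency.

def send_packet(path, p, rng):
    mark = None
    for router in path:              # packet traverses the path in order
        if rng.random() < p:
            mark = router            # later routers overwrite earlier marks
    return mark

def reconstruct(path, p, n_packets, seed=1):
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_packets):
        m = send_packet(path, p, rng)
        if m is not None:
            counts[m] = counts.get(m, 0) + 1
    # Lowest count = farthest from the victim; sort to recover the order.
    return sorted(counts, key=counts.get)

path = ["R1", "R2", "R3", "R4"]      # R1 nearest the attacker, R4 the victim
print(reconstruct(path, 0.2, 20000)) # expected order R1..R4 w.h.p.
```

The termination question the paper addresses is visible here: with too few packets, the observed counts may order the routers incorrectly, so the victim needs a principled rule for when enough packets have been collected.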
The document discusses network security and begins by noting that the slides can be freely used and modified if their source is mentioned. It then provides an overview of the goals and roadmap for Chapter 8, which covers principles of cryptography, message integrity, securing various network layers, firewalls, and intrusion detection systems. The chapter aims to explain the fundamentals of network security and how security is implemented in practice.
A Survey on Generation and Evolution of Various Cryptographic Techniques (IRJET Journal)
This document summarizes previous research that has surveyed and compared various symmetric key cryptographic techniques. Several studies analyzed the performance of algorithms like DES, 3DES, AES, Blowfish, RC4 in terms of encryption/decryption time, memory usage, power consumption, throughput, and security against attacks. Most found that Blowfish had among the best performance overall, being fast and requiring few resources while maintaining strong security. AES generally required more processing power and time than alternatives like DES or RC4. The performance of algorithms could also vary based on file/data type, size, and the computing platform or operating system used.
1. The document discusses algorithm-substitution attacks (ASA) on encrypted protocols, where a government agency substitutes a user's encryption algorithm with its own algorithm to decrypt messages without the user's knowledge.
2. It analyzes how different types of symmetric encryption schemes are vulnerable or protected against ASA. Deterministic and stateful schemes provide more protection against ASA than randomized and stateless schemes.
3. The paper proposes that using deterministic, stateful symmetric encryption schemes can help achieve security against ASA by ensuring real and subverted ciphertexts are indistinguishable to attackers.
METHODS TOWARD ENHANCING RSA ALGORITHM: A SURVEY (IJNSA Journal)
Cryptography defines the methods and technologies used to ensure that communication between two parties over any medium is secure, especially in the presence of a third party. This is achieved through several methods, such as encryption, decryption, signing, and the generation of pseudo-random numbers, among many others. Cryptography uses a key, or some sort of password, to encrypt or decrypt a message that must be kept secret. This is made possible by two classes of key-based encryption and decryption algorithms, namely symmetric and asymmetric algorithms. The best-known and most widely used public-key system is RSA. The algorithm comprises three phases: key generation, encryption, and decryption. Owing to advances in computing technology, RSA is prone to some security risks, which make it less secure. The following paper reviews different proposals for enhancing the RSA algorithm and increasing its security. Some of these enhancements include combining RSA with the Diffie-Hellman or ElGamal algorithm, modifying RSA to use three or four prime numbers, offline storage of generated keys, and a secured RSA variant in which the message can be encrypted using dual encryption keys.
This document provides an overview of cryptography. It discusses cryptography as the practice of secure communication in the presence of others. The purpose of cryptography is to defend against hackers and industrial espionage while securing e-commerce, bank accounts, and intellectual property, and avoiding liability. Cryptography provides authentication, privacy, integrity, and non-repudiation. Encryption converts plain text to cipher text using a key, while decryption converts cipher text back to plain text. Common cryptographic algorithm families are secret key cryptography, public key cryptography, and hash functions. Secret key cryptography uses a private key for encryption, while public key cryptography uses a public key exchanged over an insecure channel. Hash functions produce a checksum of data. AES encryption is now commonly used.
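The checksum role of hash functions mentioned above is easy to demonstrate with the standard library; the messages are made-up examples.

```python
import hashlib

# A hash digest acts as a fixed-size checksum: any change to the input
# changes the digest, so tampering is detectable. Note that a plain hash
# alone does not authenticate the sender -- that requires a MAC such as HMAC.

msg = b"transfer 100 to alice"
digest = hashlib.sha256(msg).hexdigest()
tampered = hashlib.sha256(b"transfer 900 to alice").hexdigest()

print(digest)               # 64 hex characters (256 bits)
print(digest != tampered)   # a one-character change flips the whole digest
```

This avalanche behaviour is why digests are compared in full: two inputs differing by a single byte share no predictable structure in their hashes.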
Performance Analysis of Transport Layer Based Hybrid Covert Channel Detection ... (IJNSA Journal)
- The document discusses a performance analysis of a transport layer based hybrid covert channel detection engine. A hybrid covert channel combines two or more types of covert channels, such as a simple network covert channel in TCP and a subliminal channel in SSL.
- The authors designed a hybrid covert channel involving a subliminal channel in the Digital Signature Algorithm of SSL and a simple network covert channel manipulating TCP sequence numbers. They also designed a detection engine to analyze TCP packet headers and SSL signature components.
- The detection engine was tested on an experimental test bed with 5 nodes. Testing showed the detection rate varied between 70-97% while detection content was between 15-30%, depending on the number of covert channel invocations.
Modern-day computer security relies heavily on cryptography to protect the data we have become increasingly reliant on. A main research question in the computer security domain is how to enhance the speed of the RSA algorithm. The computing capability of the Graphics Processing Unit, as a co-processor of the CPU, can leverage massive parallelism. This paper presents a novel algorithm for calculating modulo values of large powers of numbers that are otherwise not supported by built-in data types. First, the traditional algorithm is studied. Secondly, the parallelized RSA algorithm is designed using the CUDA framework. Thirdly, the designed algorithm is realized for small and large prime numbers. As a result, fundamental problems of the RSA algorithm, such as speed and the use of poor or small prime numbers that have led to significant security holes despite RSA's mathematical soundness, can be alleviated by this algorithm.
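The primitive that both the sequential baseline and a CUDA version must compute is modular exponentiation of numbers too large for fixed-width types. A standard square-and-multiply sketch of that baseline (not the paper's parallel algorithm) looks like this:

```python
def mod_pow(base, exp, mod):
    """Square-and-multiply modular exponentiation, one exponent bit at a time."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                    # current bit set: multiply in
            result = (result * base) % mod
        base = (base * base) % mod     # square for the next bit
        exp >>= 1
    return result

# Agrees with Python's built-in three-argument pow:
print(mod_pow(65, 17, 3233))          # -> 2790, same as pow(65, 17, 3233)
```

Reducing modulo `mod` after every step keeps intermediate values bounded, which is exactly what makes the operation tractable on hardware with fixed word sizes.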
DATA SECURITY USING PRIVATE KEY ENCRYPTION SYSTEM BASED ON ARITHMETIC CODING (IJNSA Journal)
The problem faced by today's communicators is not only security but also the speed of communication and the size of content. In the present paper, a scheme is proposed that combines data compression and encryption. The first phase focuses on data compression and cryptography; the next phase emphasizes the compression cryptosystem. Finally, the proposed technique is discussed, which uses both data compression and encryption: data is first compressed to reduce its size and increase the data transfer rate, and the compressed data is then encrypted to provide security. The proposed technique is thus effective in reducing data size, increasing the data transfer rate, and providing security during communication.
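The compress-then-encrypt pipeline described above can be sketched with the standard library. The keystream cipher here (SHA-256 in counter mode) is a toy stand-in, not the paper's arithmetic-coding scheme; a real system would use a vetted cipher such as AES-GCM.

```python
import hashlib
import zlib

# Compress first to remove redundancy, then encrypt the compressed bytes.
# (The reverse order would not compress: good ciphertext looks random.)

def keystream(key, n):
    out = b""
    counter = 0
    while len(out) < n:            # stretch the key into n pseudorandom bytes
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(data, key):
    compressed = zlib.compress(data)                     # shrink first
    ks = keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))  # then encrypt

def decrypt(blob, key):
    ks = keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))

key = b"shared-secret"
msg = b"attack at dawn " * 50           # highly redundant plaintext
blob = encrypt(msg, key)
print(len(msg), len(blob))              # ciphertext is much smaller
assert decrypt(blob, key) == msg
```

The size gap illustrates the paper's motivation: compressing before encrypting both shrinks the transmitted data and leaves the ciphertext free of the plaintext's redundancy.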
Hop-by-Hop Message Authentication and Source Privacy in Wire... (Selva Raj)
This document proposes a source anonymous message authentication scheme for wireless sensor networks that improves upon previous polynomial-based approaches. The proposed scheme uses elliptic curve cryptography to enable hop-by-hop message authentication without the threshold limitation of previous schemes, where the network becomes insecure after a certain number of messages. It aims to provide message authentication, integrity, and source privacy while being efficient and resilient to node compromise. Simulation results demonstrate the scheme has lower computational and communication overhead than polynomial-based methods under comparable security levels.
a performance analysis of generalized key scheme block cipher (gksbc) algorit...INFOGAIN PUBLICATION
Information is a commodity. Information has economic value and production of it incurs cost. Securing the information is posing a considerable challenge. The cryptographic technology plays a leading role in securing the owners right on produced information. A continuous development of new encryption systems are necessitated with the advancement in security and efficiency needs. Cryptanalytic studies have demonstrated the superior capability of recently developed Generalized Key Scheme Block Cipher (GKSBC) algorithm in terms of stability, execution time and encryption quality compared to standard security algorithms. This paper proposes to evaluate the enduring capacity of GKSBC to various cryptanalytic attacks viz., Brute – Force Attack, Differential Cryptanalysis, Integral Cryptanalysis, Linear Cryptanalysis and Rectangle attack. None of the traditional attacks are designed to decrypt GKSBC encryption as the use of key scheme is different in it and therefore robust to the conventional cryptanalytic attacks.
Introduction to and survey of TLS SecurityAaron Zauner
This document provides an introduction and survey of TLS security. It begins with an overview of motivation and background topics like information security, cryptography, and TLS. It then discusses TLS in more detail, including TLS records, the TLS handshake process, and cipher suites that combine cryptographic techniques. The document aims to cover the necessary basics to understand TLS security while recommending additional resources for deeper learning.
Selective jamming attack prevention based on packet hiding methods and wormholesIJNSA Journal
Because of the widespread use of wireless sensor ne
tworks in many applications, and due to the nature
of
the specifications of these networks (WSN) in terms
of wireless communication, the network contract
specifications, and published it in difficult envir
onments. All this leads to the network exposure to
many
types of external attacks. Therefore, the protectio
n of these networks from external attacks is consid
ered the
one of the most important researches at this time.
In this paper we investigated the security in wirel
ess
sensor networks, Limitations of WSN, Characteristic
Values for some types of attacks, and have been
providing protection mechanism capable of detecting
and protecting wireless sensor networks from a wid
e
range of attacks
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09666155510, 09849539085 or mail us - ieeefinalsemprojects@gmail.com-Visit Our Website: www.finalyearprojects.org
An enhanced ip traceback mechanism for tracking the attack source using packe...IAEME Publication
The document discusses an enhanced IP traceback mechanism (EITM) to more efficiently trace the source of distributed denial of service (DDoS) attacks. EITM aims to reduce the number of packets required for traceback by improving existing linear and remainder packet marking schemes. It analyzes challenges in tracing attackers due to the stateless nature of the internet and proposes that an effective traceback scheme minimizes required packets. The main goal is a mechanism that needs a number of packets almost equal to the number of hops to reconstruct the attack path more efficiently.
Cryptography is the science of securing messages through encryption and decryption techniques to ensure confidentiality, integrity, and authentication. There are two main categories of cryptography - symmetric key cryptography where the same key is used by the sender and receiver, and asymmetric key cryptography where different public and private keys are used. Common techniques include substitution ciphers which replace letters with other letters or symbols, and transposition ciphers which rearrange the positions of letters in a message. The keys and algorithms used aim to protect data from unauthorized parties.
Cryptography is used to protect information by encrypting messages into an unreadable cipher text format. Modern cryptography uses either secret key cryptography, which uses a single key for encryption and decryption, or public key cryptography, which uses two mathematically related keys with one key to encrypt and the other to decrypt. The purpose of cryptography and security is to defend against hackers, industrial espionage, and to secure e-commerce, bank accounts, intellectual property, and avoid liability.
The document proposes two new autonomous system (AS) traceback techniques to identify the AS of the attacker launching a denial-of-service (DoS) attack. The first technique, called Prevent Overwriting AS Traceback (POAST), marks packets with a dynamic probability and protects marked packets from being overwritten. It encodes the attacking AS number instead of router IP addresses. The second technique, called Efficient AS Traceback (EAST), is also described but not in detail. Both are evaluated to have better performance than existing probabilistic packet marking techniques for traceback by reducing the number of packets and routers required.
A precise termination condition of the probabilistic packet marking algorithm...Mumbai Academisc
This document summarizes a research project that proposes a precise termination condition for the probabilistic packet marking (PPM) algorithm. The PPM algorithm allows routers to encode path information onto packets during a denial of service (DoS) attack, enabling the victim to reconstruct the attack graph. However, the existing PPM algorithm lacks a well-defined termination condition, and cannot handle multiple attackers. The proposed project aims to define a termination condition to ensure the reconstructed graph accurately represents the actual attack paths. It also extends the algorithm to support tracing packets from multiple attackers.
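The marking idea, and why a termination condition matters, can be illustrated with a small simulation. This is a sketch under simplifying assumptions (each router marks with a fixed probability, a later mark overwrites an earlier one, and the victim stops once it has seen a mark from every hop); it is a naive stand-in for the paper's precise termination condition, not a reproduction of it.

```python
import random

def packets_until_full_path(d, p):
    """Simulate last-marker PPM on a single d-hop attack path.
    Each router marks a passing packet with probability p,
    overwriting any earlier mark.  The victim collects packets and
    stops (naive termination) once every router has been seen.
    Returns how many packets that took."""
    seen, count = set(), 0
    while len(seen) < d:
        count += 1
        mark = None
        for router in range(d):          # packet traverses routers 0..d-1
            if random.random() < p:
                mark = router            # later marks overwrite earlier ones
        if mark is not None:
            seen.add(mark)
    return count
```

Running this with, say, `d=10` and `p=0.1` shows that marks from routers far from the victim are frequently overwritten, so far more packets than hops are needed, which is the gap a well-defined termination condition has to account for.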
The document discusses network security and begins by noting that the slides can be freely used and modified if their source is mentioned. It then provides an overview of the goals and roadmap for Chapter 8, which covers principles of cryptography, message integrity, securing various network layers, firewalls, and intrusion detection systems. The chapter aims to explain the fundamentals of network security and how security is implemented in practice.
A Survey on Generation and Evolution of Various Cryptographic TechniquesIRJET Journal
This document summarizes previous research that has surveyed and compared various symmetric key cryptographic techniques. Several studies analyzed the performance of algorithms like DES, 3DES, AES, Blowfish, RC4 in terms of encryption/decryption time, memory usage, power consumption, throughput, and security against attacks. Most found that Blowfish had among the best performance overall, being fast and requiring few resources while maintaining strong security. AES generally required more processing power and time than alternatives like DES or RC4. The performance of algorithms could also vary based on file/data type, size, and the computing platform or operating system used.
1. The document discusses algorithm-substitution attacks (ASA) on encrypted protocols, where a government agency substitutes a user's encryption algorithm with its own algorithm to decrypt messages without the user's knowledge.
2. It analyzes how different types of symmetric encryption schemes are vulnerable or protected against ASA. Deterministic and stateful schemes provide more protection against ASA than randomized and stateless schemes.
3. The paper proposes that using deterministic, stateful symmetric encryption schemes can help achieve security against ASA by ensuring real and subverted ciphertexts are indistinguishable to attackers.
METHODS TOWARD ENHANCING RSA ALGORITHM : A SURVEYIJNSA Journal
Cryptography defines the methods and technologies used to ensure that communication between two parties over any communication medium is secure, especially in the presence of a third party. This is achieved through several methods, such as encryption, decryption, signing, and the generation of pseudo-random numbers, among many others. Cryptography uses a key, or some sort of password, to either encrypt or decrypt a message that needs to be kept secret. This is made possible using two classes of key-based encryption and decryption algorithms, namely symmetric and asymmetric algorithms. The best known and most widely used public key system is RSA. This algorithm comprises three phases: the key generation phase, the encryption phase, and the decryption phase. Owing to advances in computing technology, RSA is prone to some security risks that make it less secure. The following paper reviews different proposals on methods used to enhance the RSA algorithm and increase its security. Some of these enhancements include combining the RSA algorithm with the Diffie-Hellman or ElGamal algorithm, modifying RSA to use three or four prime numbers, offline storage of generated keys, and a secured RSA algorithm in which the message can be encrypted using dual encryption keys.
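The three RSA phases listed above can be sketched with toy parameters. The primes 61 and 53 below are for illustration only; real deployments use primes hundreds of digits long.

```python
def rsa_keygen(p=61, q=53, e=17):
    """Key generation phase: n = p*q, phi = (p-1)*(q-1),
    d = e^(-1) mod phi (requires gcd(e, phi) == 1)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
    return (e, n), (d, n)        # (public key, private key)

def rsa_encrypt(m, pub):
    """Encryption phase: c = m^e mod n (the message m must be < n)."""
    e, n = pub
    return pow(m, e, n)

def rsa_decrypt(c, priv):
    """Decryption phase: m = c^d mod n."""
    d, n = priv
    return pow(c, d, n)
```

With these toy parameters (n = 3233), encrypting the message 65 yields the ciphertext 2790, and decrypting 2790 recovers 65.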
This document provides an overview of cryptography, the practice of secure communication in the presence of third parties. The purpose of cryptography is to defend against hackers and industrial espionage while securing e-commerce, bank accounts, and intellectual property, and avoiding liability. Cryptography provides authentication, privacy, integrity, and non-repudiation. Encryption converts plain text to cipher text using a key, while decryption converts cipher text back to plain text. Common cryptographic algorithm families are secret key cryptography, public key cryptography, and hash functions. Secret key cryptography uses a single private key for both encryption and decryption, while public key cryptography uses a public key that can be exchanged over an insecure channel. Hash functions produce a checksum of data. AES encryption is now commonly used and
Performance analysis of transport layer basedhybrid covert channel detection ...IJNSA Journal
- The document discusses a performance analysis of a transport layer based hybrid covert channel detection engine. A hybrid covert channel combines two or more types of covert channels, such as a simple network covert channel in TCP and a subliminal channel in SSL.
- The authors designed a hybrid covert channel involving a subliminal channel in the Digital Signature Algorithm of SSL and a simple network covert channel manipulating TCP sequence numbers. They also designed a detection engine to analyze TCP packet headers and SSL signature components.
- The detection engine was tested on an experimental test bed with 5 nodes. Testing showed the detection rate varied between 70-97% while detection content was between 15-30%, depending on the number of covert channel invocations.
Modern-day computer security relies heavily on cryptography as a means to protect the data that we have become increasingly reliant on. A main line of research in the computer security domain is how to improve the speed of the RSA algorithm. The computing capability of the Graphics Processing Unit, acting as a co-processor to the CPU, can leverage massive parallelism. This paper presents a novel algorithm for calculating the modulo value of large powers of numbers that are otherwise not supported by built-in data types. First, the traditional algorithm is studied. Second, a parallelized RSA algorithm is designed using the CUDA framework. Third, the designed algorithm is realized for both small and large prime numbers. As a result, the fundamental problems of the RSA algorithm, namely its speed and the use of poor or small prime numbers that have led to significant security holes despite RSA's mathematical soundness, can be alleviated by this algorithm.
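The sequential baseline for the "modulo of large powers" problem the paper targets is square-and-multiply modular exponentiation, sketched below. This single-threaded version is only the classical starting point; it does not attempt to reproduce the paper's CUDA-parallel design.

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply: computes base**exp % mod
    while keeping every intermediate product below mod**2, so the
    astronomically large full power is never materialized."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # low bit of exp set: fold factor in
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result
```

This needs only O(log exp) multiplications; Python's built-in three-argument `pow(base, exp, mod)` implements the same idea and can be used to cross-check the sketch.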
DATA SECURITY USING PRIVATE KEY ENCRYPTION SYSTEM BASED ON ARITHMETIC CODINGIJNSA Journal
The problem faced by today's communicators is not only security but also the speed of communication and the size of the content. In the present paper, a scheme is proposed that combines data compression and encryption. The first phase focuses on data compression and cryptography; the next phase emphasizes the compression cryptosystem; finally, the proposed technique, which combines data compression and encryption, is discussed. In it, data is first compressed to reduce its size and increase the data transfer rate, and the compressed data is then encrypted to provide security. Hence, the proposed technique is effective in reducing data size, increasing the data transfer rate, and providing security during communication.
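The compress-then-encrypt pipeline can be sketched as follows. The choice of `zlib` for compression and of a SHA-256-based XOR keystream for encryption is an assumption made for illustration only; the paper pairs its own arithmetic-coding compressor with a private key encryption system.

```python
import zlib, hashlib, secrets

def keystream(key, nonce, length):
    """Illustrative keystream: SHA-256 in counter mode.  This stands
    in for whatever cipher the scheme actually uses."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def compress_then_encrypt(data, key):
    compressed = zlib.compress(data)        # phase 1: shrink the payload
    nonce = secrets.token_bytes(16)         # fresh nonce per message
    ks = keystream(key, nonce, len(compressed))
    return nonce + bytes(a ^ b for a, b in zip(compressed, ks))

def decrypt_then_decompress(blob, key):
    nonce, body = blob[:16], blob[16:]      # reverse the two phases
    ks = keystream(key, nonce, len(body))
    return zlib.decompress(bytes(a ^ b for a, b in zip(body, ks)))
```

The ordering matters: compressing first exploits the redundancy of the plaintext, whereas well-encrypted data looks random and no longer compresses.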
Hop-by-hop message authentication and source privacy in wireless sensor networksSelva Raj
This document proposes a source anonymous message authentication scheme for wireless sensor networks that improves upon previous polynomial-based approaches. The proposed scheme uses elliptic curve cryptography to enable hop-by-hop message authentication without the threshold limitation of previous schemes, where the network becomes insecure after a certain number of messages. It aims to provide message authentication, integrity, and source privacy while being efficient and resilient to node compromise. Simulation results demonstrate the scheme has lower computational and communication overhead than polynomial-based methods under comparable security levels.
The document discusses the spread of a zombie infection through technology. Links are provided to photos related to the infection and zombies. The infection is described as spreading from person to person through technology until most have become tech zombies. Some images show people interacting to try and control or cure the infection, while others depict individuals who have succumbed to becoming zombies themselves.
This document is from a website called Coupon Caboodle that provides local grocery store and restaurant coupons in Chicago, Illinois. Users can sign up by providing their contact information and selecting whether they want grocery store coupons, restaurant coupons, or both. After signing up, users will gain access to coupons in several formats including printable coupons, online coupon codes, mobile coupons, and the ability to load coupons onto savings cards.
Patrick Green completed five years in the U.S. Navy and wanted to continue working in the same field as an active duty member. He had always been interested in creating products that would be seen by thousands of people. After learning Photoshop and Illustrator on his own, he obtained a government job as a visual graphics coordinator, an amazing job that allowed him to pursue his passion for media and help him grow professionally and personally.
The document summarizes the findings of a study that tested over 5,000 web applications for vulnerabilities. It found that 99% of applications had at least one vulnerability, and 82% had at least one high or critical vulnerability. The most common vulnerability was cross-site scripting (61%). The banking industry had the fewest vulnerabilities while retail had the most. On average, each application contained 35 vulnerabilities.
The author expresses her gratitude to her father for always being there for her when she feels sad, respecting her way of dressing and of being, letting her be herself and have fun as she wishes, respecting her space and taking an interest in what happens to her, making her feel that she can achieve great things, showing her that even if she makes mistakes she can get up and keep going, and teaching her that love makes her a better person; and although he has not always been present, she knows that wherever he is, he protects and cares for her deeply.
This was a presentation given during our CTEL away day. It describes the different channels which could be utilized to promote CTEL work and research and increase networking both internally and externally.
The objective of this project was to classify the given set of events as either tau-tau decay of Higgs Boson or as a background noise. This project was completed as a part of the Machine Learning module. We have come up with an ensemble model with XGBoosting and Random Forest classifiers to solve this problem.
An assessment of how the labour market organizations' proposed competitiveness pact would affect wage earners' net income and income inequality. Also included is an estimate of how the social insurance contribution increases and holiday-pay cuts would be distributed between men and women.
This report, produced by WhiteHat in May 2013, offers a pertinent view of web threats and of the parameters to take into account to ensure security and availability.
The document discusses covert channels, which are secret communication channels that violate security policies. It provides background on covert channels and defines them. It describes classic examples of covert channels from the 1980s that operated at the operating system level. It then discusses more modern covert channels that can operate over networks by encoding data in normally unused fields in network protocol headers, such as the IP ID or initial TCP sequence number fields. It notes that application proxies or firewalls can prevent many network covert channels by terminating and recreating network connections.
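Encoding data in a normally unused header field, as described above, amounts to simple bit manipulation. The sketch below hides one covert byte in a 32-bit initial TCP sequence number; the 8/24-bit split is an illustrative choice for this example, not a standard.

```python
def hide_byte_in_isn(secret_byte, random_bits):
    """Place one covert byte in the top 8 bits of a 32-bit initial
    sequence number (ISN); the low 24 bits stay (pseudo)random so the
    value still looks like an ordinary ISN on the wire."""
    return ((secret_byte & 0xFF) << 24) | (random_bits & 0xFFFFFF)

def recover_byte_from_isn(isn):
    """Receiver side of the covert channel: read the top 8 bits back."""
    return (isn >> 24) & 0xFF
```

One byte per connection setup is a tiny bandwidth, which is exactly why such channels are hard to notice, and why a proxy or firewall that terminates and recreates connections (choosing fresh ISNs) destroys the channel.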
A New Way of Identifying DOS Attack Using Multivariate Correlation Analysisijceronline
This document summarizes a research paper that proposes a new method for identifying denial of service (DoS) attacks using multivariate correlation analysis (MCA). The method involves three main steps: 1) generating basic features from network traffic, 2) using MCA to extract correlations between features and generate triangle area maps, and 3) using an anomaly-based detection mechanism to distinguish attacks from normal traffic based on differences from pre-generated normal profiles. The researchers evaluate their method on the KDD Cup 99 dataset and achieve moderate detection performance. However, they identify issues related to differences in feature scales that reduce detection of some attacks. They propose using statistical normalization to address this.
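The statistical normalization the researchers propose can be sketched as a per-feature z-score computed from normal-traffic profiles. This is an assumed interpretation for illustration; the summary above does not fix the exact formula.

```python
def fit_normalizer(records):
    """Estimate per-feature mean and standard deviation from a list
    of feature vectors drawn from normal traffic."""
    n = len(records)
    dims = len(records[0])
    means = [sum(r[d] for r in records) / n for d in range(dims)]
    stds = []
    for d in range(dims):
        var = sum((r[d] - means[d]) ** 2 for r in records) / n
        std = var ** 0.5
        stds.append(std if std > 0 else 1.0)   # guard against zero variance
    return means, stds

def normalize(vector, means, stds):
    """Z-score each feature so that features measured on very
    different scales contribute comparably to the detector."""
    return [(v - m) / s for v, m, s in zip(vector, means, stds)]
```

Without this step, a feature measured in bytes (values in the thousands) would dominate one measured as a rate (values near 1), which matches the scale problem the paper identifies.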
Different data block sizes used to evaluate the performance between different...IJCNCJournal
Computer networks, whether wired or wireless, are becoming more popular, and with them the emphasis on security. Different security algorithms and techniques are used to avoid the aforementioned attacks. One of these techniques is cryptography, which renders data unreadable during transfer so that there is no chance of recovering the information. Presently, most users transfer data over the internet using various media types, but this creates opportunities for the data to be intercepted. The solution to this problem is to provide security on a time-to-time basis; this stage is always significant in security-related community discussions. This paper compares the run times of three different encryption algorithms: DES, AES, and Blowfish. The comparison covers different modes, data block sizes, and operation modes. As a result, the Blowfish algorithm, followed by AES, takes less time to run compared to DES.
This document provides an overview of cryptographic algorithms and their uses. It begins with symmetric encryption, which uses a single secret key to encrypt and decrypt data, providing confidentiality. The most common symmetric algorithms are the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES), which are block ciphers that encrypt data in fixed-size blocks. It also discusses stream ciphers, which encrypt data one element at a time. The document then covers secure hash functions, public-key encryption, digital signatures, and key management before concluding with an example application of encrypting stored data.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Efficient Data Mining Of Association Rules in Horizontally Distributed Databasesijircee
This document proposes a protocol to securely mine association rules from horizontally distributed databases in a privacy-preserving manner. The key aspects of the protocol are:
1) It uses a novel secure multi-party protocol to compute the union of private subsets held by different players, improving on prior work by avoiding commutative encryption and oblivious transfer.
2) It includes a protocol to test if an element held by one player is contained within a private subset held by another player.
3) Experimental results show the protocol has significantly lower communication and computation costs than prior work, while still protecting individual player's privacy beyond just the final mining results.
This document discusses surreptitiously weakening cryptographic systems through sabotage. It provides an overview of this domain using historical examples to develop a taxonomy for comparing different approaches to sabotage. The taxonomy characterizes weaknesses along dimensions like secrecy, utility, and scope. Understanding avenues for and defenses against deliberate cryptographic weakening is important given allegations that governments have subverted cryptographic standards.
This document provides an overview of an Nt1310 Unit 6 Powerpoint presentation with the following key points:
1. The setup phase takes in a security parameter and selects a bilinear group with generator b. It selects attributes, exponents, and generates public and master keys.
2. The key generation phase takes a set of attributes as input and produces a secret key equivalent to those attributes. It selects a random number and calculates the key.
3. The encryption phase is not described in detail in this excerpt.
This document provides an introduction and overview of information system security. It covers topics such as security attacks, services, and mechanisms. The document is divided into multiple units that cover encryption techniques like the Data Encryption Standard (DES) and advanced topics such as public key cryptosystems, hash functions, and IP security. DES encryption is explained in detail, covering aspects like its history, design, encryption process, key generation, decryption, and strengths/limitations. Feistel ciphers and their design principles are also summarized.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Design of Transport Layer Based Hybrid Covert Channel Detection Engineijasuc
Computer networks are unpredictable due to information warfare and are prone to various attacks. Such attacks compromise the network's most important attribute: privacy. Most of these attacks are devised using a special communication channel called a "covert channel". The word "covert" stands for hidden or non-transparent. A network covert channel is a concealed communication path within legitimate network communication that clearly violates the security policies laid down. The non-transparency in a covert channel is also referred to as a trapdoor. A trapdoor is an unintended design within legitimate communication whose purpose is to leak information. A subliminal channel, a variant of the covert channel, works similarly except that the trapdoor is set in a cryptographic algorithm. A composition of a covert channel with a subliminal channel is the "hybrid covert channel". A hybrid covert channel is a homogeneous or heterogeneous mixture of two or more variants of covert channels, active either at the same instant or at different instants in time. Detecting such malicious channel activity plays a vital role in removing threats to the legitimate network. In this paper, we present a study of multi-trapdoor covert channels and introduce the design of a new detection engine for hybrid covert channels in the transport layer, visualized in TCP and SSL.
Enhanced security for non English users of Wireless Sensor NetworksEswar Publications
Wireless Sensor Networks are infrastructure-less, self-configured, reprogrammable, energy-aware networks used in various applications. Many schemes secure data consisting mainly of ASCII values but do not address non-English end users. BDNA cryptography describes how to encrypt non-English patterns, but it leads to more bits being transmitted, which indirectly consumes more energy in a WSN. In this paper we propose new steps to reduce the number of bytes transmitted in the network. This yields high propagation speed in the network with minimal hash overhead.
This document describes the implementation of Caesar cipher encryption and decryption programs in Java, C++, and Python. It discusses the key steps in the encryption and decryption methods. The encryption method reads plaintext from an input file, encrypts each character using a Caesar cipher shift defined by a user-input key, and writes the ciphertext to an output file. The decryption method performs the reverse process, reading ciphertext and writing decrypted plaintext. Helper methods are used to encrypt/decrypt single characters. Flow charts illustrate the code logic and relationships between methods.
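The encryption and decryption logic summarized above is compact in Python (file I/O and the flow charts are omitted here); the shift arithmetic is the standard Caesar construction.

```python
def caesar_shift_char(ch, key):
    """Shift a single alphabetic character by `key` positions,
    preserving case; other characters pass through unchanged."""
    if ch.isupper():
        return chr((ord(ch) - 65 + key) % 26 + 65)
    if ch.islower():
        return chr((ord(ch) - 97 + key) % 26 + 97)
    return ch

def caesar_encrypt(plaintext, key):
    """Encrypt by shifting every character forward by `key`."""
    return "".join(caesar_shift_char(c, key) for c in plaintext)

def caesar_decrypt(ciphertext, key):
    """Decrypt by shifting backward; Python's % keeps the result
    in range even for a negative shift."""
    return caesar_encrypt(ciphertext, -key)
```

For example, `caesar_encrypt("Hello, World!", 3)` produces `"Khoor, Zruog!"`, and decrypting with the same key recovers the plaintext.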
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
SOM-PAD: Novel Data Security Algorithm on Self Organizing Map cscpconf
Data security is one of the major challenges in the recent literature. Cryptography is the most common means of securing data, and one main aspect of cryptography is creating a hard-to-guess cipher. Artificial Neural Networks (ANNs) are among the machine learning techniques widely employed in several fields, chosen for their characteristics depending on the application area; one of these fields is data security. The novel idea in this paper is the use of the self-organizing map (SOM) algorithm as the core of a pad construction: the pad is used to generate the cipher at one end, and at the other end of the communication the same process is synchronized to generate the same pad as the deciphering key. The security of the proposed model depends on the complex nature of ANNs. The algorithm can be categorized under symmetric cryptography, merging both stream and block ciphers. A modified version of the same algorithm is also presented, employing permutation and variable SOM neighborhoods. The proposal can be applied to several file formats, such as videos, images, text files, and data benchmarks, as shown in the experimental results.
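The synchronization idea, in which both endpoints derive an identical pad without ever transmitting it, can be sketched with a hash chain standing in for the SOM-based construction. This substitution is an assumption for illustration only; the paper derives its pad from a self-organizing map, not from SHA-256.

```python
import hashlib

def generate_pad(shared_seed, length):
    """Both endpoints run this with the same seed, so they derive an
    identical pad without sending it over the channel.  A SHA-256
    chain stands in here for the paper's SOM-based pad generator."""
    pad, state = b"", shared_seed
    while len(pad) < length:
        state = hashlib.sha256(state).digest()
        pad += state
    return pad[:length]

def xor_with_pad(data, pad):
    """Stream-cipher step: XOR is its own inverse, so the same call
    both enciphers and deciphers."""
    return bytes(a ^ b for a, b in zip(data, pad))
```

Sender and receiver each call `generate_pad` with the shared secret; the sender XORs the plaintext with the pad, and the receiver XORs the ciphertext with its identically generated pad to recover the message.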
Protecting location privacy in sensor networks against a global eavesdropperShakas Technologies
The document discusses techniques for providing location privacy in sensor networks against a global eavesdropper. It proposes four techniques - periodic collection, source simulation, sink simulation, and backbone flooding - to provide location privacy for monitored objects (source location privacy) and data sinks (sink location privacy). These techniques provide trade-offs between privacy, communication cost, and latency. Analysis and simulation demonstrate that the proposed techniques are efficient and effective for providing source and sink location privacy in sensor networks.
File transfer with multiple security mechanismShubham Patil
The system enhances the security and data confidentiality between the sender and receiver through a two-layer encryption mechanism and a QR code for verification. The system consists of three main components that are essential to providing security between the sender and receiver while transmitting the data.
Similar to Compression and information leakage of plaintext (20)
The French and the protection of personal dataBee_Ware
This survey by the CSA institute provides an overview of concerns related to the protection of personal data in France. Conducted among more than 1,000 people, the study highlights the French public's awareness of the risks of identity theft and the theft of sensitive data.
This document summarizes DDoS threat trends from 2013 to early 2014 based on attacks seen by Incapsula. Key findings include:
- 81% of network attacks in the last 90 days used multiple vectors simultaneously, with over a third employing 3 or more vectors. This multi-vector approach allows attackers to bypass defenses.
- Large SYN floods combined with regular SYN floods ("SYN combo attacks") accounted for around 75% of large-scale network attacks above 20Gbps.
- NTP amplification attacks increased significantly in early 2014 and became the most common vector for large attacks in February 2014.
- Application layer attacks increased 240% from 2013, with over half originating from India, China, and Iran.
Top ten big data security and privacy challengesBee_Ware
The document discusses the top 10 security and privacy challenges of big data. It begins by explaining how big data has expanded through streaming cloud technology, rendering traditional security mechanisms inadequate. It then outlines a 3-step process used to identify the top 10 challenges: 1) interviewing CSA members and reviewing trade journals to draft an initial list, 2) studying published solutions, and 3) characterizing remaining problems as challenges if solutions did not adequately address problem scenarios. The top 10 challenges are then grouped into 4 aspects: infrastructure security, data privacy, data management, and integrity and reactive security. The first challenge discussed in detail is securing computations in distributed programming frameworks.
This report provides an overview of global compliance with the Payment Card Industry Data Security Standard (PCI DSS) based on hundreds of assessments conducted between 2011-2013. The key findings are that only around 11% of companies assessed were fully compliant with all 12 PCI DSS requirements, and the report identifies areas where organizations commonly struggle with compliance. It recommends that organizations view PCI compliance as an ongoing process that requires executive sponsorship and should be part of wider governance, risk, and compliance efforts.
Are European companies well armed to confront cyber a...Bee_Ware
Conducted by Steria, this study presents the new types of cyber attacks and their impact in terms of business, finances, and damage to reputation.
Mastering information system security for industrial systemsBee_Ware
This document presents the security challenges associated with industrial IT systems. Discover the myths, vulnerabilities, and potential impacts, as well as a checklist of best practices to follow.
1) The document discusses a European consumer survey on attitudes toward biometric technology, which authenticates people using physical characteristics like fingerprints, face, iris, and veins.
2) The majority of citizens across European countries support using biometrics to identify criminals and authenticate identity cards/passports.
3) However, fewer than half of European citizens favor replacing bank PIN numbers with biometrics due to privacy concerns over this highly innovative technology.
This document summarizes the findings of a study on managing complexity in identity and access management (IAM) conducted by Ponemon Institute. Some key findings:
1) Most organizations find their IAM processes overly complex and difficult to manage, with over 300 information resources and 1200 access requests per month on average.
2) Respondents believe access changes are not fulfilled in a timely manner, access requests are not always verified against policies, and IAM policies are not strictly enforced.
3) The costs of IAM failures are estimated at $105 million annually on average due to lost productivity, revenue, and technical support costs.
4) Growth of unstructured data, mobile devices, regulations,
The document provides 8 predictions for cybersecurity threats in 2014:
1) Advanced malware volume will decrease but attacks will become more targeted and stealthy.
2) A major data-destruction attack such as ransomware will successfully target organizations.
3) Attackers will increasingly target cloud data rather than enterprise networks.
4) Exploit kits like Redkit and Neutrino will struggle for dominance following the arrest of the Blackhole exploit kit author.
5) Java vulnerabilities will remain highly exploitable and exploited with expanded consequences.
6) Attackers will use professional social networks like LinkedIn to target executives and organizations.
7) Cybercriminals will target weaker links in organizations
Guide to implementing strong authentication with a CPS cardBee_Ware
Intended more specifically for project managers and for technical and application architects, this guide presents the implementation of strong authentication with a CPS card (Carte de Professionnels de Santé).
The 2013 Cost of Data Breach Study: France found that the average cost of a data breach in France increased from €122 per lost or stolen record in 2011 to €127 per record in 2012. The total average organizational cost of a data breach also rose over this period, from €2.55 million to €2.86 million. Malicious attacks were the most common cause of breaches, accounting for 42% of cases. Lost business costs, which include customer churn, increased sharply from €0.78 million in 2011 to €1.19 million in 2012. Certain organizational factors like having an incident response plan in place were found to lower the costs of a breach.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Harnessing the Power of NLP and Knowledge Graphs for Opioid Research
Compression and Information Leakage of Plaintext
John Kelsey, Certicom
(kelsey.j@ix.netcom.com)
1 Introduction
Cryptosystems like AES and triple-DES are designed to encrypt a sequence of
input bytes (the plaintext) into a sequence of output bytes (the ciphertext) in
such a way that the output carries no information about that plaintext except its
length. In recent years, concerns have been raised about “side-channel” attacks
on various cryptosystems: attacks that make use of some kind of leaked informa-
tion about the cryptographic operations (e.g., power consumption or timing) to
defeat them. In this paper, we describe a somewhat different kind of side-channel
provided by data compression algorithms, yielding information about their in-
puts by the size of their outputs. The existence of some information about a
compressor’s input in the size of its output is obvious; here, we discuss ways to
use this apparently very small leak of information in surprisingly powerful ways.
The compression side-channel differs from side-channels described in [Koc96]
[KSHW00] [KJY00] in two important ways:
1. It reveals information about plaintext, rather than key material.
2. It is a property of the algorithm, not the implementation. That is, any im-
plementation of the compression algorithm will be equally vulnerable.
1.1 Summary of Results
Our results are as follows:
1. Commonly-used lossless compression algorithms leak information about the
data being compressed, in the size of the compressor output. While this
would seem like a very small information leak, it can be exploited in sur-
prisingly powerful ways, by exploiting the ability of many compression algo-
rithms to adapt to the statistics of their previously-processed input data.
2. We consider the “stateless compression side-channel,” based on the compres-
sion ratio of an unknown string without reference to the rest of the message’s
contents. We also consider the much more powerful “stateful compression
side-channel,” based on the compression ratio of an unknown string, given
information about the rest of the message.
3. We describe a number of simple attacks based mainly on the stateless side-
channel.
4. We describe attacks to determine whether some string S appears often in a
set of messages, using the stateful side-channel.
5. We describe attacks to extract a secret string S that is repeated in many
compressed messages, under partial chosen plaintext assumptions, using the
stateful side-channel.
6. We consider countermeasures that can make both the stateless and the state-
ful side-channels substantially harder to exploit, and which may thus block
some of these attacks.
7. We discuss the implications of these results, in light of the widespread use of
compression with encryption, and the “folk wisdom” suggesting that adding
compression to an encryption application will increase security.
1.2 Practical Impact of Results
Compression algorithms are widely used in real-world applications, and have a
large impact on those applications’ performance in terms of speed, bandwidth
requirements, and storage requirements. For example, PGP and GPG compress
using the Zip Deflate algorithm before encrypting, IPSec can use IPComp to
compress packets before encrypting them, and both the SSH and TLS protocols
support an option for on-the-fly compression.
Potential security implications of using compression algorithms are of prac-
tical importance to people designing systems that might use both compression
and encryption.
The side-channel attacks described in this paper can have a practical impact
on security in many situations. However, it is important to note that these
attacks have little security impact on, say, a bulk encryption application which
compresses data before encrypting. To a first-order approximation, the attacks in
this paper are described in decreasing order of practicality. The string-extraction
attacks are not likely to be practical against many systems, since they require
such a specialized kind of partial chosen-plaintext access. The string-detection
attacks have less stringent requirements, and so are likely to be useful against
more systems. The passive information leakage attacks are likely practical to
use against any system that uses compression and encryption together, and for
which some information about input size is available.
In a broader sense, the results in this paper point to the need to consider the
impact of any pre- or post-processing done along with encryption and authenti-
cation. For example, we have not considered timing channels from compression
algorithms in this paper, but such channels will clearly exist for some compres-
sion algorithms, and must also exist for many other kinds of processing done
on plaintext before it is sent, or ciphertext after it is received and decrypted.
Similarly, anything done to the decrypted ciphertext of a message before au-
thenticating the result is subject to reaction attacks: attacks in which changes in
the ciphertext can cause different error messages or other behavior on the part
of the receiver, depending on some secret information that the attacker seeks to
reveal. (For decompressors which must terminate decompression with an error
for some possible inputs, for example, there are serious dangers with respect to
reaction attacks, or even with buffer-overrun or other related attacks.)
1.3 Previous Work
Although the existence of the stateless compression side channel is obvious, we have
seen very little reference to it in the literature. Nearly all published works dis-
cussing compression and encryption describe how compression improves the se-
curity of encryption.
One of the attendees of FSE 2002 brought [BCL02] to our attention; in this
article, researchers noticed that they could use the compression ratio of
a file to determine the language in which it was written. This is the same
phenomenon that underlies one of our stateless side channels.
1.4 Guide to the Paper
The remainder of this paper is arranged as follows: First, we discuss commonly-
used compression methods, and how they interact with encryption. Next, we
describe the side-channel which we will use in our attacks. We then consider
several kinds of attack, making use of this side channel. We conclude with a
discussion of various complications to the attacks, and possible defenses against
them.
2 Lossless Compression Methods and the Compression
Side-Channels
The goal of any compression algorithm (note: in this paper, we consider only
lossless compression algorithms) is to reduce the redundancy of some block of
data, so that an input that required R bits to encode can be written as an out-
put with fewer than R bits. All lossless compression algorithms work by taking
advantage of the fact that not all messages of R bits are equally likely to be
sent. These compression algorithms make a trade-off: they effectively encode the
higher probability messages with fewer bits, while encoding the lower proba-
bility messages with more bits. The compression algorithms in widespread use
today typically use two assumptions to remove redundancy: They assume that
characters and strings that have appeared recently in the input are likely to
recur, and that some values (strings, lengths, and characters) are more likely to
occur than others. Using these two assumptions, these algorithms are effective
at compressing a wide variety of commonly-used data formats.
Many compression algorithms (and specifically, the main one we will con-
sider here) make use of a “sliding window” of recently-seen text. Strings that
appear in the window are encoded by reference to their position in the window.
Other compression algorithms keep recently-seen strings in an easily-searched
data structure; strings that appear in that structure are encoded in an efficient
way.
Essentially all compression algorithms make use of ways to efficiently encode
symbols (characters, strings, lengths, dictionary entries) of unequal frequency,
so that commonly-occurring symbols are encoded using fewer bits than rarely-
occurring symbols.
For the purposes of this paper, it is necessary to understand three things
about these compression functions:
1. At any given point in the process of compressing a message, there are gener-
ally many different input strings of the same length which will compress to
different lengths. This inherently leaks information about these input strings.
2. The most generally useful compression algorithms encode the next few bytes
of input in different ways (and to different lengths), depending on recently-
seen inputs.
3. While a single “pass” of a compression algorithm over a string can leak only
a small amount of data about that string, multiple “passes” with different
data appearing before that string can leak a great deal of data about that
string.
This summary necessarily omits a lot of detail about how compression al-
gorithms work. For a more complete introduction to the techniques used in
compression algorithms, see [Sal97a] or [CCF01a].
2.1 Interactions with Encryption
Essentially all real-world ciphers output data with no detectable redundancy.
This means that ciphertext won’t compress, and so if a system is to benefit from
compression, it must compress the information before it is encrypted.
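This ordering constraint is easy to see empirically. The sketch below uses Python’s zlib, with random bytes standing in for ciphertext (good ciphertext is computationally indistinguishable from random): redundant plaintext shrinks, while ciphertext-like data does not.

```python
import os
import zlib

# Redundant plaintext compresses well.
text = b"the quick brown fox jumps over the lazy dog " * 100
compressed_text = zlib.compress(text)
assert len(compressed_text) < len(text)

# Ciphertext looks like random bytes; zlib cannot shrink it and in fact
# adds a few bytes of framing overhead.
fake_ciphertext = os.urandom(len(text))
compressed_ct = zlib.compress(fake_ciphertext)
assert len(compressed_ct) >= len(fake_ciphertext)

print(len(text), len(compressed_text), len(compressed_ct))
```

Hence any system that wants the bandwidth benefit of compression must place the compressor before the cipher, which is exactly what exposes the side channel discussed here.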
The “folk wisdom” in the cryptographic community is that adding compres-
sion to a system that does encryption adds to the security of the system, e.g.,
makes it less likely that an attacker might learn anything about the data be-
ing encrypted. This belief is generally based on concerns about unicity distance,
keysearch difficulty, or the difficulty of known- or chosen-plaintext attacks. We believe
that this folk wisdom, though often repeated in a variety of sources, is not gener-
ally true; adding compression to a competently designed encryption system has
little real impact on its security. We base this on three observations:
Unicity distance is irrelevant. The unicity distance of an encryption system
is the number of bits of ciphertext an attacker must see before he has enough
information that it is even theoretically possible to determine the key. Com-
pression algorithms, decreasing the redundancy of plaintexts, clearly increase
unicity distance. However, this is irrelevant for practical encryption systems,
where a single 128-bit key can be expected to encrypt millions of bytes of
plaintext.
Keysearch difficulty is only slightly increased. Since most export restric-
tions on key lengths have gone away, we can expect this to become less and
less relevant over time, as existing fielded algorithms with 40- and 56-bit
key lengths are replaced with triple-DES or AES. At any rate, for systems
with keys short enough for brute force searching, adding general-purpose
compression algorithms to the system seems like a singularly unhelpful way
to fix the problem. Standard compression algorithms usually include fixed
headers, and tend to be pretty predictable in their first few bytes of output.
It seems unlikely that adding such a compression algorithm, even with fixed
headers removed, increases the difficulty of keysearch by more than a factor
of 10 to 100. Switching to a stronger cipher is a far cheaper solution that
actually solves the problem.
Standard algorithms not that helpful. Compression with some additional
features to support security (such as a randomized initial state) can make
known-plaintext attacks against block ciphers much harder. However, off-
the-shelf compression algorithms provide little help against known-plaintext
attacks (since an attacker who knows the compression algorithm and the
plaintext knows the compressor output). And while chosen-plaintext attacks
can be made much harder by specially designed compression algorithms,
they are also made much harder, at far lower cost, by the use of standard
chaining modes.
In summary, compression algorithms add very little security to well-designed
encryption systems. Such systems use keys long enough to resist keysearch at-
tack and chaining modes that resist chosen-plaintext attack. The real reason for
using compression algorithms isn’t to increase security, but rather to save on
bandwidth and storage. As we will discuss below, this real advantage needs
to be balanced against a (mostly academic) risk of attacks on the system, such
as those described below, based on information leakage from the compression
algorithm.
3 The Compression Side-Channel and our Attack Model
In this section, we describe the compression side channel in some detail. We also
consider some situations in which this side channel might leak important data.
Any lossless compression algorithm must compress different messages by dif-
ferent amounts, and indeed must expand some possible messages. The compres-
sion side channel we consider in this paper is simply the different amount by
which different messages are compressed. When an unknown string S is com-
pressed, and an attacker sees the input and output sizes, he has almost certainly
learned only a very small amount about S. For almost any S, there will be a
large set of alternative messages of the same length, which would also have had
the same size of compressor output. Even so, some small amount of information
is leaked by even this minimal side-channel. For example, an attacker informed
that a file of 1MB had compressed to 1KB has learned that the original file must
have been extremely redundant.
Fortunately (for cryptanalysts, at least), compression algorithms such as
LZW and Zip Deflate adapt to the data they are fed. (The same is true of many
other compression algorithms, such as adaptive Markov coding and Burrows-
Wheeler coding, and even adaptive Huffman coding of symbols.) As a message
is processed, the state of the compressor is altered in a predictable way, so that
strings of symbols that have appeared earlier in the message will be encoded more
efficiently than strings of symbols that have not yet appeared in the message.
This allows an enormously more powerful side-channel when the unknown string
S is compressed with many known or chosen prefix strings, P0, P1, ..., Pn−1. Each
prefix can put the compressor into a different state, allowing new information
to be extracted from the compressor output size in each case. Similarly, if a
known or chosen set of suffixes, Q0, Q1, ..., Qn−1 is appended to the unknown
string S before compression, the compressor output sizes that result will each
carry a slightly different piece of information about S, because those suffixes
with many strings in common with S will compress better than other suffixes,
with fewer strings in common with S. This can allow an attacker to reconstruct
all of S with reasonably high probability, even when the compressor output sizes
for different prefixes or suffixes differ only by a few bytes. In this situation, it is
quite possible for an attacker to rule out all incorrect values of S given enough
input and output sizes for various prefixes, along with knowledge or control over
the prefix values. Further, an attacker can build information about S gradually,
refining a partial guess when the results of each successive compressor output
are seen.
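As a small illustration of why prefix control is so powerful, consider the following sketch using Python’s zlib (the secret string and the oracle are hypothetical, not from any deployed system): a prefix that repeats material from the unknown string makes the combined input compress measurably better.

```python
import zlib

# Hypothetical secret the attacker wants to probe.
secret = b"password=hunter2"

def oracle_len(prefix: bytes) -> int:
    """Length of compress(prefix || secret): all the attacker observes."""
    return len(zlib.compress(prefix + secret))

# A prefix that repeats material from the secret lets Deflate emit a
# short back-reference instead of sixteen literals, shrinking the output.
matching = oracle_len(b"password=hunter2")
unrelated = oracle_len(b"qwertyuiopasdfgh")  # same length, no overlap
assert matching < unrelated
print(matching, unrelated)
```

The gap of several bytes between the two oracle values is the information channel; the attacks below differ only in how systematically they mine it.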
A related idea can be used against a system that compresses and encrypts,
but does not strongly authenticate its messages. The effect of altering a few bytes
of plaintext (through a ciphertext alteration) will be very much dependent on
the state of the decompressor both before and after the altered plaintext bytes
are processed. The kind of control exerted over the compressor state is different,
but the impact is similar. However, we do not consider this class of attack in
this paper.
3.1 Assumptions and Models
We will make the following assumptions in the remainder of this paper:
1. Each message is processed by first compressing it, then encrypting it.
2. The attacker can learn the precise compressor output length from the ci-
phertext.
3. The attacker somehow knows the precise input length, or (in some cases) at
least the approximate input length.
In the sections that follow, we will consider three basic classes of attacks:
First, we will consider purely passive attacks, where the attacker simply observes
the ciphertext length and compression ratio, and learns information that should
have been concealed by the encryption mechanism. Second, we will consider
a kind of limited chosen-plaintext attack, in which the attacker attempts to
determine whether and approximately how often some string appears in a set
of messages. Third, we will consider a much more demanding kind of chosen-
plaintext attack, in which the attacker must make large numbers of chosen or
adaptive-chosen plaintext queries, in hopes of extracting a whole secret string.
4 Data Information Leakage
In this section, we consider purely passive attacks; ways that an attacker can
learn some information he should not be able to learn, by merely observing
the ciphertexts and corresponding compression ratio. One general property of
these attacks is that they are quite hard to avoid, without simply eliminating
compression from the system. However, it is also worth noting that most of these
attacks are not particularly devastating under most circumstances.
4.1 Highly Redundant Data
Consider a large file full of binary zeros or some other very repetitive contents.
Encrypting this under a block cipher in ECB-mode would reveal a lot of re-
dundancy; this is one reason why well-designed encryption systems use block
ciphers in one of the chaining modes. Using CBC- or CFB-mode, the encrypted
file would reveal nothing about the redundancy of the plaintext file.
Compressing before encryption changes this behavior. Now, a highly-redundant
file will compress extremely well. The very small ciphertext will be sufficient,
given knowledge of the original input size, to inform an attacker that the plain-
text was highly redundant.
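A quick sketch of this leak, using Python’s zlib as the compressor: an all-zero megabyte collapses to roughly a kilobyte, so the ciphertext length alone betrays the plaintext’s redundancy.

```python
import zlib

redundant = b"\x00" * 1_000_000   # e.g. an all-zero file
ciphertext_size = len(zlib.compress(redundant))

# With compress-then-encrypt, the ciphertext is roughly this size plus
# cipher overhead; an observer who knows the 1 MB input size learns the
# plaintext was extremely redundant.
print(ciphertext_size)            # on the order of 1 KB, not 1 MB
assert ciphertext_size < 5_000
```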
We note that this information leak is not likely to be very important for most
systems. However:
1. Chaining modes prevent this kind of information leakage, and this is, in fact,
one very good reason to use chaining modes with block ciphers.
2. In some situations, leaking the fact that highly-redundant data is being
transmitted may leak some very important information. (An example might
be a compressed, encrypted video feed from a surveillance camera: an attacker
could watch the bandwidth consumed by the feed, and determine whether
the motion of his assistant trying to get past the camera had been detected.)
4.2 Leaking File or Data Types
Different data formats compress at different ratios. A large file containing ASCII-
encoded English text will compress at a very different ratio from a large file
containing a Windows executable file. Given knowledge only of the compression
ratio, an attacker can thus infer something about the kind of data being trans-
mitted. This is not so trivial, and may be relevant in some special circumstances.
This may be resisted by encoding the data to be transmitted in some other
format, at the cost of losing some of the advantage of compression.
4.3 Compression Ratio as a Checksum
Consider a situation where an attacker knows that one of two different known
messages of equal length is to be sent. (For example, the two messages might
be something like “DEWEY DEFEATS TRUMAN!” or “TRUMAN DEFEATS
DEWEY!”.) If these two messages have different compression ratios, the attacker
can determine precisely which message was sent. (For this example, Python’s
zlib compresses “TRUMAN DEFEATS DEWEY!” slightly better than “DEWEY
DEFEATS TRUMAN!”)
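The checksum idea can be sketched with Python’s zlib as follows. Whether the two headlines actually compress to different lengths depends on the zlib version and settings, but whenever the fingerprints differ, the observed pair of input and output lengths eliminates a candidate:

```python
import zlib

def fingerprint(msg: bytes) -> tuple[int, int]:
    """(plaintext length, compressed length): all a passive attacker sees."""
    return len(msg), len(zlib.compress(msg))

candidates = [b"DEWEY DEFEATS TRUMAN!", b"TRUMAN DEFEATS DEWEY!"]
observed = fingerprint(candidates[1])   # what was actually sent

# The attacker keeps only the candidates consistent with the observation.
survivors = [m for m in candidates if fingerprint(m) == observed]
print(survivors)
```

The true message always survives this filter, so the attacker’s candidate set can only shrink, never lose the right answer.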
More generally, if the attacker can enumerate the set of possible input mes-
sages, and he knows the compression algorithm, he can use the length of the
input, plus the compression ratio, as a kind of checksum. This is a very straight-
forward instance of the side-channel; an attacker is able, by observing compres-
sion ratios, to rule out a subset of possible plaintexts.
4.4 Looped Input Streams
Sometimes, an input stream may be “looped,” so that after R bytes, the message
begins repeating. This is the sort of pattern that encryption should mask, and
without compression, using a standard chaining mode will mask it. However, if
the compression ratio is visible to an attacker, he will often be able to determine
whether or not the message is looping, and may sometimes be able to determine
its approximate period.
There are two ways the information can leak. First, if the period of the looping
is shorter than the “sliding window” of an LZ77-type compression algorithm, the
compression ratio will suddenly become very good. Second, if the period is longer
than the sliding window, the compression ratios will start precisely repeating.
(Using an LZW-type scheme will leave the compression ratios improving each
time through the repeated data, until the dictionary fills up.)
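A streaming sketch with Python’s zlib makes the first effect visible: once the loop body fits inside the 32 KB sliding window, the per-chunk output sizes collapse after the first pass. (The 1 KB loop body here is an illustrative choice.)

```python
import os
import zlib

comp = zlib.compressobj()
loop_body = os.urandom(1024)       # 1 KB loop body, well inside the window

sizes = []
for _ in range(4):                 # the stream repeats every 1 KB
    out = comp.compress(loop_body) + comp.flush(zlib.Z_SYNC_FLUSH)
    sizes.append(len(out))

# After the first pass the loop body sits in the sliding window, so
# later repetitions compress to almost nothing.
print(sizes)
assert sizes[2] < sizes[0] // 2
```

Z_SYNC_FLUSH forces the compressor to emit output at each chunk boundary without resetting its window, which is what lets an observer watch the ratio change mid-stream.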
5 String Presence Detection Attacks
The most widely used lossless compression algorithms adapt to the patterns in
their input, so that when those patterns are repeated, those repetitions can be
encoded very efficiently. This allows a whole class of attacks to learn whether
some string S is present within a sequence of compressed and encrypted mes-
sages, based on using either known input data (some instances where S is known
to have appeared in messages) or chosen input (where S may be appended to
some messages before they’re compressed and encrypted).
All the attacks in this section require knowledge or control of some part of
a set of messages, and generally also some knowledge of the kind of data being
sent. They also all require knowledge of either inputs or compressor outputs, or
in some cases, compression ratios.
5.1 Detecting a Document or Long String with Partial Chosen
Plaintext
The attacker wants to determine whether some long string S appears often in a
set of messages M0, M1, ..., MN−1.
The simplest attack is as follows:
1. The attacker gets the compressed, encrypted versions of all of the Mi. From
this he learns their compressed output lengths.
2. The attacker requests the compressed, encrypted versions of M′i = Mi||S,
for all Mi. That is, he requests the compressed and encrypted results of
appending S to each message.
3. The attacker determines the length of S after compression with the scheme
in use.
4. The attacker observes the differences |M′i| − |Mi| in compressed length. If
these values average substantially less than the expected length of S after
compression, it is very likely that S is present in many of these messages.
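The steps above can be sketched as follows, with Python’s zlib as the compressor; the messages, the probe string, and the 0.5 threshold are illustrative assumptions, not values from the attack description:

```python
import zlib

def clen(msg: bytes) -> int:
    return len(zlib.compress(msg))

def detect(messages: list[bytes], s: bytes) -> bool:
    """Does s appear in many of the messages? (partial chosen plaintext)"""
    s_alone = clen(s)          # expected cost of compressing s fresh
    deltas = [clen(m + s) - clen(m) for m in messages]
    # If appending s costs far less than compressing it fresh, its
    # substrings were already present in the messages.
    return sum(deltas) / len(deltas) < 0.5 * s_alone

s = b"Attack at dawn on the northern bridge"
containing = [b"report %d: " % i + s + b" (confirmed)" for i in range(8)]
unrelated = [b"weather is fine today, nothing to report %d" % i
             for i in range(8)]
print(detect(containing, s), detect(unrelated, s))
```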
5.2 Partial Known Input Attack
A much more demanding and complicated attack may be possible, given only
the leakage of some information from each of a set of messages. The attacker can
look for correlations between the appearance of substrings of S in the known part
of each message, and the compressed length of the message; based on this, he can
attempt to determine whether S appears often in those messages. This attack
is complicated by the fact that the appearance of substrings of S in the known
part of the message may be correlated with the presence of S in the message.
(Whether it is correlated or not requires more knowledge about how the messages
are being generated, and the specific substrings involved. For example, if S is
“global thermonuclear war”, the appearance of the substring “thermonuclear”
is almost certainly correlated with the appearance of S later in the message.)
A more useful version of an attack like this might be a case where several files
are being combined into an archive and compressed, and the attacker knows one
of the files. Assuming the other files aren’t chosen in some way that correlates
their contents with the contents of the known file, the attacker can safely run
the attack.
6 String Extraction Attacks
In this section, we consider ways an attacker might use the compression side
channel to extract some secret string from the compressor inputs. This kind
of attack requires rather special conditions, and so is much less practical than
the other attacks considered above. However, in some special situations, these
attacks could be made to work. More importantly, these attacks demonstrate a
separate path for attacking systems, despite the use of very strong encryption.
The general setting of these attacks is as follows: The system being attacked
has some secret string, S, which is of interest to the attacker. The attacker is
permitted to build a number of requested plaintexts, each using S, without ever
knowing S. For example, the attacker may choose a set of N prefixes, P0, P1, ..., PN−1,
and request N messages, where the ith message is Pi||S.
6.1 An Adaptive Chosen Input Attack
Our first attack is an adaptive chosen input attack. We make a guess about the
contents of the first few characters of the secret string, and make a set of queries
based on this guess. The output lengths of the results of these queries should be
smaller for correct guesses than for incorrect guesses.
We construct our queries in the form
Query = prefix + guess + filler + prefix + S
where
Query is the string which the target of the attack is convinced to compress and
encrypt.
prefix is a string that is known not to occur in S.
filler is another string known not to occur in S, and with little in common with
prefix.
S is the string to be recovered by the attacker.
The idea behind this attack is simple: Suppose the prefix is 8 characters
long, and the guess is another 4 characters long. A correct guess guarantees that
the query string contains a repeated 12-character substring; a good compression
algorithm (and particularly, a compression scheme based on a sliding window,
like Zip Deflate) will encode this more efficiently than queries with incorrect
guesses, which will contain a string with slightly less redundancy. When we have
a good guess, this attack can be iterated to guess another four digits, and so on,
until all of S has been guessed.
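The query construction above can be sketched with the Python Zlib package (the secret PIN, prefix, and filler below are illustrative stand-ins; in a real attack only the compressed length of the query would be observable):

```python
import zlib

# Illustrative values: an 8-character prefix and a 4-character guess, so a
# correct guess creates a repeated 12-character substring in the query.
S = b"4915082273316544"        # hypothetical 16-digit secret PIN
prefix = b"XXPREFIX"           # known not to occur in S
filler = b"qzjv-ww-qzjv"       # shares nothing with prefix or S

def query_len(guess: bytes) -> int:
    # The observable side channel: length of the compressed (then
    # encrypted) query; length-preserving encryption is assumed.
    return len(zlib.compress(prefix + guess + filler + prefix + S))

correct = query_len(S[:4])     # repeated 12-byte substring "XXPREFIX4915"
wrong = query_len(b"1234")     # only the 8-byte prefix repeats
# A correct guess compresses at least as well as an incorrect one.
```

Iterating this comparison over all candidate four-digit guesses, then over the next four digits given the winner, reproduces the adaptive attack described above.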
Experimental Results We implemented this attack using the Python Zlib
package, which provides access to the widely-used Zip Deflate compression al-
gorithm. The search string was a 16-digit PIN, and the guesses were four (and
later five) digits each. Our results were mixed: it was possible to find the correct
PIN using this attack, but we often would have to manually make the decision to
backtrack after a guess. There were several interesting complications that arose
in implementing the attack:
1. The compression algorithm is a variant of a sliding-window scheme, in which
it is not always guaranteed that the longest match in the window will be
used to encode a string. More importantly, this is a two-pass algorithm; the
encoding of strings within the sliding window is affected by later strings as
well as earlier ones, and this can change the output length enough to change
which next four digits appear to be the best match to S [Whi02].
2. Some guesses themselves compress very well. For example, the guess “0000”
compresses quite well the first time it occurs.
3. The actual “signal” between two close guesses (e.g., “1234” and “1235”) is
very close, and is often swamped by the “noise” described above.
4. To make the attack work reasonably well, it is necessary to make each piece
of the string guessed pretty long. For our implementation, five digits worked
reasonably well.
5. Some backtracking is usually necessary, and the attack doesn’t always yield
a correct solution.
6. It turns out to also be helpful to add some padding at the end of the string,
to keep the processing of the digits uniform.
All of these problems appear to be pretty easy to solve given more queries
and more work. However, we dealt with them more directly by developing a
different attack—one that requires only chosen-plaintext access, not adaptive
chosen plaintext access.
6.2 A Chosen Input Attack
The adaptive chosen input attack seems so restrictive that it is hard to see how
it might be extended to a simple chosen or known plaintext attack. However,
we can use a related, but different approach, which gives us a straightforward
chosen input attack.
The attack works in two phases:
1. Generate a list of all possible subsequences of the string S, and use the
compression side-channel to put them in approximate order of likelihood of
appearing in S.
2. Piece together the subsequences that fit end-to-end, and use this to recon-
struct a set of likely candidate values for S.
The subsequences can be tested in the simplest sense by making queries of
the form
Query = Guess + S
However, to avoid interaction between the guess and the first characters of
S, it is useful to include some filler between them.
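Phase 1 and the filler trick can be sketched as follows (the secret string, padding strings, and guesses are illustrative; summing lengths over several different paddings helps average out compressor noise):

```python
import zlib

S = b"4915082273316544"                    # hypothetical secret string
pads = [b"qz", b"qzw", b"qzwv", b"qzwvm"]  # filler strings of varying length

def score(guess: bytes) -> int:
    # Sum the compressed lengths of several queries for the same guess;
    # smaller totals suggest the guess occurs as a substring of S.
    return sum(len(zlib.compress(p + guess + p[::-1] + S)) for p in pads)

in_s = score(b"08227")       # "08227" does occur inside S
not_in_s = score(b"13579")   # no 3-byte-or-longer overlap with S
# Ranking all candidate subsequences by score gives the phase-1 ordering;
# phase 2 then pieces the best-scoring fragments together end-to-end.
```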
Experimental Results We were able to implement this attack, with about a
70% success rate for pseudorandomly-generated strings S of 16 digits each, using
the Python Zlib package. The attack generates a list of the 20 top candidates for S, and
we counted it as a success if any of those 20 candidates was S.
There were several tricks we discovered in implementing this attack:
1. In building the queries, it was helpful to generate padding strings before
the guessed subsequence, between the guess and the string S, and after the
string.
2. It was very helpful to generate several different padding strings, of different
lengths, and to sum the lengths of the compressed strings resulting from
several queries of the same guess. This tended to average out some of the
“noise” of the compression algorithm.
3. There are pathological strings that cause the attack to fail. For example,
the string “0000000123000000” will tend to end up with guesses that piece
together instances of “00000”.
7 Caveats and Countermeasures
The attacks described above make a number of simplifying assumptions. In this
section, we will discuss some of those assumptions, and the implications for our
attacks when the assumptions turn out to be false. We will also consider some
possible countermeasures.
7.1 Obscuring the Compressor Input Size
The precise size of the input may be obscured in some cases. Naturally, some
kind of information about relative compression ratios is necessary for the attack
to work. However, approximate input information will often be good enough, as
when the compression ratio is being used as the side channel. An approximate
input size will lead to an approximate compression ratio, but for any reasonably
large input, the difference between the approximate and exact compression ratios
will be too small to make any difference for the attack.
One natural way for an attacker to learn approximate input size is for the
process generating the input to the compressor to have either some known con-
stant rate of generating input, or to have its operations be visible (e.g., because
it must wait for the next input, which may be observed, before generating the
next output).
7.2 Obscuring the Compressor Output Size
Some encryption modes may automatically pad the compressor output to the
next full block for the block cipher being used. Others may append random
padding to resist this or other attacks. For example, some standard ways of
doing CBC-mode encryption include padding to the next full cipher block, and
making the last byte of the padding represent the total number of bytes of
padding used. This gives the attacker a function of the compressor output size,
⌈(len + 1)/blocksize⌉ × blocksize. Such schemes may slightly increase the amount of work
done during our attacks, but don't really block any of the attacks.
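The attacker's view of the padded output can be written as a small helper (a sketch assuming this CBC-style padding convention and an illustrative 16-byte block):

```python
def padded_len(compressed_len: int, blocksize: int = 16) -> int:
    # At least one padding byte is always added (it stores the padding
    # count), then the total is rounded up to a full cipher block:
    # ceil((len + 1) / blocksize) * blocksize.
    return -(-(compressed_len + 1) // blocksize) * blocksize
```

Two compressor outputs produce the same ciphertext length only when they fall into the same block-sized bucket, which is why such padding merely coarsens, rather than closes, the side channel.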
A more elaborate countermeasure is possible. A system designer may decide
to reduce the possible leakage through the compressor to one bit per message,
as follows:
1. Decide on a compression ratio that is acceptable for nearly all messages, and
is also attainable for nearly all messages.
2. Send the uncompressed version of any messages that don’t attain the desired
compression ratio.
3. Pad out the compressor output of messages that get at least the desired com-
pression ratio, so that the message effectively gets the desired compression
ratio.
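The three steps can be sketched as follows (the 0.8 target ratio, the zlib compressor, and zero-byte padding are all illustrative assumptions, not a prescribed design):

```python
import zlib

TARGET_RATIO = 0.8  # step 1: acceptable and attainable for nearly all messages

def encode(msg: bytes) -> bytes:
    target = max(1, int(len(msg) * TARGET_RATIO))
    comp = zlib.compress(msg)
    if len(comp) > target:
        # Step 2: message misses the target ratio; send it uncompressed.
        return msg
    # Step 3: pad the compressor output so the message appears to achieve
    # exactly the target ratio, leaking only "compressed or not".
    return comp + b"\x00" * (target - len(comp))
```

A decoder needs a one-bit flag per message saying whether the body is compressed; that flag is exactly the one bit of leakage the design accepts.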
This is an effective countermeasure against some of our attacks (for example,
it makes it quite hard to determine which of several reasonably compressible
file types has been sent), but it does so at the cost of losing some compression
performance. Against our chosen-input attacks, this adds a moderate amount of
difficulty, but doesn't provide a complete defense.
7.3 Obscuring the Compressor Internal State
It is possible to obscure the internal state of the compressor, in a number of
simple ways, including initializing the compressor in a random state, or inserting
occasional random blocks of text during the compression operation. In either
case, this can cause problems with some of our attacks, because of the lack of
precise information about the state of the compressor when an unknown string is
being processed. General compression ratios are unlikely to be affected strongly
by such countermeasures, however, so the general side channel remains open.
7.4 Preprocessing the Text
The text may be preprocessed in such a way that compression is affected in a
somewhat unpredictable way. For example, it is easy to design a very weak stream
cipher, which generates a keystream with extremely low Hamming weight. Ap-
plying this kind of stream cipher to the input before compression would degrade
the compression slightly, in a way not known ahead of time by any attacker.
By allowing the Hamming weight of the keystream to be tunable, we could get
tunable degradation to the compression.
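A toy version of such a preprocessor might look like this (the keystream generator, the flip probability p, and the sample message are all illustrative; a real design would derive the keystream from a keyed PRF rather than Python's random module):

```python
import random
import zlib

def sparse_keystream(n: int, p: float, seed: int) -> bytes:
    # Very weak "cipher": most keystream bytes are zero; with probability p
    # a byte carries a single random set bit, so the XOR below flips only
    # scattered individual bits of the plaintext.
    rng = random.Random(seed)
    return bytes((1 << rng.randrange(8)) if rng.random() < p else 0
                 for _ in range(n))

def preprocess(msg: bytes, seed: int, p: float = 0.05) -> bytes:
    ks = sparse_keystream(len(msg), p, seed)
    return bytes(a ^ b for a, b in zip(msg, ks))

msg = b"the quick brown fox " * 50
plain_len = len(zlib.compress(msg))
masked_len = len(zlib.compress(preprocess(msg, seed=1)))
# The scattered bit flips break long matches, degrading compression by an
# amount the attacker cannot predict; raising p raises the degradation.
```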
8 Conclusions
In this paper, we have described a side-channel in widely-used lossless compression
algorithms, which permits an attacker to learn some information about the
compressor input, based only on the size of the compressor output and whatever
additional information about other parts of the input may be available.
We have discussed only a small subset of the available compression algo-
rithms, and only one possible side channel (compression ratio). Some interesting
directions for future research include:
1. Timing side-channels for compression algorithms.
2. Attacking other lossless compression algorithms, such as adaptive Huffman
encoding, adaptive Markov coders, and Burrows-Wheeler block sorting (with
move-to-front and Huffman or Shannon-Fano coding) with this side channel.
Adaptive Huffman and Markov coders can be attacked using techniques very
similar to the ones described above. Burrows-Wheeler block sorting appears
to require rather different techniques, though the same side channels clearly
exist and can be exploited.
3. Attacking lossy compression algorithms for image, sound, and other data
with this side channel.
4. Attacking lossy image compression by trying to use disclosed parts of a
compressed image to learn undisclosed parts of the same image, as might be
useful for redacted scanned documents.
5. Reaction attacks against decompressors, such as might be useful when a sys-
tem cryptographically authenticates plaintext, then compresses and encrypts
it. This might lead either to software faults (a change in ciphertext leading
to a buffer overrun, for example) or to more general leakage of information
about the encryption algorithm or plaintext.
9 Acknowledgements
The author wishes to thank Paul Crowley, Niels Ferguson, Andrew Fernandes,
Pieter Kasselman, Yoshi Kohno, Ulrich Kuehn, Susan Langford, Rene Struik,
Ashok Vadekar, David Wagner, Doug Whiting, and the many other people who
made helpful comments after seeing these results presented at Certicom, at the
Crypto 2001 Rump Session, and at FSE2002. The author also wishes to thank
the anonymous referees for several useful suggestions that improved the paper.
References
[BCL02] D. Benedetto, E. Caglioti, and V. Loreto, "Language Trees and Zipping," Physical
Review Letters, 28 January 2002.
[CCF01a] Usenet group comp.compression FAQ file, available at
http://www.faqs.org/faqs/compression-faq/, 2001.
[KJY00] P. Kocher, J. Jaffe, and B. Jun, "Differential Power Analysis," in Advances in
Cryptology – CRYPTO '99, Springer-Verlag, 1999.
[Koc96] P. Kocher, "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS,
and Other Systems," in Advances in Cryptology – CRYPTO '96, Springer-Verlag, 1996.
[KSHW00] J. Kelsey, B. Schneier, D. Wagner, and C. Hall, "Side Channel Cryptanalysis
of Product Ciphers," Journal of Computer Security, vol. 8, 2000.
[Sal97a] D. Salomon, Data Compression: The Complete Reference, Springer-Verlag, 1997.
[Whi02] D. Whiting, personal communication, 2002.