Comparative analysis of efficiency of Fibonacci random number generator algorithm and Gaussian random number generator algorithm in a cryptographic system.
This document compares the efficiency of the Fibonacci random number generator and Gaussian random number generator algorithms in cryptographic systems. It discusses why random numbers are important for cryptography and describes the statistical tests used to analyze the randomness of numbers generated by each algorithm. The research concluded that the Fibonacci random number generator performed better on the chi-square and Kolmogorov-Smirnov tests than the Gaussian generator, making it more efficient for use in cryptographic systems.
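As an illustration of the testing methodology (not the paper's own code), the sketch below implements one common realization of a Fibonacci generator, a lagged Fibonacci generator with assumed lags (24, 55), and applies a chi-square uniformity test at an assumed 0.05 significance level:

```python
import random

def lagged_fibonacci(n, j=24, k=55, m=2**32, seed=12345):
    """x_i = (x_{i-j} + x_{i-k}) mod m, scaled into [0, 1)."""
    rng = random.Random(seed)            # seeds the initial lag table
    state = [rng.randrange(m) for _ in range(k)]
    out = []
    for _ in range(n):
        x = (state[-j] + state[-k]) % m
        state.append(x)
        state.pop(0)
        out.append(x / m)
    return out

samples = lagged_fibonacci(100_000)
counts = [0] * 10                        # 10 equal-width bins over [0, 1)
for s in samples:
    counts[int(s * 10)] += 1
expected = len(samples) / 10
chi2 = sum((c - expected) ** 2 / expected for c in counts)
# Critical value for 9 degrees of freedom at the 0.05 level is 16.92.
print(f"chi-square = {chi2:.2f}, pass = {chi2 < 16.92}")
```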
IRJET- Embedding Randomness into Symmetric Key Encryption using Genetic Algor... (IRJET Journal)
This document proposes a method for improving the security of symmetric key encryption by introducing randomness into the encryption process using a genetic algorithm. The genetic algorithm is used to generate random keys that change continuously. This results in different ciphertexts being generated for the same plaintext and key each time the algorithm is run. Standard attacks like brute force are made more difficult by this randomness. The document describes how the genetic algorithm incorporates mutation and crossover to generate random keys. It then explains a three stage encryption process where the plaintext is encrypted using a randomly generated key from the genetic algorithm in the first stage. The random key is embedded within the ciphertext to allow decryption. The method aims to improve security for applications like IoT without significantly increasing computational complexity.
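A minimal sketch of the key-generation idea, with an invented fitness function (the paper's actual fitness criterion is not given here); only the crossover and mutation operators follow the description above:

```python
import random

KEY_BITS = 128

def fitness(key):
    # Prefer keys whose 0/1 balance is near half (a crude randomness proxy;
    # this is an assumption, not the paper's fitness function).
    ones = bin(key).count("1")
    return -abs(ones - KEY_BITS // 2)

def crossover(a, b):
    point = random.randrange(1, KEY_BITS)         # single-point crossover
    mask = (1 << point) - 1
    return (a & mask) | (b & ~mask & ((1 << KEY_BITS) - 1))

def mutate(key, rate=0.02):
    for i in range(KEY_BITS):
        if random.random() < rate:
            key ^= 1 << i                          # flip bit i
    return key

population = [random.getrandbits(KEY_BITS) for _ in range(20)]
for _ in range(50):                                # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(f"evolved key: {max(population, key=fitness):032x}")
```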
Application of Game Theory to Select the Most Suitable Cryptographic Algorith... (AJSERJournal)
The cryptographic systems used in an organization rely on a fixed cryptographic algorithm and the specific procedures of that system. Because the algorithm is fixed, the probability of failure or success of such systems depends on human resources, hardware resources, and the work environment, so the probability of success or failure can be taken as 50%. Moreover, such systems offer no alternative algorithms based on the needs of the user. This research addresses the question of how multiple asymmetric algorithms can be used in a cryptographic system so that the opponent cannot defeat the algorithm. Algorithms are selected based on environmental parameters and on the likelihood of the algorithm being broken by the opponent. This is done using game theory: the problem is modeled as a game, and the resulting model is solved with the Gambit software, a tool specialized for game theory. The results of this study demonstrate the ease of choosing an algorithm based on need and with regard to the opponent's attack, and show how to reduce the likelihood of the algorithm being broken.
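As a toy illustration of such a model (all algorithm names and payoffs below are invented, and the paper itself solves its model with Gambit rather than hand-rolled code), a pure-strategy equilibrium can be found by best-response checks over a payoff matrix:

```python
algorithms = ["RSA", "ElGamal", "ECC"]
attacks = ["factoring", "discrete-log", "side-channel"]
# defender_payoff[i][j]: defender's payoff when algorithm i faces attack j
# (zero-sum: the attacker's payoff is the negation).
defender_payoff = [[4, 2, 5],
                   [3, 1, 2],
                   [6, 0, 3]]

for i in range(len(algorithms)):
    for j in range(len(attacks)):
        # Defender best-responds by maximizing over the column,
        # attacker by minimizing the defender's payoff over the row.
        best_defence = all(defender_payoff[i][j] >= defender_payoff[k][j]
                           for k in range(len(algorithms)))
        best_attack = all(defender_payoff[i][j] <= defender_payoff[i][l]
                          for l in range(len(attacks)))
        if best_defence and best_attack:
            print(f"pure equilibrium: choose {algorithms[i]} against {attacks[j]}")
```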
Security evaluations of electronic cash (e-cash) schemes usually produce an abstract result in the form of a logical proof. This paper proposes a new method of security evaluation that produces a quantitative result. The evaluation is done by analyzing the protocol in the scheme using the Markov chain technique. This method calculates the probability of an attack that could be executed perfectly in the scheme's protocol. As proof of the effectiveness of our evaluation method, we evaluated the security of Chaum's untraceable electronic cash scheme. The result of our evaluation was compared to the evaluation result from the pi-calculus method. Both methods produced comparable results; thus, both could be used as alternative methods for evaluating e-cash security.
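The core calculation can be sketched with standard absorbing-Markov-chain machinery; the states and transition probabilities below are hypothetical placeholders, not Chaum's actual protocol:

```python
import numpy as np

# States: 0 = start, 1 = intermediate step, 2 = attack succeeds (absorbing),
#         3 = attack detected/fails (absorbing).
P = np.array([
    [0.0, 0.6, 0.1, 0.3],
    [0.0, 0.0, 0.2, 0.8],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
Q = P[:2, :2]                        # transitions among transient states
R = P[:2, 2:]                        # transient -> absorbing transitions
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
B = N @ R                            # absorption probabilities
print(f"P(attack succeeds from start) = {B[0, 0]:.3f}")
```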
This tool analyzes the secrecy and performance of symmetric key algorithms. It calculates secrecy based on Shannon's theories of cipher secrecy, giving a numerical value to represent secrecy level, with higher values indicating higher secrecy. It also calculates encryption time to evaluate performance. The tool was tested on common algorithms like AES, 3DES, DES, RC4, RC2. It reliably sorted the algorithms by secrecy and performance, consistent with established understandings. The tool is intended for researchers and engineers to evaluate new symmetric key algorithms by extending the code.
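The tool's exact secrecy formula is not reproduced here; as a hedged sketch of the general approach, one common numeric proxy is the Shannon entropy of the ciphertext byte distribution, paired with an encryption timing measurement (assumes the `cryptography` package is installed):

```python
import math, os, time
from collections import Counter
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"attack at dawn " * 4096
key, iv = os.urandom(32), os.urandom(16)
cipher = Cipher(algorithms.AES(key), modes.CTR(iv))

start = time.perf_counter()
ciphertext = cipher.encryptor().update(plaintext)
elapsed = time.perf_counter() - start

print(f"entropy = {byte_entropy(ciphertext):.3f} bits/byte (max 8.0)")
print(f"encryption time = {elapsed * 1000:.2f} ms")
```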
A Review on Various Methods of Cryptography for Cyber Security (rahulmonikasharma)
In today's world of digital communication networks, the privacy and security of transmitted data have become a basic necessity for communication. Data security is the science and study of techniques for securing data in computer and communication systems against unknown users, disclosure, and modification. Cyber security issues play a vital role in the move towards the digital information age. Therefore, encryption and decryption systems have been implemented for protecting information. Internet users are increasing rapidly day by day, which brings with it a rise in cyber-criminals. The security of not only a single system but of entire systems is ensured by the task of network security and controlled by the network administrator. In this paper, an attempt has been made to review the various methods of cryptography and how these methods help to secure data from unauthenticated users. This paper primarily focuses on cyber security and cryptographic concepts, and also discusses the various attacks and cryptographic algorithms used in different applications of cyber security.
Design and Implementation of New Encryption algorithm to Enhance Performance... (IOSR Journals)
This document summarizes a research paper that proposes a new encryption algorithm to improve performance parameters. The algorithm is divided into two phases. Phase 1 involves reversing, swapping, circularly shifting bits of the plaintext and XORing with the key. Phase 2 divides the output into blocks, then recombines the left bits of each block. The paper analyzes avalanche effect and execution time of the proposed algorithm compared to existing algorithms to evaluate its performance. The results show better performance than existing algorithms.
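A minimal sketch of phase-1-style operations on byte-sized blocks; the paper's exact block width, rotation amount, and bit layout are assumptions here:

```python
def reverse_bits(b):
    """Reverse the 8 bits of a byte."""
    return int(f"{b:08b}"[::-1], 2)

def swap_nibbles(b):
    """Swap the high and low 4-bit halves."""
    return ((b << 4) | (b >> 4)) & 0xFF

def rotl(b, n=3):
    """Circular left shift by n bits (n=3 is an assumed amount)."""
    return ((b << n) | (b >> (8 - n))) & 0xFF

def phase1(plaintext: bytes, key: bytes) -> bytes:
    # Reverse, swap, circular-shift each byte, then XOR with the key stream.
    return bytes(rotl(swap_nibbles(reverse_bits(p))) ^ key[i % len(key)]
                 for i, p in enumerate(plaintext))

print(phase1(b"hello", b"k").hex())
```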
This document is a project report submitted by four students - Anil Shrestha, Bijay Sahani, Bimal Shrestha, and Deshbhakta Khanal - to the Department of Electronics and Computer Engineering at Tribhuvan University in partial fulfillment of the requirements for a Bachelor's degree in Computer Engineering. The report details the development of a web application called "Tweezer" to perform sentiment analysis on tweets in order to determine public sentiment towards various products, services, or personalities. Literature on previous work related to sentiment analysis, especially on social media data like tweets, is also reviewed in the report.
IRJET- Analysis and Detection of E-Mail Phishing using Pyspark (IRJET Journal)
This document proposes a method to analyze and detect phishing emails using PySpark. It involves using text analysis and link analysis techniques such as applying a Naive Bayes classifier to email text and checking links using the VirusTotal API. The method aims to provide more accurate phishing detection compared to existing approaches. It discusses related work on phishing email detection using machine learning and analyzes the proposed system's design and implementation steps, which include data preprocessing, feature extraction, classification using Naive Bayes, and link analysis.
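A hedged sketch of the text-analysis stage in PySpark, assuming a labeled DataFrame (1 = phishing, 0 = legitimate); the link-analysis stage via the VirusTotal API is omitted here:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import NaiveBayes

spark = SparkSession.builder.appName("phishing-detect").getOrCreate()
df = spark.createDataFrame([
    ("verify your account now at http://bad.example", 1.0),
    ("meeting notes attached for tomorrow", 0.0),
], ["text", "label"])

# Tokenize -> hashed term frequencies -> TF-IDF -> multinomial Naive Bayes.
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf"),
    IDF(inputCol="tf", outputCol="features"),
    NaiveBayes(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)
model.transform(df).select("text", "prediction").show(truncate=False)
```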
PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERA... (paperpublications3)
Abstract: Nowadays, verifying the result of remote computation plays a crucial role in addressing the issue of trust. Outsourced data collections come from multiple data sources; to diagnose the originator of errors, each data source is allotted a unique secret key, which requires inner product verification to be performed under any two parties' different keys. The proposed methods outperform the AISM technique by minimizing the running time. In the multi-key setting, with different secret keys given out, multiple data sources can upload their data streams along with their respective verifiable homomorphic tags. Three novel join techniques are considered, depending on authenticated data structure (ADS) availability: (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute; (ii) Authenticated Index Merge Join (AIM), which requires an ADS (on the join attribute) for both relations; and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. The client is allowed to choose any portion of the data streams for queries. The communication between the client and server is independent of input size. The inner product evaluation can be performed by any two sources, and the result can be verified using the corresponding tag.
Keywords: Computation of outsourcing, Data Stream, Multiple Key, Homomorphic encryption.
Title: PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERATED MULTIPLE KEYS
Author: C. NISHA MALAR, M. S. BONSHIA BINU
ISSN 2350-1049
International Journal of Recent Research in Interdisciplinary Sciences (IJRRIS)
Paper Publications
This document summarizes security definitions for searchable symmetric encryption (SSE) schemes. It reviews the indistinguishability and semantic security game definitions, noting that attacks have succeeded against published schemes. It then proposes a new security game definition against distribution-based query recovery attacks, to better capture practical adversary capabilities. The goal is to define security in a way that implies the current indistinguishability and semantic security definitions.
Adaptive key generation algorithm based on software engineering methodology (IJECEIAES)
The document presents an adaptive key generation algorithm based on software engineering techniques. The algorithm uses self-checking processes to detect faults in generated keys and ensure randomness. It generates 128-bit keys using a shift register and SIGABA technique. Keys are tested against NIST standards and regenerated if they fail. Case studies show the algorithm effectively produces random keys that pass the tests. The algorithm adopts software engineering processes to reliably generate secure keys resistant to attacks.
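A hedged sketch of the generate-test-regenerate loop, using a 128-bit Fibonacci LFSR (taps assumed from a standard maximal-length table) and NIST's monobit test as the self-check; the SIGABA stage and the full NIST battery are not reproduced:

```python
import math, secrets

def lfsr_bits(seed: int, n: int, taps=(128, 126, 101, 99)):
    """Fibonacci LFSR over a 128-bit register with the given tap positions."""
    state = seed | 1                    # avoid the all-zero lockup state
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & ((1 << 128) - 1)
        yield state & 1

def monobit_pass(bits, alpha=0.01):
    """NIST SP 800-22 frequency (monobit) test."""
    s = sum(1 if b else -1 for b in bits)
    p = math.erfc(abs(s) / math.sqrt(2 * len(bits)))
    return p >= alpha

while True:
    key_bits = list(lfsr_bits(secrets.randbits(128), 128))
    if monobit_pass(key_bits):          # regenerate the key on failure
        break
key = int("".join(map(str, key_bits)), 2)
print(f"key = {key:032x}")
```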
This document summarizes a dissertation submitted for the degree of Bachelor of Technology in Computer Science and Engineering. The dissertation analyzes sentiment of mobile reviews using supervised learning methods like Naive Bayes, Bag of Words, and Support Vector Machine. Five students conducted the research under the guidance of an internal guide. The document includes sections on introduction, literature survey of models used, system analysis and design including software and hardware requirements, implementation details, testing strategies and results. Screenshots of the three supervised learning methods are also provided.
This document discusses using sentiment analysis to predict project performance by analyzing language in project reports and communications. It proposes focusing the analysis on select correspondence between key project members, periodic structured reports containing issues/risks, and narrative management reports. Conducting a narrow sentiment analysis of reliable, high-confidence data sources from within the project domain can improve predictive capabilities over broad analyses by increasing the signal-to-noise ratio and computational efficiency. The meaning of words can depend on context, so sentiment analysis may need to consider the applicable contexts more narrowly when including a broader range of project text.
IRJET- Cancelable Biometric based Key Generation for Symmetric Cryptography: ... (IRJET Journal)
This document summarizes various methods for generating cryptographic keys from biometric traits. It discusses generating keys from fingerprints, iris images, and faces. For fingerprints, minutiae points are extracted from images and used to generate binary strings as keys. For iris images, Scale-Invariant Feature Transform is used to extract features, which are then encrypted using a bio-chaotic algorithm and quantum cryptography to generate keys. Face biometrics employs an entropy-based extraction and Reed-Solomon error correction to generate deterministic bit sequences as keys of suitable length for AES encryption. The document concludes that incorporating biometrics with cryptography provides stronger security by linking keys to users' biometric traits, eliminating the need to store or remember keys separately.
A Probabilistic Approach towards the Prevention of Error Propagation Effect o... (IDES Editor)
The error propagation effect of the Advanced Encryption Standard (AES) is a great research challenge: AES suffers from error propagation as a major limitation. In the literature, several studies have been made of this issue and several techniques have been suggested to tackle the effect. Two methods are available: the Redundancy-Based Technique and the Bit-Based Parity Technique. The first has the significant advantage over the second of correcting any error on a definite term, but at the cost of a higher level of overhead and hence lower processing speed. In this paper we propose a probabilistic technique to combat the error propagation effect that guarantees secure communication.
Nowadays, data analysis centers play a vital role in producing results that benefit society, such as awareness of new disease outbreaks, the geographical areas affected by a disease, and which age groups are most affected by it. The approach for protecting individuals' privacy from attackers is well known as anonymization. Anonymization in this context means hiding information in such a way that an illegitimate user cannot infer anything from it, while a legitimate user, such as an analyst, still gets sufficient information. That is, anonymization is stated in terms of security and information loss. There are different techniques used for anonymization. In this review, different anonymization techniques and their disadvantages are discussed. The main goal of all such anonymization is low information loss and better security, although providing 100 percent security and 100 percent data utility at once is not possible for any system, as one of them is always compromised to some degree.
Randomness evaluation framework of cryptographic algorithms (ijcisjournal)
Nowadays, computer systems are developing very rapidly and becoming more and more complex, which leads to the necessity of providing security for them. This paper presents software for testing and evaluating cryptographic algorithms. When evaluating block and stream ciphers, one of the most basic properties expected of them is to pass statistical randomness testing, demonstrating in this way their suitability to serve as random number generators. The primary goal of this paper is to propose a new framework to evaluate the randomness of cryptographic algorithms: based only on a .dll file that offers access to the encryption function, the decryption function, and the key schedule function of the cipher to be tested (block cipher or stream cipher), the application evaluates the randomness and provides an interpretation of the results. For this, all nine tests used for the evaluation of AES candidate block ciphers and three NIST statistical tests are applied to the algorithm under test. In this paper, we evaluate the Tiny Encryption Algorithm (block cipher), Camellia (block cipher), and LEX (stream cipher) to determine whether they pass statistical randomness testing.
A model to find the agent responsible for data leakage (eSAT Journals)
Abstract: In this research paper, we implement a model to find the agent responsible for data leakage. Data leakage is a type of risk: a distributor often sends important data to two or more agents, but the information is sometimes disclosed and found in an unauthorized place or with an unauthorized person. Important information can be distributed in multiple ways, e.g. e-mail, web sites, FTP, databases, disks, and spreadsheets. For this reason, accessing information in a safe way has become a new topic of research, and identifying leakages has become an important part of it. In this work we implement a system for distributing information to agents. In this method we add fake objects to the original data distributed to an agent in such a way that the chances of finding a leakage are improved. If an agent sends this sensitive data to an unauthorized person, the distributor receives a data-leak SMS, after which the distributor can find the guilty agent who leaked the data. Keywords: Significant data, fake data, guilty agent.
This document presents a model for detecting the agent responsible for data leakage. It discusses adding fake objects to distributed data in order to identify the source if a leakage occurs. The model is implemented using C# and SQL Server. When an agent requests data, the distributor sends the original data along with randomly allocated fake objects. If the data is leaked, the distributor can analyze the fake objects to determine the guilty agent. An algorithm is provided and screenshots show modules for login, data sharing, and detecting the guilty agent using a probability calculation. The model aims to overcome limitations of existing watermarking techniques for data leakage detection.
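The paper's exact probability formula is not shown here; the sketch below follows a common guilt-model formulation in the data-leakage literature, where each leaked object was either guessed independently with probability p or came from one of the agents holding it, with equal likelihood:

```python
def guilt_probability(leaked, agent_data, agent, p_guess=0.2):
    """P(agent is guilty | leaked set), under the independence assumptions above."""
    prob_not_guilty = 1.0
    for obj in leaked:
        holders = [a for a, data in agent_data.items() if obj in data]
        if agent in holders:
            # Chance this object implicates the agent rather than a guess
            # or another holder of the same object.
            prob_not_guilty *= 1 - (1 - p_guess) / len(holders)
    return 1 - prob_not_guilty

# "f7" and "f9" play the role of fake objects allocated per agent.
agent_data = {"A": {1, 2, 3, "f7"}, "B": {2, 4, "f9"}}
leaked = {2, 3, "f7"}
for a in agent_data:
    print(f"P({a} guilty) = {guilt_probability(leaked, agent_data, a):.2f}")
```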
Symmetric-Key Based Privacy-Preserving Scheme For Mining Support Counts (acijjournal)
In this paper we study the problem of mining support counts using symmetric-key cryptography, which is more efficient than previous work. Consider a scenario in which each user has an opinion (like or dislike) of a specified product, and a third party wants to obtain the popularity of this product. We design a much more efficient privacy-preserving scheme for users to prevent the loss of their personal interests. Unlike most previous works, we do not use any exponentiation or modular-exponentiation algorithms; instead we provide a symmetric-key based method that can also protect the information. Specifically, our protocol uses a third party that generates a number of matrices as each user's key. Each user then uses these keys to encrypt their data, from which the support counts of a given pattern can be obtained more efficiently.
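The matrix-based protocol itself is not reproduced here; the sketch below illustrates the same goal, private aggregation of support counts without modular exponentiation, using the simpler, well-known additive-masking technique:

```python
import secrets

M = 2**32                      # arithmetic modulus
votes = [1, 0, 1, 1, 0]        # each user's private like(1)/dislike(0) bit

# Third party: one random mask per user, remembering only their total.
masks = [secrets.randbelow(M) for _ in votes]
mask_total = sum(masks) % M

# Each user uploads their masked vote; individual votes stay hidden.
masked = [(v + m) % M for v, m in zip(votes, masks)]

# Aggregator: sum of masked votes minus the mask total = support count.
support = (sum(masked) - mask_total) % M
print(f"support count = {support}")    # -> 3
```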
This document provides a summary of a systematic review of authentication techniques for smart grids. The review analyzed 27 papers on smart grid authentication approaches and their effectiveness in mitigating certain attacks. The review found that password-based authentication is not optimal for smart grids as it does not provide mutual authentication. Certificate-less authentication was identified as an appropriate approach. Time-valid one-time signature schemes were also analyzed and found to be a theoretically optimal solution, but more research is needed. The review aims to identify optimized authentication solutions for smart grid components and analyze their effectiveness against different attack types to inform the design of improved authentication approaches.
This document summarizes a research paper on identifying authorized users based on typing speed comparison. The paper proposes using a user's typing speed and patterns as a behavioral biometric for authentication. It analyzes keystroke dynamics data such as dwell times and flight times between keys. A neural network classifier is used to model users' typing behaviors based on monograph and digraph mappings. The proposed framework achieved reduced false positive and negative rates compared to existing password-based authentication methods. It provides a simple, low-cost way to increase computer security without additional hardware or training for users.
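A minimal sketch of the feature-extraction step, assuming key events recorded as (key, press time, release time) tuples in seconds; these dwell, flight, and digraph features would then feed the neural-network classifier:

```python
events = [("p", 0.000, 0.090), ("a", 0.160, 0.240),
          ("s", 0.310, 0.395), ("s", 0.470, 0.550)]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]
# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
# Digraph latency: press-to-press interval for each key pair.
digraphs = [(events[i][0] + events[i + 1][0],
             events[i + 1][1] - events[i][1]) for i in range(len(events) - 1)]

print("dwell times:", [f"{d:.3f}" for d in dwell])
print("flight times:", [f"{f:.3f}" for f in flight])
print("digraph latencies:", digraphs)
```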
This document summarizes research on sentiment analysis of Twitter data. It discusses how sentiment analysis can classify tweets as positive, negative, or neutral. It reviews different techniques for sentiment analysis, including machine learning approaches like Naive Bayes classifiers and lexicon-based approaches. The document also describes prior studies that have used sentiment analysis techniques to predict security attacks based on Twitter sentiment and explore improvements in classification accuracy. In general, the document outlines common methods for analyzing sentiment in social media data and highlights past applications of the analysis.
Authentication techniques in smart grid: a systematic review (TELKOMNIKA JOURNAL)
Smart Grid (SG) enhances existing grids with two-way communication between the utility, sensors, and consumers, by deploying smart sensors to monitor and manage power consumption. However, due to the vulnerability of SG, secure component authentication necessitates robust authentication approaches that respect limited resource availability (i.e. in terms of memory and computational power). SG communication requires optimally efficient authentication approaches that avoid any extraneous burden. This systematic review analyses 27 papers on SG authentication techniques and their effectiveness in mitigating certain attacks. This provides a basis for the design and use of optimized SG authentication approaches.
A novel signature based traffic classification engine to reduce false alarms ... (IJCNCJournal)
Pattern matching plays a significant role in ascertaining network attacks, and the foremost prerequisite for a trusted intrusion detection system (IDS) is accurate pattern matching. During the pattern matching process, packets are scanned against pre-defined rule sets. After being scanned, the packets are marked as alert or benign by the detection system. Sometimes the detection system generates false alarms, i.e., good traffic being identified as bad traffic. The rate of false positives varies with the performance of the detection engine used to scan incoming packets. Intrusion detection systems deploy algorithmic procedures to reduce false positives, yet still produce a good number of false alarms. Accordingly, we have been working on the optimization of these algorithms and procedures so that false positives can be reduced to a great extent. As an effort in this direction, we propose a signature-based traffic classification technique that categorizes incoming packets based on traffic characteristics and behaviour, which eventually reduces the rate of false alarms.
THE METHOD OF DETECTING ONLINE PASSWORD ATTACKS BASED ON HIGH-LEVEL PROTOCOL ... (IJCNCJournal)
Although many solutions have been applied, the safety challenges related to the password security mechanism have not diminished. The reason is that while the means and tools to support password attacks are becoming more and more abundant, the number of transaction systems on the Internet is increasing and new service systems keep appearing; IoT, for example, also uses password-based authentication. In this context, consolidating password-based authentication mechanisms is critical, but monitoring measures for timely detection of attacks also play an important role in this battle. The password attack detection solutions in use need to be supplemented and improved to meet the new situation. In this paper we propose a solution that automatically detects online password attacks in a way that is based solely on the network, using unsupervised learning techniques and a protected-application orientation. Our solution therefore minimizes dependence on the factors encountered by host-based or supervised-learning solutions. The reliability of the solution comes from using the results of in-depth analysis of attack characteristics to build the detection capacity of the mechanism. The solution was implemented experimentally on a real system and gave positive results.
A new hybrid text encryption approach over mobile ad hoc network (IJECEIAES)
This document summarizes a research paper that proposes a new hybrid text encryption approach combining elliptic curve cryptography and the Hill cipher algorithm for use on mobile ad hoc networks. The approach aims to address security weaknesses in the Hill cipher by converting it from a symmetric to an asymmetric technique. It generates public and private keys using elliptic curve cryptography so the secret key does not need to be shared over unsecured channels. The approach also allows direct encryption and decryption of characters from the full 128-character ASCII table using their numeric values, avoiding the need for a character mapping table. The advantages are seen as improved security, efficiency and faster computation compared to other techniques.
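A minimal sketch of the Hill-cipher stage over the full 128-character ASCII range (the ECC key-exchange stage is omitted); the 2x2 key matrix is an invented example whose determinant is odd, hence invertible mod 128:

```python
import numpy as np

K = np.array([[3, 2], [5, 7]])            # det = 11, odd => invertible mod 128

def hill_encrypt(text: str, K) -> list:
    nums = [ord(c) for c in text]
    if len(nums) % 2:
        nums.append(32)                   # pad with a space
    blocks = np.array(nums).reshape(-1, 2).T
    return list((K @ blocks % 128).T.flatten())

def hill_decrypt(cipher, K) -> str:
    det = int(round(np.linalg.det(K))) % 128
    det_inv = pow(det, -1, 128)           # modular inverse (Python 3.8+)
    adj = np.array([[K[1, 1], -K[0, 1]], [-K[1, 0], K[0, 0]]])
    K_inv = (det_inv * adj) % 128         # inverse key matrix mod 128
    blocks = np.array(cipher).reshape(-1, 2).T
    return "".join(chr(int(v)) for v in (K_inv @ blocks % 128).T.flatten())

c = hill_encrypt("Hi!", K)
print(c, "->", repr(hill_decrypt(c, K)))
```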
IRJET- Enhanced Security using Genetic Algorithm in Cryptography (IRJET Journal)
This document proposes using a genetic algorithm to generate encryption keys for improved security. It discusses how genetic algorithms mimic natural selection to solve optimization problems. The proposed method would generate encryption keys through genetic operations like crossover and mutation on randomly generated numbers. Data would then be encrypted by diffusing it with genetic operations and combining it with the genetic key. The method is said to provide stronger, more efficient keys compared to traditional cryptography, though it may be less efficient than other algorithms. A comparative study between this proposed genetic algorithm method and other cryptography algorithms is suggested.
Good cryptography requires good random numbers. This paper evaluates the hardware-based Intel Random Number Generator (RNG) for use in cryptographic applications.
Almost all cryptographic protocols require the generation and use of secret values that must be unknown to attackers. For example, random number generators are required to generate public/private keypairs for asymmetric (public key) algorithms including RSA, DSA, and Diffie-Hellman. Keys for symmetric and hybrid cryptosystems are also generated randomly. RNGs are also used to create challenges, nonces (salts), padding bytes, and blinding values. The one time pad – the only provably-secure encryption system – uses as much key material as ciphertext and requires that the keystream be generated from a truly random process.
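These usage patterns can be illustrated with the OS-provided CSPRNG, which modern kernels seed from hardware entropy sources such as the Intel RNG, among others (the sizes below are conventional choices, not prescriptions from the paper):

```python
import os, secrets

aes_key = os.urandom(32)            # 256-bit symmetric key
salt = os.urandom(16)               # password-hashing salt
nonce = secrets.token_bytes(12)     # per-message nonce for an AEAD mode
challenge = secrets.token_hex(16)   # authentication challenge (hex string)
print(aes_key.hex(), salt.hex(), nonce.hex(), challenge, sep="\n")
```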
Enhancement of Error Correction in Quantum Cryptography BB84 ... (butest)
The document discusses several papers related to enhancing error correction in quantum cryptography protocols, measuring performance of blind source separation algorithms, detecting and counting faces in surveillance images, applying rough sets to web log mining, and using data mining techniques for financial prediction and analyzing RNA/interferon structures of hepatitis C virus.
Working with cryptographic key information (IJECEIAES)
It is important to create a cryptographic system such that the encryption system does not depend on secret storage of the algorithm that is part of it, but only on the private key that is kept secret. In practice, key management is a separate area of cryptography and is considered a problematic one. This paper describes the main characteristics of working with cryptographic key information, covering the formation of keys and the handling of cryptographic key information stored on external media. The random-number generator used to produce the random numbers for cryptographic key generation is elucidated. To initialize the sensor, a source of external entropy, an "Electronic Roulette" mechanism (a biological random number source), is used. The generated random bits were checked against the National Institute of Standards and Technology (NIST) statistical tests. As a result, the generated bit sequences passed the tests with a value of P ≥ 0.01. The value of P lies between 0 and 1, and the closer P is to 1, the more random the generated bit sequence. This means that random bits generated by the proposed algorithm can be used in cryptography to generate crypto-resistant keys.
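As an illustration of how the P ≥ 0.01 criterion is applied, the sketch below implements one representative NIST SP 800-22 test, the runs test (a stand-in; the paper applies the full battery):

```python
import math, secrets

def runs_test_p(bits):
    """NIST SP 800-22 runs test: returns the P-value for a 0/1 bit list."""
    n = len(bits)
    pi = sum(bits) / n
    if abs(pi - 0.5) >= 2 / math.sqrt(n):
        return 0.0                       # fails the frequency pre-test
    v = 1 + sum(bits[i] != bits[i + 1] for i in range(n - 1))  # run count
    return math.erfc(abs(v - 2 * n * pi * (1 - pi)) /
                     (2 * math.sqrt(2 * n) * pi * (1 - pi)))

bits = [secrets.randbits(1) for _ in range(10_000)]
p = runs_test_p(bits)
print(f"P = {p:.4f}; {'random' if p >= 0.01 else 'non-random'} at the 0.01 level")
```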
System call frequency analysis-based generative adversarial network model for... (IJECEIAES)
In today's digital age, mobile applications have become essential in connecting people from diverse domains. They play a crucial role in enabling communication, facilitating business transactions, and providing access to a range of services. Mobile communication is widespread due to its portability and ease of use, with the number of mobile devices projected to reach 18.22 billion by the end of 2025. However, this convenience comes at a cost, as cybercriminals are constantly looking for ways to exploit security vulnerabilities in mobile applications. Among the several varieties of malicious applications, zero-day malware is particularly dangerous since it cannot be detected by existing antivirus software. To detect zero-day Android malware, this paper introduces a novel approach based on generative adversarial networks (GANs), which generates new frequencies of feature vectors from system calls. In the proposed approach, the generator is fed a mixture of real samples and noise and is trained to create new samples, while the discriminator model aims to classify these samples as either real or fake. We assess the performance of our model through different measures, including loss functions, the Fréchet inception distance, and the inception score evaluation metrics.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Genetic algorithm based key generation for fully homomorphic encryption (MajedahAlkharji)
This article describes a method that uses a genetic algorithm to generate keys for a fully homomorphic encryption scheme and then performs some simple computations on the encrypted data. Results show that a GA-generated key provides more randomness than other conventional methods used to generate public and private keys.
I will talk about innovation in the area of cyber security analytics - developing machine learning methods to detect and block cyber attacks (e.g. detecting ransomware within 4 seconds of execution and killing the underlying processes). Rather than just focusing on this as a 'black box', I'll pull it apart and talk about how we can use these methods to enable security practitioners (SOC/CIRT etc) to ask and answer questions about 'what' and 'why' these methods are flagging attacks. I'll also talk about resilience of machine learning methods to manipulation and adversarial attacks - how stable these approaches are to diversity and evolution of malware for example.
PSEUDO RANDOM KEY GENERATOR USING FRACTAL BASED TRELLIS CODED GENETIC ALGORIT...IJNSA Journal
Cryptographic applications such as online banking, and securing medical and military data require the usage of random keys, which should remain unpredictable by adversaries. This paper focuses on the strengths and limitations of the techniques and algorithms that are used in the generation of random keys and a new method to generate random keys is proposed using fractals. Fractals are generated using the Sierpinski triangle and fed as input for Non-Deterministic Finite Automata (NDFA) to generate an Initial Vector (IV). Trellis Coded Genetic Algorithm (TCGA) code generator generates seed value using IV as input. Pseudo-Random Key Generator (PRKG) generates a Session Key matrix (SKM) using a seed value. Images are encrypted using SKM to generate cipher images. The randomness of the TCGA code is tested using entropy measure and efficiency based on NIST Tests. SKM with high entropy value is used for image encryption. The Number of Pixel Change Rate (NPCR) and Unified Average Changing Intensity (UACI) values are used for calculating the randomness of cipher images.
Behavioural biometrics and cognitive security authentication comparison studyacijjournal
Behavioural biometrics is a scientific study with the primary purpose of identifying the authenticity of a user based on the way they interact with an authentication mechanism, while association-based password authentication is a cognitive model of an authentication system. The work presented implements a keyboard-latency technique for authentication, implements association-based password authentication, and compares the two. There are several forms of behavioural biometrics, such as voice analysis, signature verification, and keystroke dynamics. In this study, evidence is presented indicating that keystroke dynamics is a viable method not only for user verification but also for identification. The work presented in this model borrows ideas from the bioinformatics literature, such as position-specific scoring matrices (motifs) and multiple sequence alignments, to provide a novel approach to user verification and identification within the context of a keystroke-dynamics-based user authentication system. Similarly, the cognitive approach can be defined in many ways, one of which is the association-based technique for authentication.
This document compares and analyzes various encryption algorithms based on different parameters such as key length, block size, security level, and encryption speed. It finds that Elliptic Curve Cryptography (ECC) and Blowfish algorithms provide the highest levels of security while also having fast encryption speeds. Of the two, Blowfish is considered the best choice as no successful attacks have been made against it so far. Blowfish is also analyzed in more depth, with its structure, security features, and advantages over other algorithms like AES described. References are provided to support the analysis and conclusions.
New enterprise application and data security challenges and solutions apr 2...Ulf Mattsson
Ulf Mattsson presented on new enterprise application and data security challenges and solutions. He discussed how 20% of organizations are expected to budget for quantum computing projects by 2023 compared to less than 1% currently. He also summarized that web application security is needed based on Verizon's 2018 breach report showing many breaches originate from applications. Finally, he emphasized the importance of integrating security into the application development process from the beginning using approaches like SecDevOps and DevSecOps.
SYMMETRIC-KEY BASED PRIVACYPRESERVING SCHEME FOR MINING SUPPORT COUNTSacijjournal
In this paper we study the problem of mining support counts using symmetric-key cryptography, which is more efficient than previous work. Consider a scenario in which each user has an opinion (like or dislike) of a specified product, and a third party wants to obtain the popularity of this product. We design a much more efficient privacy-preserving scheme for users to prevent the loss of their personal interests. Unlike most previous works, we do not use any exponential or modular algorithms; instead we provide a symmetric-key based method which can also protect the information. Specifically, our protocol uses a third party that generates a number of matrices as each user's key. Each user then uses these keys to encrypt their data, which makes it more efficient to obtain the support counts of a given pattern.
Titles with Abstracts_2023-2024_Data Mining.pdfinfo751436
Data mining projects offer several advantages across various industries. Here are some key benefits:
Knowledge Discovery:
Data mining allows organizations to discover hidden patterns, trends, and relationships within large datasets that may not be immediately apparent. This knowledge can be invaluable for making informed decisions.
Improved Decision Making:
By analyzing historical data, data mining enables better decision-making processes. Businesses can use insights gained from data mining to make strategic decisions, optimize operations, and identify areas for improvement.
Customer Segmentation:
Data mining helps in identifying customer segments based on their behavior, preferences, and purchasing patterns. This allows businesses to tailor their marketing strategies, leading to more targeted and effective campaigns.
Fraud Detection:
In industries such as finance and healthcare, data mining is used for detecting fraudulent activities. Analyzing patterns in transactions or claims data can help identify anomalies that may indicate fraudulent behavior.
Predictive Analysis:
Data mining enables predictive modeling, allowing organizations to forecast future trends and outcomes. This is particularly useful in fields like finance, marketing, and healthcare for predicting stock prices, customer behavior, or disease outbreaks.
Process Optimization:
By analyzing operational data, organizations can identify bottlenecks, inefficiencies, and areas for improvement. This leads to more streamlined and efficient business processes.
Personalization:
Data mining enables businesses to create personalized experiences for customers. This is evident in recommendation systems used by companies like Amazon and Netflix, which analyze user behavior to suggest products or content tailored to individual preferences.
Healthcare Insights:
In healthcare, data mining can be used to analyze patient records, identify disease patterns, and optimize treatment plans. This can contribute to better patient outcomes and more efficient healthcare delivery.
Risk Management:
Industries such as insurance and finance benefit from data mining for risk assessment. By analyzing historical data, organizations can assess and mitigate risks more effectively.
Scientific Discovery:
In scientific research, data mining is used to analyze large datasets generated by experiments. This can lead to the discovery of new patterns, correlations, or insights that may not be apparent through traditional methods.
Competitive Advantage:
Organizations that effectively leverage data mining gain a competitive advantage. The insights derived from data can help businesses stay ahead of market trends and make strategic decisions that give them an edge over competitors.
Cost Savings:
By identifying and addressing inefficiencies, data mining can contribute to cost savings. This is especially important in industries with tight profit margins.
IRJET- Detecting Phishing Websites using Machine LearningIRJET Journal
This document describes a research project that aims to implement machine learning techniques to detect phishing websites. The researchers plan to test algorithms like logistic regression, SVM, decision trees and neural networks on a dataset of phishing links. They will evaluate the performance of these algorithms and develop a browser plugin using the best model. This plugin will detect malicious URLs and protect users from phishing attacks. The document provides background on phishing and outlines the proposed approach, dataset, algorithms to be tested, planned Chrome extension implementation, and expected results sections of the project.
Improving the accuracy of fingerprinting system using multibiometric approachIJERA Editor
Biometric technology is a science used to verify or identify an individual based on physical and/or behavioural traits. Although biometric systems are considered more secure than traditional methods such as passwords or keys, they also have many limitations, such as noisy images or spoof attacks. One solution to overcome these limitations is to apply a multibiometric system. A multibiometric system has a significant effect in improving both the security and the accuracy of the system; it can also alleviate spoof attacks and reduce the failure-to-enroll error. Multi-sampling is one implementation of multibiometric systems. In this study, a new algorithm is suggested to give a genuine user who is rejected a second chance, by comparing the provided finger with other samples of the same finger. Multi-sample fingerprinting is used to implement this new algorithm. The algorithm is activated when the match score of the user is not equal to the threshold but close to it; the system then provides another chance to compare the finger with another sample of the same trait. Using a multi-sample biometric system improved the performance of the system by reducing the False Reject Rate (FRR). Applying the original matching algorithm on the presented database produced 3 genuine users and 5 impostors for the same fingerprint, while after implementing the suggested condition the system performance was enhanced, producing 6 genuine users and 2 impostors for the same fingerprint. This work was built and executed on a previous Matlab code presented by Zhi Li Wu. Thresholds and Receiver Operating Characteristic (ROC) curves were computed before and after implementing the suggested multibiometric algorithm, and the two ROC curves were compared. A final decision and recommendations are provided based on the results obtained from this project.
CrAlSim: A Cryptography Algorithm SimulatorIRJET Journal
This document describes CrAlSim, a cryptography algorithm simulator. CrAlSim uses graphical visualizations to demonstrate the step-by-step workings of encryption algorithms like AES and Affine cipher. It displays the encryption process through color-coded matrices and tables to track value changes at each step. The goal is to help students better understand complex algorithms by seeing the internal conversions and calculations that occur during encryption and decryption. CrAlSim is implemented with JavaScript for the encryption functions and HTML/CSS for the interface. It allows users to input plaintext and keys, select an algorithm, and then step through an animated simulation of the algorithm's encryption process.
An Approach on Data Security with the Combination of Symmetric and Asymmetric...AnirbanBhowmik8
Data security is an important issue in the modern era. In this paper, we use both symmetric and asymmetric keys for encryption and decryption for data security. Our technique has two phases: in the first phase symmetric-key encryption is used, and in the second phase asymmetric-key encryption, such as RSA, is used.
Random Number Generator Using Seven Segment Display In LabviewIJERA Editor
A random number generator (RNG) is used to generate random numbers within any given limits. RNGs are of two kinds: true random number generators and pseudo-random number generators. True random numbers are not predictable by any mathematical formula because they depend mainly on sources such as atmospheric noise. Pseudo-random numbers are the kind mainly used in computers; their randomness can be predicted using a mathematical formula, which is fine for many purposes, but they may not be random in the way you expect if you are used to dice rolls and lottery drawings. In this mini project we build an RNG (pseudo-random) using NI LabVIEW software, generating random numbers when a push button is pressed and displaying the output on a seven-segment display. In LabVIEW it is easy to generate a random number using different blocks, and the main advantage of LabVIEW is that there is no need for any programming language such as C, C++, Java, or Matlab. The main functions of this project include gaming and priority number generation. A sequence of uniform random numbers, which is generated within the computer in a deterministic manner, is often referred to as a pseudo-random number sequence. [1]
Similar to Comparative analysis of efficiency of fibonacci random number generator algorithm and gaussian random number generator algorithm in a cryptographic system. (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
What do a Lego brick and the XZ backdoor have in common?Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might have in common the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko she cultivates her curiosity about astronomy (hence her nickname deneb_alpha).
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Comparative analysis of efficiency of fibonacci random number generator algorithm and gaussian random number generator algorithm in a cryptographic system.
Computer Engineering and Intelligent Systems
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol.4, No.10, 2013
www.iiste.org
Comparative Analysis of Efficiency of Fibonacci Random Number Generator Algorithm and Gaussian Random Number Generator Algorithm in a Cryptographic System.
Dr. Ing. Edward Opoku-Mensah1, Abilimi A. Christopher2, Francis Ohene Boateng3
1. Department of Information Technology Education, University of Education Winneba, P.O.Box 1277,
Kumasi, Ghana, Email : ingeopm@gmail.com
2. Department of Computer Science, Christian Service University College, P.O. Box 3110, Kumasi,
Ghana, Email: cabilimia@gmail.com
3. Department of Information Technology Education, University of Education Winneba, P.O.Box 1277,
Kumasi, Ghana. Email : fanbotgh@yahoo.com
Abstract
Random numbers determine the security level of cryptographic applications, as they are used to generate padding schemes in the encryption and decryption process as well as to generate cryptographic keys. The more randomness in the numbers a generator produces, the more effective the cryptographic algorithm, and the more secure it is for protecting confidential data. Developers sometimes find it difficult to determine which Random Number Generators (RNGs) can provide a well-secured cryptographic system for secure enterprise application implementations. Two such random number generators are the Fibonacci Random Number Generator and the Gaussian Random Number Generator. The researchers sought to determine which of these two is the better choice for improving data security in cryptographic software systems. The researchers applied statistical tests, such as the Frequency test, the Chi-Square test, and the Kolmogorov-Smirnov test, to the first 100 random numbers between 0 and 1000 generated by each of the above generators. The research concluded that the Fibonacci Random Number Generator is more efficient than the Gaussian Random Number Generator and therefore recommended choosing the Fibonacci Random Number Generator, when choosing between the two, for use in a cryptographic system for better data security.
Keywords: Cryptographic Algorithms, Random Number Generators, encryption, decryption
1. Introduction
According to Kessler (2012), in the information security world one repeatedly encounters statements like "protected via 128-bit Advanced Encryption Standard" or "sheltered by 2048-bit validation". Invariably, people want to know the data-security strength of various cryptographic algorithms. Some cryptographic algorithms, like Rivest-Shamir-Adleman, Elliptic Curve Cryptography, and the Advanced Encryption Standard, come with confirmed evidence of not being easy to crack. These cryptographic algorithms are mostly used in protocols to take care of data whose confidentiality, integrity, and identity need to be protected. As an illustration, think about using Secure Socket Layer (SSL) or Transport Layer Security (TLS) when a book is bought from an online market like Amazon, or when a sum of money has to be transferred to your bank account (Hasani, 2011). Another example can be seen in how Internet Protocol Security (IPsec) and Internet Key Exchange (IKE) are used when a computer is connected to a network in order to access information on the World Wide Web.
AuthenTec Embedded Security Solutions (2010) stated that what is not often examined is the strength of random number generators and their use by security applications. Software designers are mostly concerned with the controlled use of bits and their creation rate, but not much with the real randomness of the bits produced.
The security level of a cryptographic application depends on the quality of the random numbers used in the application. Furthermore, how hard it is to crack a cryptographic system depends on the quality of the random numbers used in the cryptography (Biege, 2006).
A highly secure cryptographic system can be achieved if researchers understand that cryptographic systems are developed based on a principle of cryptographic algorithm design called Kerckhoffs' principle, which states that "The security of the system must depend solely on the key material, and not on the design of the system" (AuthenTec Embedded Security Solutions, 2010). This is due to the fact that before an attacker can break a modern cryptographic system, the attacker needs to deduce the keys used as well as the protocol used. The strength of a cryptographic application is expressed with the assumption that the attacker has no idea of the bits and the pattern used in key formation and in the cryptographic system. An enhanced attack against a cryptographic system exposes extra key bits that can be computed by looking at an (inadequate amount of) output data, and that diminishes the "effective strength" of the algorithm.
A manual prepared by the Internet Engineering Task Force (IETF) and named a "Best Practices" document describes the significance of true randomness in cryptographic applications and serves as a guideline on how random numbers should be created (Eastlake, Schiller & Crocker, 2005). The National Institute of Standards and Technology provides a subdivision on random number creation in its Cryptographic Toolbox pages, and a number of standards organizations, such as the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), the IETF, and the American National Standards Institute (ANSI), are working on standards associated with random number generation. This goes to show the importance of proper random number generation. This research therefore compares the effectiveness of pseudo-random number generators, when implemented in cryptography, so as to obtain a secure cryptographic system.
2. Random numbers in Cryptography
A high-quality Cryptographic System needs good-quality random numbers (Mike, Paul, & Mark, 2012). This
paper evaluates software security based on Pseudo Random Number Generator Algorithms (PRNGA) needed for
Cryptographic Applications.
Nearly every cryptographic protocol requires the creation and use of secret values that attackers must not know. Algorithms for the generation of random numbers are prerequisites for generating private keys or public key pairs for symmetric and hybrid cryptosystems as well as for asymmetric encryption algorithms like Diffie-Hellman, Rivest-Shamir-Adleman, and the Digital Signature Algorithm. Random number generators are also needed to generate blinding values, challenges, padding bytes, and nonces (salts) in cryptographic applications. The randomness of the keys used in a cryptographic system is the prerequisite for a secure protocol; RNGs used in cryptographic systems ought to meet rigid parameters. Hiding important patterns in the output of the random number generator from attackers, as well as from anyone who knows how the random number generator algorithm is designed, is very important in every cryptographic application. For illustration, the apparent entropy of the output of the random number generator must be as close as possible to the bit length used (a minimal check of this requirement is sketched below). This paper compares the efficiency of two different random number generators, the Fibonacci Random Number Generator algorithm and the Gaussian Random Number Generator algorithm, in a cryptographic algorithm, to see which is the better one to use in cryptography.
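The entropy requirement above can be checked empirically. The following is a minimal sketch, not from the paper, that estimates the Shannon entropy per output bit from observed bit frequencies; java.util.Random is only a stand-in for whichever generator is under test.

```java
import java.util.Random;

// Minimal sketch (not from the paper): estimate Shannon entropy per output
// bit from observed bit frequencies. For a good generator the estimate
// should be as close as possible to 1 bit of entropy per bit of output.
public class EntropySketch {
    public static void main(String[] args) {
        Random rng = new Random(11L); // stand-in for the generator under test
        int n = 100_000;
        int ones = 0;
        for (int i = 0; i < n; i++) {
            if (rng.nextBoolean()) ones++;
        }
        double p1 = ones / (double) n;
        double p0 = 1.0 - p1;
        double entropy = 0.0;
        if (p0 > 0) entropy -= p0 * (Math.log(p0) / Math.log(2));
        if (p1 > 0) entropy -= p1 * (Math.log(p1) / Math.log(2));
        System.out.printf("estimated entropy = %.6f bits per bit (ideal: 1.0)%n", entropy);
    }
}
```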
The ideal quality of a high-quality pseudo-random number generator is that the series should not be easy for an attacker of a cryptosystem to predict. This is because the adversary may know the cryptographic algorithm used, which is what Kerckhoffs' principle assumes (Yehuda, 2006).
According to Andrew et al. (2010), an additional requirement, statistical randomness, is the basis for good pseudo-random number generators. All correlations and trends in the generators should be hidden, and statistically this can be tested. The statistical component of a pseudo-random number generator means that every value over a given range has an equal chance of occurrence. In the range [0, 1], every number has a probability of 1/2, every pair of numbers an occurrence of 1/4, and every triplet a frequency of 1/8; that is, the source of the numbers generated is considered unbiased and consecutive outputs are considered uncorrelated. Mean and variance computations must also be included in the statistical randomness test: if the mean is close to 0.5 and the variance is close to 0.08, then the pseudo-random numbers are homogeneous. A minimal sketch of this check follows.
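As a minimal illustration of the mean and variance check just described (not code from the paper), the sketch below draws values in [0, 1) from a stand-in generator and compares the sample mean to 0.5 and the sample variance to 1/12, which is about 0.083.

```java
import java.util.Random;

// Minimal sketch (not from the paper): compute the sample mean and variance
// of values in [0, 1). A uniform source should give a mean near 0.5 and a
// variance near 1/12, i.e. about 0.083.
public class MeanVarianceCheck {
    public static void main(String[] args) {
        Random rng = new Random(1L); // stand-in for the generator under test
        int n = 100_000;
        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < n; i++) {
            double x = rng.nextDouble();
            sum += x;
            sumSq += x * x;
        }
        double mean = sum / n;
        double variance = sumSq / n - mean * mean;
        System.out.printf("mean = %.4f (expect ~0.5)%n", mean);
        System.out.printf("variance = %.4f (expect ~0.083)%n", variance);
    }
}
```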
According to Andrew et al. (2010), another test for the efficacy of pseudo-random number generators is the Frequency Test, also called the Monobit test. The Frequency Test concentrates on the proportion of ones and zeroes in the entire series of numbers produced by the generators. This test is used to establish that the generated series contains about the same number of ones and zeroes as would be anticipated from a truly random series; that is, it assesses how close the fraction of ones is to 1/2, which means that the counts of zeroes and ones in the series must be almost the same. The Monobit Test within a Block concentrates on the proportion of ones and zeroes in blocks of M bits; this assessment ascertains whether the number of ones in each M-bit block is roughly M/2. A minimal sketch of both checks follows.
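The sketch below is illustrative rather than the paper's implementation; it runs both the whole-sequence frequency test and the block variant on bits from a stand-in source (java.util.Random).

```java
import java.util.Random;

// Minimal sketch (not from the paper): the frequency (Monobit) test counts
// the ones in a bit sequence and compares the fraction of ones to 1/2; the
// block variant expects roughly M/2 ones within each M-bit block.
public class MonobitSketch {
    public static void main(String[] args) {
        Random rng = new Random(7L); // stand-in bit source
        int n = 10_000;
        int ones = 0;
        for (int i = 0; i < n; i++) {
            if (rng.nextBoolean()) ones++;
        }
        System.out.printf("fraction of ones = %.4f (expect ~0.5)%n", ones / (double) n);

        int m = 100; // block size for the block variant
        for (int block = 0; block < 3; block++) {
            int blockOnes = 0;
            for (int i = 0; i < m; i++) {
                if (rng.nextBoolean()) blockOnes++;
            }
            System.out.printf("block %d: %d ones of %d bits (expect ~%d)%n",
                    block, blockOnes, m, m / 2);
        }
    }
}
```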
According to Justin (2012), the effectiveness of a pseudo-random number generator algorithm also depends on its period. Every algorithm has input parameters that produce a corresponding period: any time an algorithm for generating random numbers produces a repeated value, the generation process starts all over again from that point. It should therefore be recognized that the seed, value, or parameter determines the period of the algorithm used; the more repetition, the shorter the period, and vice versa. A sketch of detecting the period this way is given below.
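As an illustration of period detection by spotting the first repeated state (an assumption about how one might test this, not the paper's procedure), the sketch below walks a small linear congruential generator until a state recurs and reports the cycle length.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (not from the paper): detect the period of a deterministic
// generator by recording each state and stopping at the first repeat. The
// generator here is an illustrative small linear congruential generator.
public class PeriodSketch {
    public static void main(String[] args) {
        int modulus = 101, multiplier = 7, increment = 3;
        int state = 1; // illustrative seed; the seed determines the period
        Map<Integer, Integer> firstSeenAt = new HashMap<>();
        for (int step = 0; ; step++) {
            Integer earlier = firstSeenAt.putIfAbsent(state, step);
            if (earlier != null) { // state repeated: the cycle has closed
                System.out.println("period = " + (step - earlier));
                break;
            }
            state = (multiplier * state + increment) % modulus;
        }
    }
}
```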
Lastly, uniformity also affects the period of a pseudo-random number generator: every perfect generator algorithm will produce values that are consistently distributed within a certain interval. This makes it very difficult for algorithms that have smaller periods, even if they have huge intervals, to pass this kind of statistical test for randomness (Christophe & Diethelm, 2003). They also explained that if correlations exist within the numbers generated, then even the algorithms with larger periods will experience massive
problems.
Christophe and Diethelm (2003) stated that the ideal thing to do in any cryptographic application system is to make sure that the generator used to generate the random numbers produces values with as little serial correlation as possible. When smaller ranges of numbers are produced via random number generators, even algorithms with large periods and high uniformity may be highly inconsistent. The security of the generator thus depends on the correlation of the values produced: the less the numbers from the generator are correlated, the more secure the system, and vice versa.
3. Methodology
The researchers employed statistical tests for testing the randomness of two pseudo random numbers generators,
namely Fibonacci Random Number Generator and Gaussian random number generator algorithms, in order to
determine whether a set of data has a recognizable pattern to it or not. These tests are Chi-Square Test and
Kolmogorov-Smirnov test (KS-test).
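The paper states that the generators were implemented in Java but does not reproduce the source. The sketch below is therefore only a plausible reconstruction: the Fibonacci generator is assumed to be the plain Fibonacci recurrence in 32-bit int arithmetic (the very large negative values in Table 1 suggest integer overflow rather than reduction into [0, 1000]), the Gaussian generator is assumed to scale java.util.Random.nextGaussian(), and all names, seeds, and scale factors are illustrative.

```java
import java.util.Random;

// Illustrative reconstruction only: the paper does not publish its Java
// source, so all names, seeds, and scale factors here are assumptions.
public class GeneratorSketch {

    // Assumed plain Fibonacci recurrence in 32-bit int arithmetic. The very
    // large and negative values reported in Table 1 suggest the sequence was
    // allowed to overflow int rather than being reduced into [0, 1000].
    static int[] fibonacciSequence(int count) {
        int[] out = new int[count];
        int a = 0, b = 1; // starting terms; the paper does not state its seeds
        for (int i = 0; i < count; i++) {
            out[i] = b;
            int next = a + b; // silently overflows beyond the 46th term
            a = b;
            b = next;
        }
        return out;
    }

    // Assumed Gaussian generator: scale java.util.Random.nextGaussian() and
    // round to an integer; Table 1's mean of about 5 and standard deviation
    // of about 68.5 are consistent with a scale factor of roughly this size.
    static int[] gaussianSequence(int count, double scale, long seed) {
        Random rng = new Random(seed);
        int[] out = new int[count];
        for (int i = 0; i < count; i++) {
            out[i] = (int) Math.round(scale * rng.nextGaussian());
        }
        return out;
    }

    public static void main(String[] args) {
        int[] fib = fibonacciSequence(100);
        int[] gauss = gaussianSequence(100, 68.5, 42L);
        System.out.println("last Fibonacci value: " + fib[99]);
        System.out.println("first Gaussian value: " + gauss[0]);
    }
}
```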
3.1 Chi-square Test
The Chi-Square Test has been shown to be sensitive to the errors of pseudo-random series generators. The Chi-Square distribution gives a non-negative value, and the associated proportion shows how often a truly random series should exceed the value computed for the collections of zeroes and ones in a given data file. The procedures used are as follows:
i. Java programming code was written for each of the Fibonacci Random Number Generator and the Gaussian Random Number Generator.
ii. Sequences of 100 random integers between 0 and 1000 were produced using the Java code for each of the Fibonacci Random Number Generator and the Gaussian Random Number Generator.
iii. The random numbers generated in (ii) above were coded in the Statistical Package for Social Sciences (SPSS) software to test the randomness of the numbers from the generators. The procedure is:
a. Go to and click on the Analyse menu on the SPSS bar after the data has been coded.
b. Choose Nonparametric Tests on the submenu.
c. Go to and then click on Chi-Square Test.
d. Select the two random number generators and move them to the variable list.
e. The results are generated for the Chi-Square Test.
iv. For a uniform distribution of random numbers from the generators, the expectation is that there should be about one occurrence of each number 1, 2, 3, and so on up to the size of the sample, 100; that is, each number in the range 1 to 100 is expected to appear with a frequency of about 1.0 to 1.1 on average. The observed frequency is the actual frequency of each of the numbers generated by the generator within the specified range. The Chi-Square statistic is calculated from the difference between the observed frequency and the expected frequency of every one of the numbers generated, as follows:
\chi^2 = \sum_{i=1}^{R} \frac{(O_i - E_i)^2}{E_i}
Here R is the number of distinct possible random numbers (R = 100), O_i is the observed frequency of occurrences of random number i, and E_i is the expected frequency for random number i. Since the expectation is that the integers are uniformly distributed, every random number must have the same expected frequency; E_i can then be computed, with N as the total number of observations, for use in the equation above.
There is one way to interpret the resulting percentages: they express the level of suspicion of non-randomness, that is, the degree to which the generated series of random numbers is suspected of being non-random. If the percentage is between 90% and 95%, or between 5% and 10%, the series is "almost suspect". If the percentage is between 95% and 99%, or between 1% and 5%, then the series is suspect. Finally, if the percentage is greater than 99% or less than 1%, then the series is almost certainly not random. A minimal sketch of the chi-square computation follows.
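The sketch below is illustrative rather than the SPSS procedure the paper actually used; it assumes R = 100 possible values and the uniform expected frequency E_i = N/R.

```java
import java.util.Random;

// Minimal sketch (illustrative; the paper ran this test in SPSS): compute
// the chi-square statistic for R = 100 possible values under the uniform
// expectation E_i = N / R.
public class ChiSquareSketch {
    static double chiSquare(int[] observed, double expected) {
        double chi2 = 0.0;
        for (int o : observed) {
            double d = o - expected;
            chi2 += d * d / expected;
        }
        return chi2;
    }

    public static void main(String[] args) {
        int r = 100, n = 100;             // 100 possible values, 100 draws
        double expected = (double) n / r; // uniform expectation: 1.0 per value
        int[] observed = new int[r];
        Random rng = new Random(3L);      // stand-in for the generator under test
        for (int i = 0; i < n; i++) {
            observed[rng.nextInt(r)]++;   // tally each generated value
        }
        System.out.printf("chi-square = %.3f%n", chiSquare(observed, expected));
    }
}
```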
3.2 Kolmogorov-Smirnov Test (KS-test)
The Kolmogorov-Smirnov (K-S) test is based on the Empirical Cumulative Distribution Function (ECDF).
Given N ordered data points Y1, Y2, ..., YN, the ECDF is defined as
E_N = \frac{n(i)}{N}
where n(i) is the number of points less than Yi and the Yi are ordered from smallest to largest value. This is a step
function that increases by 1/N at the value of each ordered data point.
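The following is a minimal sketch of the one-sample KS statistic built from this ECDF, comparing samples in [0, 1) against the uniform CDF F(y) = y. Note that the paper itself runs the KS test in SPSS against a normal reference distribution, so this sketch only illustrates the statistic, not the paper's exact procedure.

```java
import java.util.Arrays;
import java.util.Random;

// Minimal sketch (illustrative; the paper ran its KS test in SPSS against a
// normal reference): compute the one-sample KS statistic from the ECDF for
// samples in [0, 1) against the uniform CDF F(y) = y.
public class KsSketch {
    static double ksStatistic(double[] samples) {
        double[] y = samples.clone();
        Arrays.sort(y); // the ECDF is defined over the ordered data points
        int n = y.length;
        double maxGap = 0.0;
        for (int i = 0; i < n; i++) {
            double cdf = y[i];                         // uniform CDF at y[i]
            double above = (i + 1) / (double) n - cdf; // ECDF just after the step
            double below = cdf - i / (double) n;       // ECDF just before the step
            maxGap = Math.max(maxGap, Math.max(above, below));
        }
        return maxGap;
    }

    public static void main(String[] args) {
        Random rng = new Random(5L); // stand-in for the generator under test
        double[] samples = new double[100];
        for (int i = 0; i < samples.length; i++) samples[i] = rng.nextDouble();
        System.out.printf("KS statistic D = %.4f%n", ksStatistic(samples));
    }
}
```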
The procedures used to perform the Kolmogorov-Smirnov test (KS-test) for the two random number generators, namely the Fibonacci Random Number Generator and the Gaussian Random Number Generator, are as
follows:
i. Java programming code was written for each of the Fibonacci Random Number Generator and the Gaussian Random Number Generator.
ii. Sequences of 100 random integers between 0 and 1000 were produced using the Java code for each of the Fibonacci Random Number Generator and the Gaussian Random Number Generator.
iii. The random numbers generated in (ii) above were coded in the Statistical Package for Social Sciences (SPSS) software to test the randomness of the numbers from the generators. The procedure is:
a. Go to and click on the Analyse menu on the SPSS bar after the data has been coded.
b. Choose Nonparametric Tests on the submenu.
c. Go to and then click on One-Sample KS-Test.
d. Select the two random number generators, move them to the variable list, and check Normal.
e. Click on the Exact tab, select the asymptotic option, and then click OK.
4. Results
4.1 Gaussian Random Number Generator
This generator produces smaller random numbers at the beginning and at the end of the range used, while producing bigger numbers in the middle of the range of numbers generated, as shown in Figure 1. However, abnormal uniformity occurs in the range of -100 to 50. The generator produces random numbers that are strongly concentrated towards the center, and hence produces a substantially good standard deviation of 68.515. This standard deviation value shows that the generator obeys the normal distribution curve.
Table 1: The descriptive statistics of the Random Number Generators

Generator                            N     Mean           Std. Deviation   Minimum       Maximum
Gaussian Random Number Generator     100   5.08           68.51            -121          126
Fibonacci Random Number Generator    100   -90142675.73   975930540        -2092787285   2.E9
Figure 1: The trend of randomness of the Gaussian Random Number Generator (plot of the 100 generated values; Mean = 5.08, Std. dev. = 68.515, N = 100; x-axis: range of generation, y-axis: random numbers)
4.2 Fibonacci Random Number Generator
From the results generated, it was found that the Fibonacci Random Number Generator starts with smaller random numbers at the beginning, rises gradually, reaches its peak around a value of 0.0E0, and declines gradually towards the end of the range, as in Figure 2. This result tends to obey the normal distribution curve more than all the other generators studied, but with a standard deviation of 9.759E8. This represents a greater deviation of the individual numbers produced by the generator from each other, and hence the output is not normally distributed.
Figure 2: The trend of randomness in the Fibonacci Random Number Generator (plot of the 100 generated values; Mean = -90142675.73, Std. dev. = 9.759E8, N = 100; x-axis: range of generation, y-axis: random numbers)
The researchers compared the two pseudo-random number generators (PRNGs) studied, namely the Fibonacci Random Number Generator (FRNG) and the Gaussian Random Number Generator (GRNG). The repetition counts for the two generators were used to draw the line graph of frequencies shown in Figure 3 (a sketch of the tally is given below). The generator with the highest number of repeated values was the Gaussian Random Number Generator, while the Fibonacci Random Number Generator produced no repeated values at all. This means that numbers were more likely to be random with the Fibonacci Random Number Generator than with the Gaussian Random Number Generator, as shown in Figure 3.
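The repetition counts behind Figure 3 can be obtained by tallying each generated value; the following is a minimal sketch (the class and method names are illustrative, not the paper's code).

import java.util.HashMap;
import java.util.Map;

class RepetitionCount {
    // Tally how often each generated value occurs; entries with a count
    // greater than 1 are the repetitions plotted in Figure 3.
    static Map<Long, Integer> countRepetitions(long[] sequence) {
        Map<Long, Integer> freq = new HashMap<>();
        for (long v : sequence) {
            freq.merge(v, 1, Integer::sum);     // increment this value's count
        }
        freq.values().removeIf(c -> c == 1);    // drop values seen only once
        return freq;
    }
}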
Figure 3: The number of repetitions in the pseudo-random number generators (line graph of repetition frequency, 0 to 4.5, for the GRNG and FRNG across the 100 generated numbers).
4.3 The Chi-Square Test for Independence of Random Numbers
The analysis of the factors responsible for randomness in a random number generator favoured the Fibonacci Random Number Generator over the Gaussian Random Number Generator. The Chi-Square analysis for the independence of numbers in the generators showed that the numbers were more independent of one another in the Fibonacci Random Number Generator (Chi-Square value = 0.000) than in the Gaussian Random Number Generator under study, because the independence of numbers increases as the Chi-Square value decreases, and vice versa.
The weaker generator in terms of independence is the Gaussian Random Number Generator (Chi-Square value = 14.400), as shown in Table 2, because it produced the higher Chi-Square value in the test (a sketch of the underlying computation follows the table). This also means that the numbers in the GRNG were more likely to depend on each other than those in the Fibonacci Random Number Generator. Table 1 presents the descriptive statistics of the analysis (standard deviation, minimum, maximum and mean). The standard deviation of numbers increases with decreasing normality, and vice versa; that is, the higher the standard deviation, the greater the deviation from the normal distribution. Therefore the Fibonacci Random Number Generator deviates more from the normal distribution, while the Gaussian Random Number Generator produces numbers that obey the normal distribution far more closely, as shown in Table 1.
Table 2
The Chi-Square test result for the pseudo-random number generators

Test Statistics              Gaussian Random       Fibonacci Random
                             Number Generator      Number Generator
Chi-Square                   14.400a               .000b
Df                           87                    99
Asymp. Sig.                  1.000                 1.000
Monte Carlo Sig.             1.000a                1.000b
99% Confidence Interval
  Lower Bound                1.000                 1.000
  Upper Bound                1.000                 1.000

a. 88 cells (100.0%) have expected frequencies less than 5. The minimum expected cell frequency is 1.1.
b. 100 cells (100.0%) have expected frequencies less than 5. The minimum expected cell frequency is 1.0.
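For reference, the one-sample chi-square statistic reported in Table 2 compares observed category counts against equal expected counts. The sketch below is our own illustration of that computation, not the SPSS internals or the paper's code.

class ChiSquareSketch {
    // One-sample chi-square statistic: the sum over categories of
    // (observed - expected)^2 / expected, with equal expected counts,
    // as in SPSS's nonparametric chi-square test.
    static double chiSquare(int[] observed) {
        int total = 0;
        for (int o : observed) total += o;
        double expected = (double) total / observed.length;
        double chi = 0.0;
        for (int o : observed) {
            double d = o - expected;
            chi += d * d / expected;
        }
        return chi;   // smaller values indicate counts closer to uniform
    }
}

The degrees of freedom are the number of distinct categories minus one, which matches the values in Table 2 (88 cells give Df = 87, and 100 cells give Df = 99).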
4.4 The Kolmogorov-Smirnov Test for Uniformity of Random Numbers
In the Kolmogorov-Smirnov test, the more closely the z-value falls between the lower and upper bounds of the Monte Carlo Sig. (2-tailed) value, the more uniform the numbers generated by the random number generator are, and vice versa. From Table 3, the Gaussian Random Number Generator produces more uniform numbers than its counterpart, the Fibonacci Random Number Generator: the Gaussian generator's z-value falls between the lower and upper bounds, while the Fibonacci generator's z-value falls outside that range, as shown in Table 3 (the computation behind the reported Z values is sketched after the table). The factors responsible for the decision also favoured the Fibonacci Random Number Generator (K-S value = 2.020) over the Gaussian Random Number Generator compared in this paper. This therefore makes the Fibonacci Random Number Generator less uniform than the Gaussian Random Number Generator (K-S value = 0.725), as shown in Table 3.
However, the lower the uniformity of the numbers produced, the more independent the numbers are, and vice versa. This accounts for why the Fibonacci Random Number Generator is the more independent of the two generators.
Table 3
The Kolmogorov-Smirnov test for uniformity of random numbers

Statistics                                 Gaussian Random     Fibonacci Random
                                           Number Generator    Number Generator
N                                          100                 100
Uniform Parameters        Minimum          -121                -2092787285
                          Maximum          126                 2144908973
Most Extreme              Absolute         0.072               0.202
Differences               Positive         0.072               0.202
                          Negative         -0.060              -0.164
Kolmogorov-Smirnov Z                       0.725               2.020
Asymp. Sig. (2-tailed)                     0.670               0.001
Monte Carlo Sig.          Sig.             0.640               0.000
(2-tailed), 90%           Lower Bound      0.561               0.000
Confidence Interval       Upper Bound      0.719               0.23
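The Kolmogorov-Smirnov Z in Table 3 is derived from the largest gap between the sample's empirical distribution function and the hypothesised uniform distribution. A minimal sketch of that computation (our own illustration, not the SPSS internals):

import java.util.Arrays;

class KsSketch {
    // One-sample K-S statistic D against Uniform(min, max): the largest
    // absolute difference between the empirical CDF and the uniform CDF.
    // SPSS reports Z = sqrt(n) * D.
    static double ksZ(double[] sample, double min, double max) {
        double[] x = sample.clone();
        Arrays.sort(x);
        int n = x.length;
        double d = 0.0;
        for (int i = 0; i < n; i++) {
            double cdf = (x[i] - min) / (max - min);   // uniform CDF at x[i]
            double dPlus = (i + 1.0) / n - cdf;        // ECDF above the CDF
            double dMinus = cdf - (double) i / n;      // CDF above the ECDF
            d = Math.max(d, Math.max(dPlus, dMinus));
        }
        return Math.sqrt(n) * d;
    }
}

With n = 100 this is consistent with the reported values: 10 × 0.072 gives approximately 0.725 for the Gaussian generator, and 10 × 0.202 = 2.020 for the Fibonacci generator.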
5. Discussions
The analysis of the factors for uniformity also shows that the factors favour the Gaussian Random Number Generator over the Fibonacci generator. This therefore makes the Fibonacci Random Number Generator less uniform, and hence more secure, than the Gaussian Random Number Generator.
A good PRNG will produce a sequence of numbers that cannot easily be guessed or determined by an adversary. The general assumption is that the opponent knows the algorithm being used; this is usually referred to as Kerckhoffs' principle (Yehuda, 2006). This assertion is evidenced in the findings relating to the Fibonacci Random Number Generator and the Gaussian Random Number Generator: the Chi-Square analysis for the independence of numbers in the generators showed that the numbers were more independent of one another in the Fibonacci Random Number Generator than in the other generator under study, and therefore less likely to be guessed by an adversary.
According to Andrew et al. (2010), another requirement for a good PRNG is that the series generated should be statistically random: it should hide all patterns and correlations, which can be checked with statistical tests. A PRNG should have the statistical property that each value over the interval is equally likely to occur. In the case of the interval [0,1], a generated number would fall in a given half of the interval with frequency 1/2 in the long run, a pair of successive numbers in a given quarter with frequency 1/4, and a triplet in a given eighth with frequency 1/8. This forms the basis for observing that the numbers being generated are unbiased and that successive outcomes are uncorrelated (James et al., 2003). For uniformly distributed pseudorandom numbers, the mean should be close to 0.5 and the variance close to 1/12 (about 0.083); this can be checked empirically, as sketched below. This property is also evidenced more in the Gaussian Random Number Generator, whose variance (the square of the standard deviation) is lower than that of the Fibonacci Random Number Generator.
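The mean-0.5 and variance-1/12 criterion can be checked empirically by generating a large sample of uniform values; a minimal sketch:

import java.util.Random;

class UniformMoments {
    public static void main(String[] args) {
        Random rng = new Random(1);
        int n = 1_000_000;
        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < n; i++) {
            double u = rng.nextDouble();   // uniform on [0, 1)
            sum += u;
            sumSq += u * u;
        }
        double mean = sum / n;
        double variance = sumSq / n - mean * mean;
        // Expect a mean close to 0.5 and a variance close to 1/12 = 0.0833.
        System.out.printf("mean = %.4f, variance = %.4f%n", mean, variance);
    }
}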
Again, Andrew et al. (2010) state that the Frequency (Monobit) Test is another test of efficiency. The focus of the test is the proportion of zeroes and ones in the entire sequence; its purpose is to determine whether the numbers of ones and zeros in a sequence are approximately the same as would be expected for a truly random sequence. The test assesses the closeness of the fraction of ones to 1/2; that is, the number of ones and the number of zeroes in a sequence should be about the same. The Frequency Test within a Block focuses instead on the proportion of zeroes and ones within M-bit blocks, and determines whether the frequency of ones in an M-bit block is approximately M/2. The analysis of the factors responsible for the decision on the two random number generators revealed that the Fibonacci Random Number Generator produces random numbers whose counts of ones and zeroes are closer to being equal than those of the Gaussian Random Number Generator, and hence the numbers generated by the former are more independent of each other.
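The monobit test described by Andrew et al. (2010) can be sketched as follows: map the bits to ±1, sum them, normalise by the square root of the sequence length, and compare the p-value erfc(s_obs / sqrt(2)) with the 0.01 significance level. The erfc approximation below is the standard Numerical Recipes formula; the sketch is an illustration of the NIST SP 800-22 test, not the paper's code.

class MonobitSketch {
    // NIST SP 800-22 frequency (monobit) test: the sequence passes if the
    // p-value erfc(s_obs / sqrt(2)) is at least 0.01.
    static boolean monobitPass(int[] bits) {
        long s = 0;
        for (int b : bits) s += 2 * b - 1;           // map 0 -> -1, 1 -> +1
        double sObs = Math.abs(s) / Math.sqrt(bits.length);
        return erfc(sObs / Math.sqrt(2.0)) >= 0.01;
    }

    // Rational approximation to the complementary error function
    // (Numerical Recipes), accurate to about 1.2e-7.
    static double erfc(double x) {
        double z = Math.abs(x), t = 1.0 / (1.0 + 0.5 * z);
        double ans = t * Math.exp(-z * z - 1.26551223 + t * (1.00002368
                + t * (0.37409196 + t * (0.09678418 + t * (-0.18628806
                + t * (0.27886807 + t * (-1.13520398 + t * (1.48851587
                + t * (-0.82215223 + t * 0.17087277)))))))));
        return x >= 0.0 ? ans : 2.0 - ans;
    }
}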
6. Conclusions
The discussion of the findings indicates that the Gaussian Random Number Generator algorithm is more uniform than the Fibonacci Random Number Generator algorithm. Consequently, the Gaussian Random Number Generator algorithm is less secure than the Fibonacci Random Number Generator algorithm, which makes the Fibonacci Random Number Generator the more efficient of the two.
It is therefore recommended that the Fibonacci Random Number Generator algorithm, rather than the Gaussian Random Number Generator, be used for better data security of enterprise software applications in a cryptographic system.
References
Andrew, R., Juan, S., James, N., Miles, S., Elaine, B., & Stefan, L. (2010). A statistical test suite for random and pseudorandom number generators for cryptographic application, Revision 1a, USA. Special Publication 800-22. (http://csrc.nist.gov/publications/nistpubs/800-22-rev1a/SP800-22rev1a.pdf. Accessed 2012, September 1).
AuthenTec Embedded Security Solutions (2010). The importance of true randomness in cryptography. (http://www.authentec.com/Portals/5/Documents/TRNG%20Whitepaper_91411.pdf. Accessed 2012, January 14).
Biege, T. (2006). Analysis of a strong pseudo random number generator. (http://www.suse.de/~thomas/papers/random-analysis.pdf. Accessed 2012, February).
Christophe, D., & Diethelm, W. (2003). A note on random number generation. (http://cran.r-project.org/web/packages/randtoolbox/vignettes/fullpres.pdf. Accessed 2012, September 7).
Eastlake, D., Schiller, J., & Crocker, S. (2005). RFC 4086. (http://www.ietf.org/rfc/rfc4086.txt. Accessed 2012, June 10).
Fisnik, H. (2011). Safe internet banking. (http://www.itknowledge24.com/blog/safe-internet-banking/. Accessed 2012, June 10).
Gary, C. K. (2012). An overview of cryptography. (http://www.garykessler.net/library/crypto.html. Accessed 2012, August 10).
Justin, A. (2012). Progressing past poor pseudo-randomness, Cosmos Cluster 4. (http://cosmos.ucdavis.edu/archives/2012/cluster4/Adsuara,%20Justin.pdf. Accessed 2012, August 14).
Mike, H., Paul, K., & Mark, E. M. (2012). Analysis of Intel's Ivy Bridge digital random number generator. (http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf. Accessed 2012, August 20).
Mohammed, A., & Annapurna, P. P. (2012). Implementing a secure key issuing scheme for communication in P2P networks. M.S. Ramaiah Institute of Technology, Department of Computer Science and Engineering, Bangalore-560054, India.
Yehuda, L. (2006). Introduction to cryptography, 89-656. (http://u.cs.biu.ac.il/~lindell/89-656/Intro-to-crypto-89-656.pdf. Accessed 2012, August 1).
Yuval, I. (2011). Theory of cryptography. 8th Theory of Cryptography Conference, TCC 2011, Providence, RI, USA.