Symmetric cryptography always produces the same ciphertext when the same plaintext and key are given repeatedly, a property that makes cryptanalysis easier. This research introduces a one-to-many cryptography scheme that can produce different ciphertexts even when the input is repeated. The one-to-many encryption scheme can produce several ciphertexts that differ by up to 50%. The avalanche effect test obtained an average of 52.20%, which is 25.46% better than the modern Blowfish cipher and 6% better than the advanced encryption standard (AES). One-to-many can produce n different ciphertexts, which makes cryptanalysis more difficult and requires n times longer to break than other symmetric cryptography.
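The avalanche figures above are percentages of differing bits between ciphertext pairs. A minimal sketch of how such a percentage can be computed as a Hamming distance (illustrative only, not the paper's test harness):

```python
def avalanche_percent(c1: bytes, c2: bytes) -> float:
    """Percentage of differing bits between two equal-length ciphertexts."""
    assert len(c1) == len(c2)
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return 100.0 * diff / (8 * len(c1))

# Toy check: complementary bytes differ in every bit position.
print(avalanche_percent(b"\x00\xff", b"\xff\x00"))  # 100.0
```

An average near 50% means roughly half the ciphertext bits change, which is the behavior a strong cipher should exhibit.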
Data Security Using Elliptic Curve Cryptography (IJCERT)
Cryptography techniques are used to provide data security. In existing techniques, key generation takes place randomly and requires a shared key; if the shared key is accessed by an unauthorized user, security is compromised. The proposed system alleviates these problems to give more security to data by using an algorithm called Elliptic Curve Cryptography (ECC). ECC generates the key by using a point on the curve, and the encryption and decryption operations also take place through the curve. In the proposed system, the encryption and key-generation processes take place rapidly.
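Generating a key from a point on the curve means multiplying a generator point by a secret scalar. A toy double-and-add sketch over a textbook curve (the curve y^2 = x^3 + 2x + 2 over GF(17), generator, and key are illustrative assumptions, not the paper's parameters):

```python
P_MOD, A = 17, 2   # toy curve y^2 = x^3 + 2x + 2 over GF(17)
G = (5, 1)         # generator point on the curve

def ec_add(p, q):
    """Add two points on the curve; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Double-and-add: compute k copies of p added together."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, p)
        p = ec_add(p, p)
        k >>= 1
    return result

private_key = 7                            # secret scalar
public_key = scalar_mult(private_key, G)   # a point on the curve
print(public_key)  # (0, 6)
```

Security rests on the hardness of recovering the scalar from the resulting point (the elliptic curve discrete logarithm problem).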
“Proposed Model for Network Security Issues Using Elliptical Curve Cryptography” (IOSR Journals)
Abstract: Elliptic Curve Cryptography (ECC) plays an important role in today's public-key-based security systems. ECC is a faster and more secure method of encryption than other public-key cryptographic algorithms. This paper focuses on the performance advantages of using ECC in wireless networks; its algorithm has been implemented and analyzed for inputs of various bit lengths. The private key is known only to the sender and receiver, and hence data transmission is secure.
This document presents an improved asymmetric key encryption algorithm using MATLAB. It begins with an introduction to asymmetric key cryptography and the RSA cryptosystem. It then describes a modified RSA algorithm that uses multiple public and private keys to increase security. Next, it explains how to implement RSA using the Chinese Remainder Theorem (CRT) to reduce computation time. The document implements the original, modified, and CRT-based RSA algorithms in MATLAB and analyzes computation time versus the number of prime numbers. It concludes that the modified and CRT-based approaches provide more security than the original RSA algorithm with reduced computation time.
Advanced approach for encryption using advanced encryption standard with chao... (IJECEIAES)
At present, security is significant for individuals and organizations. All information needs security to prevent theft, leakage, and alteration, and it must be guaranteed by applying one or more cryptographic algorithms to the information. Encipherment is the method that changes plaintext into a secure form called ciphertext; it includes diverse types, such as symmetric and asymmetric encipherment. This study proposes an improved version of the advanced encryption standard (AES) algorithm called optimized advanced encryption standard (OAES). The OAES algorithm utilizes a sine map and a random number to generate a new key, enhancing the complexity of the generated key. Thereafter, a multiplication operation is performed on the original text, creating a random 4×4 matrix before the five stages of the coding cycles. A random substitution box (S-Box) is utilized instead of a fixed S-Box. Finally, an eXclusive OR (XOR) operation is applied with the value 255 and with the last generated key. This research compares the features of the AES and OAES algorithms, particularly the extent of complexity, key size, and number of rounds. The OAES algorithm can enhance the complexity of encryption and decryption by using random values, a random S-Box, and chaotic maps, making it difficult to guess the original text.
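A chaotic sine map can turn a seed into key material because tiny seed changes yield entirely different iterate sequences. A minimal sketch of sine-map key derivation (the seed, the control parameter mu, and the byte-extraction rule are illustrative assumptions; OAES additionally mixes in a random number):

```python
import math

def sine_map_key(seed: float, mu: float = 0.99, length: int = 16) -> bytes:
    """Derive `length` key bytes from sine-map iterates x_{n+1} = mu*sin(pi*x_n)."""
    x, out = seed, []
    for _ in range(length):
        x = mu * math.sin(math.pi * x)
        out.append(int(x * 10**6) % 256)  # quantize each iterate to one byte
    return bytes(out)

k1 = sine_map_key(0.300000)
k2 = sine_map_key(0.300001)  # a tiny seed change produces a very different key
print(k1.hex())
print(k2.hex())
```

In a full design the seed would itself be derived from the secret key, so the keystream is reproducible by the receiver but sensitive to any key change.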
A Novel Secure Combination Technique of Steganography and Cryptography (Zac Darcy)
A new technique is proposed that combines cryptography and steganography, enhanced with a new secure feature, to generate a new security system. Cryptography and steganography are two popular ways of securing data transmission: the former distorts a message so it cannot be understood, while the latter hides a message so it cannot be seen. For cryptography, this system uses the advanced encryption standard (AES) algorithm to encrypt the secret message; the keys are then separated, and one of them is hidden in a cover image. For steganography, part of the encrypted message is used as a key and hidden in the discrete cosine transform (DCT) of an image, which is highly secure. This kind of system can be introduced in applications such as transferring secret data that require authentication in various fields.
An Advance Approach of Image Encryption using AES, Genetic Algorithm and RSA ... (IJEACS)
In the current scenario, the entire world is moving toward digital communication for faster and better communication, but this raises a security problem when information (data or images) must be stored at an arbitrary location or transmitted over the internet. As the internet is an open transmission medium, data security becomes very important. To defend information from piracy or hacking, we use a technique known as encryption. In this paper, we use an image as the information and apply an advanced approach combining well-known encryption techniques, namely AES, a genetic algorithm, and the RSA algorithm, to encrypt it and keep it safe from hackers or intruders, making it highly difficult and time-consuming to decipher the image without the key.
Pairing Based Elliptic Curve Cryptosystem for Message Authentication (IJTET Journal)
This document summarizes a research paper on using elliptic curve cryptography for message authentication. It begins with an introduction to elliptic curve cryptography and how it can provide equivalent security to other public key encryption methods but with smaller key sizes. It then describes the proposed methodology which includes generating an ECC key pair, encrypting a message with the public key, transmitting the encrypted message, and decrypting it with the private key. The results show a message being encrypted and decrypted correctly using this ECC process. It concludes that ECC can provide an efficient method for authentication in systems like vehicular networks due to its lower computation and communication overhead compared to other encryption methods.
A New Security Level for Elliptic Curve Cryptosystem Using Cellular Automata ... (Editor IJCATR)
Elliptic curve cryptography (ECC) is an effective approach to protecting the privacy and security of information. Encryption alone provides only one level of security during transmission over the channel, so there is a need for stronger encryption that is very hard to break. To achieve better results and improve security, information has to pass through several levels of encryption. The aim of this paper is to provide two levels of security: the first level encrypts the plaintext with an ECC-based technique using a compressed block as the security key, and the second level applies a scrambling method with compression using 2-D cellular automata rules. In particular, we propose an efficient ECC-based encryption algorithm using cellular automata, termed the Elliptic Curve Cryptosystem based on Cellular Automata (ECCCA). This paper presents the implementation of ECCCA for communication over an insecure channel, and results are provided to show the encryption performance of the proposed method.
Secure E-voting System by Utilizing Homomorphic Properties of the Encryption ... (TELKOMNIKA JOURNAL)
The use of cryptography in an e-voting system to secure data is a must to ensure the authenticity of the data. In contrast to common encryption algorithms, homomorphic encryption algorithms have the unique property of supporting mathematical operations on ciphertext. This paper proposes the use of the Paillier and Okamoto-Uchiyama algorithms, two homomorphic encryption algorithms with the additive property, so that encrypted voting data can be tallied without having to be decrypted first. The main purpose is to avoid manipulation and falsification of data during the vote-tallying process, comparing the advantages and disadvantages of each algorithm.
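The additive property means that multiplying two Paillier ciphertexts yields an encryption of the sum of the votes, which is exactly what a tally needs. A minimal Paillier sketch with toy primes (insecure sizes, for illustration only; real deployments use large primes):

```python
import math, random

p, q = 11, 13                  # toy primes (insecure, illustration only)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                      # standard choice of generator
mu = pow(lam, -1, n)           # with g = n+1, L(g^lam mod n^2) = lam

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2   # randomized ciphertext

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n
    return L * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintext votes.
tally = encrypt(5) * encrypt(7) % n2
print(decrypt(tally))  # 12
```

Because encryption is randomized, two encryptions of the same vote look different, yet their product still decrypts to the correct sum.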
Presentation on Cryptography_Based on IEEE_Paper (Nithin Cv)
The document summarizes a seminar report on a hybrid cryptography architecture that uses multiple cryptographic algorithms. It discusses Elliptic Curve Cryptography (ECC), Elliptic Curve Diffie-Hellman (ECDH), Elliptic Curve Digital Signature Algorithm (ECDSA), and Dual-RSA. ECC is used to encrypt data onto an elliptic curve. ECDH generates shared secret keys between two parties for use in symmetric encryption. ECDSA allows digital signatures using elliptic curve parameters. Dual-RSA improves decryption efficiency using the Chinese Remainder Theorem to split computations between prime factors p and q. The hybrid architecture combines the strengths of these algorithms to provide secure encryption, authentication, and key exchange.
Multiple Dimensional Fault Tolerant Schemes for Crypto Stream Ciphers (IJNSA Journal)
This document proposes two fault tolerant schemes for stream ciphers based on Algorithm Based Fault Tolerance (ABFT). The first is a 2-D mesh ABFT scheme that can detect and correct any single error in an n-by-n plaintext matrix with linear computation and bandwidth overhead. It constructs matrices for the plaintext, keystream, and transmitted data with row and column checksums. The second is a 3-D mesh-knight ABFT scheme that can detect and correct up to three errors by adding an extra "knight" checksum dimension. Both schemes use only XOR operations and allow errors to be efficiently located and recovered.
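The row-and-column checksum idea can be sketched directly: XOR checksums over each row and column of a data matrix locate and repair any single byte error. This is a generic ABFT-style illustration, not the paper's exact matrix construction:

```python
from functools import reduce

def checksums(mat):
    """XOR checksum for every row and every column of a square matrix."""
    rows = [reduce(lambda a, b: a ^ b, r) for r in mat]
    cols = [reduce(lambda a, b: a ^ b, c) for c in zip(*mat)]
    return rows, cols

data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
rows, cols = checksums(data)          # computed before transmission

data[1][2] ^= 0x55                    # inject a single error "in transit"

r2, c2 = checksums(data)              # recomputed after reception
i = next(k for k in range(3) if rows[k] != r2[k])   # mismatched row
j = next(k for k in range(3) if cols[k] != c2[k])   # mismatched column
data[i][j] ^= rows[i] ^ r2[i]                       # repair via XOR difference
print((i, j), data[1][2])  # (1, 2) 6
```

Only XOR operations are needed, which matches the overhead claims: the checksum vectors grow linearly in n while the data matrix holds n² entries.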
A Secure Encryption Technique based on Advanced Hill Cipher For a Public Key ... (IOSR Journals)
This document presents a secure encryption technique based on an advanced Hill cipher for a public key cryptosystem. The technique uses an involutory matrix and permuted key to encrypt plaintext into ciphertext. It further encrypts the ciphertext through two levels of scrambling and adds tamper detection by calculating and transmitting the determinant of the ciphertext matrix. The decryption process reverses these steps to recover the original plaintext. The technique aims to make the cipher highly secure against cryptanalytic attacks by introducing multiple transformations and ensuring the integrity of the ciphertext through determinant verification.
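An involutory key matrix is one whose square is the identity mod 26, so the same matrix both encrypts and decrypts. A minimal Hill-cipher sketch with such a matrix (the matrix is illustrative; the paper's permuted key, scrambling levels, and determinant check are not reproduced):

```python
# Involutory mod 26: A @ A = I, since 3*3 + 2*9 = 27 = 1 (mod 26).
A = [[3, 2], [9, 23]]

def hill(block, key):
    """Multiply a 2-element block by the key matrix mod 26."""
    return [(key[i][0] * block[0] + key[i][1] * block[1]) % 26 for i in range(2)]

pt = [7, 4]              # "HE" as 0-25 letter codes
ct = hill(pt, A)         # encrypt
print(ct, hill(ct, A))   # applying the same matrix again decrypts
```

Using an involutory matrix avoids computing a modular matrix inverse at the receiver, which is the usual weak point of Hill-cipher implementations.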
Domain Examination of Chaos Logistics Function As A Key Generator in Cryptogr... (IJECEIAES)
The use of a logistic function as a random-number generator in a cryptographic algorithm can accommodate the diffusion property of the Shannon principle. The problem is that the initialization x0 was static and unaffected by changes in the key, so the algorithm would always generate the same random-number sequence. This study designs three schemes that provide flexibility in the input keys when examining the domain of the logistic function. The results of each scheme show no pattern that is directly or inversely proportional to the value of x0 and its relative error, and they successfully fulfill the butterfly-effect property. Thus, the generation of chaos numbers by the logistic function can be keyed to the input. In addition, the resulting random numbers are distributed evenly over the chaos range, reinforcing the algorithm when used as a key in cryptography.
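The butterfly effect the paper tests can be seen directly in the logistic map: two nearby seeds produce trajectories that diverge after a few dozen iterations. A small sketch (the parameter r = 3.99 and the seeds are illustrative choices in the chaotic regime):

```python
def logistic_sequence(x0: float, r: float = 3.99, n: int = 50) -> list:
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) from seed x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_sequence(0.400000)
b = logistic_sequence(0.400001)   # perturb the key-derived seed slightly
print(abs(a[-1] - b[-1]))          # the trajectories have diverged
```

Making x0 depend on the key, as the three proposed schemes do, is exactly what turns this sensitivity into a useful keyed generator.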
This document summarizes a research paper that proposes a new approach for complex encryption and decryption of data. The approach uses a combination of public key infrastructure and RC6 algorithm. It divides plaintext into blocks, uses one block as an encryption key, and inserts the key into the ciphertext based on a private position. Performance analysis shows the proposed approach encrypts and decrypts data faster than the AES algorithm. Security analysis indicates the approach is secure against known attacks based on correlation analysis and information entropy tests. The approach provides improved security and performance for encrypting network data.
On the Usage of Chained Codes in Cryptography (CSCJournals)
This document summarizes a research paper on using randomized chained linear codes for digital signatures. The summary is:
1) Randomized chained linear codes are proposed to address attacks on previous signature schemes that used regular chained codes. Random vectors are concatenated to the generator matrix of a chained code to create randomized chained codes.
2) A digital signature scheme is presented that uses randomized chained codes. The private key consists of the generator matrix and randomization matrices. The public key is the randomized parity check matrix. Signatures are created using the chain code decoding algorithm.
3) Security analysis shows the scheme is secure if the code length is over 1350 bits, preventing an attacker from determining the private key from the public information.
SECURED WIRELESS COMMUNICATION THROUGH SIMULATED ANNEALING GUIDED TRAINGULARI... (cscpconf)
In this paper, simulated-annealing-guided triangularized encryption using a multilayer-perceptron-generated session key (SATMLP) is proposed for secured wireless communication. Both sender and receiver stations use identical multilayer perceptrons, and depending on the final outputs of the perceptrons on both sides, the hidden-layer weight vectors are tuned at both ends. After this tuning step, both perceptrons generate identical weight vectors, which are treated as a one-time session key. In the first level of encryption, triangularized sub-key 1 is used to encrypt the plaintext. In the second level, a simulated-annealing method helps generate sub-key 2 for further encryption of the first-level triangularized encrypted text. Finally, the multilayer-perceptron-generated one-time session key is used to perform the third level of encryption on the second-level encrypted text. The recipient uses the same multilayer-perceptron-guided session key to decipher and obtain the simulated-annealing-guided intermediate ciphertext, then deciphers with sub-key 2 to recover the triangularized encrypted text, and finally uses sub-key 1 to recover the plaintext. In this proposed technique, the session key is not transmitted over a public channel, apart from the few data transfers needed for the weight-synchronization process, because after synchronization both multilayer perceptrons generate the identical weight vector that acts as the session key. Parametric tests are performed, and results are compared with some existing classical techniques in terms of the Chi-Square test and transmission response time, showing comparable results for the proposed system.
New Technique Using Multiple Symmetric keys for Multilevel Encryption (IJERA Editor)
In a world of accelerating communications, cryptography has become an essential component of modern communication systems. The emergence of the web as a reliable medium for commerce and communication has made cryptography indispensable. Many algorithms or ciphers are in use nowadays, and the quality of a cipher is judged by its ability to prevent an unrelated party from learning the original content of the encrypted message. The proposed Multilevel Encryption Model is a cryptosystem that adopts the basic principles of cryptography. It uses five (multiple) symmetric keys in floating-point numbers, the plaintext, substitution techniques, and key combinations with an unintelligible sequence to produce the ciphertext. The decryption process is likewise designed to reproduce the plaintext.
Implementation of RSA Algorithm with Chinese Remainder Theorem for Modulus N ... (CSCJournals)
Cryptography has several important aspects in supporting data security: it guarantees confidentiality, integrity, and authenticity of data. One public-key cryptosystem is RSA. The greater the size of the modulus n, the more difficult it is to factor the value of n; the flaw in the RSA algorithm is that the decryption process takes a very long time. The theorem used in this research is the Chinese Remainder Theorem (CRT). The goal is to find out how much time RSA-CRT needs, for modulus sizes of 1024 bits and 4096 bits, to perform the encryption and decryption processes, and to implement it in Java. This implementation is intended as a means of proving the tests performed, and it produces a cryptographic system named "RSA and RSA-CRT Text Security". The test results show that 1024-bit RSA-CRT decrypts approximately 3 times faster; testing the 4096-bit RSA-CRT algorithm leads to the conclusion that its decryption is likewise performed more rapidly. However, the drawback in the key-generation process for 4096-bit RSA and RSA-CRT is that more time is needed to generate the keys.
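The CRT speedup comes from replacing one full-size exponentiation with two half-size ones, mod p and mod q, then recombining. A sketch with toy primes (real use needs primes of 512+ bits each; the values here are illustrative):

```python
p, q, e = 61, 53, 17               # toy RSA parameters (insecure sizes)
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

# Precomputed CRT parameters
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def decrypt_crt(c: int) -> int:
    m_p = pow(c, dp, p)            # half-size exponentiation mod p
    m_q = pow(c, dq, q)            # half-size exponentiation mod q
    h = q_inv * (m_p - m_q) % p    # Garner's recombination step
    return m_q + h * q

m = 65
c = pow(m, e, n)                   # ordinary RSA encryption
print(decrypt_crt(c))  # 65
```

Since modular exponentiation cost grows roughly cubically with operand size, working at half size twice is about four times cheaper per exponentiation, which is consistent with the ~3x decryption speedup reported above.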
Ecc cipher processor based on knapsack algorithm (Alexander Decker)
This document describes a method for encrypting messages using Elliptic Curve Cryptography (ECC) combined with the knapsack algorithm. It begins by explaining the basics of ECC, including defining elliptic curves over a finite field and describing point addition and doubling operations. It then presents algorithms for the full encryption/decryption process. The process involves first transforming the message into points on an elliptic curve, then applying the knapsack algorithm to further encrypt the ECC-encrypted message before transmission. Decryption reverses these steps to recover the original message. The combination of ECC and knapsack encryption is presented as an innovation that provides increased security over traditional ECC alone.
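The knapsack layer alone can be sketched with a superincreasing sequence: each byte is encoded as a subset sum, and a greedy scan decodes it uniquely. In the paper this is applied on top of ECC-encrypted points; the sequence below is an illustrative assumption:

```python
# Superincreasing sequence: each weight exceeds the sum of all previous ones.
KNAP = [1, 2, 4, 9, 20, 45, 95, 200]

def knap_encrypt(byte: int) -> int:
    """Encode a byte as the subset sum selected by its bits."""
    return sum(w for i, w in enumerate(KNAP) if (byte >> i) & 1)

def knap_decrypt(total: int) -> int:
    """Greedy decode: the largest weight that fits must be in the subset."""
    byte = 0
    for i in reversed(range(8)):
        if KNAP[i] <= total:
            total -= KNAP[i]
            byte |= 1 << i
    return byte

c = knap_encrypt(0b10110101)
print(c, bin(knap_decrypt(c)))  # 270 0b10110101
```

The greedy decode is only unambiguous because the sequence is superincreasing; a hardened scheme would disguise the sequence, and here the ECC layer underneath provides the actual cryptographic strength.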
This document discusses a proposed design for secure military communications using AES encryption with Vedic mathematics, OFDM modulation, and QPSK. Specifically, it proposes using AES to encrypt data, applying Vedic math techniques to improve efficiency during the MixColumns step. The encrypted data would then be modulated using OFDM and QPSK to provide high throughput communication. Key aspects of the design include AES encryption/decryption, OFDM using QPSK and an IFFT/FFT, and applying Vedic math during AES encryption to reduce complexity and power consumption for military applications.
Square transposition: an approach to the transposition process in block cipher (journalBEEI)
The transposition process is needed in cryptography to create a diffusion effect in the data encryption standard (DES) and advanced encryption standard (AES) algorithms, which are standard information-security algorithms of the National Institute of Standards and Technology. The problem with the DES and AES algorithms is that their transposition index values form patterns rather than random values. This condition makes it easier for a cryptanalyst to look for relationships between ciphertexts, because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses an 8 × 8 square as a place to insert and retrieve 64 bits. Pairing an insertion scheme and a retrieval scheme with unequal flows is an important factor in producing a good transposition. The square transposition can generate random, pattern-free indices, so transposition can be done better than in DES and AES.
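The insert/retrieve structure can be sketched with the simplest pairing: write 64 bits into the square row by row and read them out column by column. The paper pairs less regular schemes; this shows only the mechanism, with the row/column pairing as an illustrative assumption:

```python
def transpose64(bits):
    """Insert 64 values row-wise into an 8x8 square, retrieve column-wise."""
    assert len(bits) == 64
    square = [bits[r * 8:(r + 1) * 8] for r in range(8)]
    return [square[r][c] for c in range(8) for r in range(8)]

def untranspose64(bits):
    """Inverse: refill the square column-wise, read it back row-wise."""
    square = [[None] * 8 for _ in range(8)]
    it = iter(bits)
    for c in range(8):
        for r in range(8):
            square[r][c] = next(it)
    return [b for row in square for b in row]

msg = [(i * 7) % 2 for i in range(64)]
print(untranspose64(transpose64(msg)) == msg)  # True
```

A row/column pairing like this still yields patterned indices; the paper's contribution is choosing insertion and retrieval flows whose pairing produces pattern-free index sequences.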
Multiple Encryption using ECC and Its Time Complexity Analysis (IJCERT)
Rapid growth of information technology in the present era has made secure communication, strong data-encryption techniques, and trusted third parties major topics of study. Developing robust encryption algorithms to secure sensitive data is of great significance among researchers at present. The conventional encryption methods used today may not be sufficient, and therefore new designs must be analyzed and fitted into the existing security system to protect data from unauthorized access. Designing an effective encryption/decryption algorithm to enhance data security is a challenging task where computation, complexity, and robustness are concerned. Multiple encryption is the process of applying encryption over an already-encrypted message for a number of iterations. Elliptic Curve Cryptography (ECC) is a well-known and well-accepted cryptographic algorithm used in many applications today. In this paper, we discuss multiple encryption, analyze its computational overhead, and study the feasibility of practical application. In the process we use ECC as a multiple-ECC algorithm and analyze the degree of security, encryption/decryption computation time, and complexity of the algorithm. Performance is evaluated by comparing the encryption and decryption times of single ECC and multiple ECC with the help of various examples.
This document proposes an efficient and secure cryptography technique using unimodular matrices. It aims to improve data security during transmission by encrypting messages. The proposed method encodes data into a matrix of ASCII values, then multiplies it by an encoding matrix generated from an Armstrong number. At the receiver end, the ciphertext matrix is decrypted by multiplying it by the inverse of the encoding matrix. This overcomes limitations of previous techniques by allowing messages of any length and by not relying on Armstrong numbers containing zeros. The method ensures confidentiality, access control, and non-repudiation through encryption and decryption with the same secret key.
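The unimodular-matrix idea can be sketched as follows; the 2 × 2 key below is an arbitrary determinant-1 matrix, not one derived from an Armstrong number as in the paper:

```python
# Sketch of matrix-based encryption with a unimodular key matrix
# (determinant 1, so its inverse is also an integer matrix and the
# decryption is exact). Illustrative key, not the paper's construction.

KEY = [[2, 1],
       [1, 1]]          # det = 1, hence unimodular
KEY_INV = [[1, -1],
           [-1, 2]]     # exact integer inverse of KEY

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def encrypt(msg):
    codes = [ord(ch) for ch in msg]
    codes += [0] * (-len(codes) % 2)        # pad so columns are full
    block = [codes[0::2], codes[1::2]]      # 2 x n matrix of ASCII values
    return matmul(KEY, block)

def decrypt(cipher):
    block = matmul(KEY_INV, cipher)
    codes = [block[i][j] for j in range(len(block[0])) for i in range(2)]
    return ''.join(chr(c) for c in codes if c)

assert decrypt(encrypt("HELLO")) == "HELLO"
```

Because the key is unimodular, no modular reduction or rational arithmetic is needed to invert it, which is the property the technique exploits.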
Novel Algorithm For Encryption:Hybrid of Transposition and Substitution MethodIDES Editor
This paper proposes a novel encryption algorithm that is a hybrid of transposition and substitution methods. The algorithm encrypts messages without using an external key, as the key is derived from characteristics of the original message itself. This solves the problem of securely exchanging keys. Both transposition and substitution have limitations individually, so the hybrid approach results in a more secure cipher. The encryption process involves converting characters to ASCII codes, grouping characters, and reversing the order within groups. Decryption reverses these steps to retrieve the original plaintext. The algorithm aims to provide strong security without relying on external keys.
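A minimal sketch of such a keyless hybrid might look like this; the group size and the message-derived shift rule are assumptions for illustration, not the paper's exact construction:

```python
# Keyless hybrid sketch: a substitution step shifts ASCII codes by a
# value derived from the message itself (here, its length), and a
# transposition step reverses the order within fixed-size groups.
# Group size and shift rule are illustrative assumptions.

def encrypt(msg, group=4):
    shift = len(msg) % 26                     # "key" derived from the message
    codes = [ord(ch) + shift for ch in msg]   # substitution step
    out = []
    for i in range(0, len(codes), group):     # transposition step
        out.extend(reversed(codes[i:i + group]))
    return out

def decrypt(codes, group=4):
    shift = len(codes) % 26                   # receiver re-derives the shift
    out = []
    for i in range(0, len(codes), group):     # reversing each group again
        out.extend(reversed(codes[i:i + group]))
    return ''.join(chr(c - shift) for c in out)

assert decrypt(encrypt("ATTACK AT DAWN")) == "ATTACK AT DAWN"
```

Since the shift is recomputed from the ciphertext length, no key ever needs to be exchanged, which mirrors the paper's motivation.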
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for the store owners and the product’s manufacturers for the betterment of their work process as well as product quality. An automated system is proposed in this work that operates on two datasets D1 and D2 obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW) and global vectors (GloVe), and Word2vec, respectively. Four machine learning (ML) models (support vector machine (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB)), two deep learning (DL) models (convolutional neural network (CNN) and long short-term memory (LSTM)), and a standalone bidirectional encoder representations from transformers (BERT) model are used to classify reviews as either positive or negative. The results obtained by the standard ML, DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1 with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% upon word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
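The TF-IDF weighting mentioned above can be illustrated with a small stdlib-only sketch (real pipelines would use a library such as scikit-learn's TfidfVectorizer; the smoothing-free idf formula here is one common variant):

```python
# Hand-rolled TF-IDF: term frequency within a document, scaled down by
# how many documents the term appears in across the corpus.
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for toks in tokenized for term in set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: (tf[t] / len(toks)) * math.log(n / df[t])
                        for t in tf})
    return weights

reviews = ["great product great price", "bad product broke fast"]
w = tfidf(reviews)
assert w[0]["product"] == 0.0   # appears in every review: zero idf
assert w[0]["great"] > 0        # discriminative term gets positive weight
```

Terms that occur in every review carry no discriminative signal, which is exactly why idf zeroes them out before the classifiers see the features.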
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, has been used as the substrate material; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goal of this study was to obtain a wider bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the simulation revealed that the side-lobe level at phi, at -28.8 dB, was much better than those of earlier works. Thus, it makes a solid contender for wireless technology and more robust communication.
Similar to One to many (new scheme for symmetric cryptography)
A New Security Level for Elliptic Curve Cryptosystem Using Cellular Automata ...Editor IJCATR
Elliptic curve cryptography (ECC) is an effective approach to protecting the privacy and security of information. Encryption provides only one level of security during transmission over the channel, so there is a need for stronger encryption that is very hard to break. To achieve better results and improve security, information has to pass through several levels of encryption. The aim of this paper is to provide two levels of security. The first level encrypts the plaintext with an ECC-based technique using a compressed block as the security key, and the second level applies a scrambling method with compression using 2-D cellular automata rules. In particular, we propose an efficient ECC-based encryption algorithm using cellular automata, termed the Elliptic Curve Cryptosystem based on Cellular Automata (ECCCA). This paper presents the implementation of ECCCA for communication over an insecure channel. Results are provided to show the encryption performance of the proposed method.
Secure E-voting System by Utilizing Homomorphic Properties of the Encryption ...TELKOMNIKA JOURNAL
The use of cryptography in an e-voting system to secure data is a must to ensure the authenticity of the data. In contrast to common encryption algorithms, homomorphic encryption algorithms have unique properties that allow mathematical operations to be performed on ciphertext. This paper proposes the use of the Paillier and Okamoto-Uchiyama algorithms, two homomorphic encryption algorithms with an additive property, so that the results of voting can be calculated on data that has been encrypted without having to decrypt it first. The main purpose is to avoid manipulation and falsification of data during the vote-tallying process; the advantages and disadvantages of each algorithm are compared.
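The additive homomorphism these algorithms rely on can be demonstrated with a toy Paillier implementation (tiny primes chosen for illustration only; real deployments use large moduli):

```python
# Toy Paillier cryptosystem demonstrating the additive homomorphism used
# for vote tallying: multiplying ciphertexts yields an encryption of the
# SUM of the plaintexts, so votes can be tallied without decryption.
import math
import random

p, q = 47, 59
n = p * q
n2 = n * n
g = n + 1                        # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)             # with g = n+1, L(g^lam mod n^2) = lam mod n

def L(x):
    return (x - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two encrypted "yes" votes can be tallied while still encrypted:
tally = (encrypt(1) * encrypt(1)) % n2
assert decrypt(tally) == 2
```

Note that each `encrypt(1)` produces a different ciphertext because of the random blinding factor r, yet the product still decrypts to the correct vote count.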
Presentation on Cryptography_Based on IEEE_PaperNithin Cv
The document summarizes a seminar report on a hybrid cryptography architecture that uses multiple cryptographic algorithms. It discusses Elliptic Curve Cryptography (ECC), Elliptic Curve Diffie-Hellman (ECDH), Elliptic Curve Digital Signature Algorithm (ECDSA), and Dual-RSA. ECC is used to encrypt data onto an elliptic curve. ECDH generates shared secret keys between two parties for use in symmetric encryption. ECDSA allows digital signatures using elliptic curve parameters. Dual-RSA improves decryption efficiency using the Chinese Remainder Theorem to split computations between prime factors p and q. The hybrid architecture combines the strengths of these algorithms to provide secure encryption, authentication, and key exchange.
Multiple Dimensional Fault Tolerant Schemes for Crypto Stream CiphersIJNSA Journal
This document proposes two fault tolerant schemes for stream ciphers based on Algorithm Based Fault Tolerance (ABFT). The first is a 2-D mesh ABFT scheme that can detect and correct any single error in an n-by-n plaintext matrix with linear computation and bandwidth overhead. It constructs matrices for the plaintext, keystream, and transmitted data with row and column checksums. The second is a 3-D mesh-knight ABFT scheme that can detect and correct up to three errors by adding an extra "knight" checksum dimension. Both schemes use only XOR operations and allow errors to be efficiently located and recovered.
Multiple Dimensional Fault Tolerant Schemes for Crypto Stream CiphersIJNSA Journal
To enhance the security and reliability of widely used stream ciphers, 2-D and 3-D mesh-knight Algorithm Based Fault Tolerant (ABFT) schemes for stream ciphers are developed, which can be universally applied to RC4 and other stream ciphers. Based on the ready-made arithmetic unit in stream ciphers, the proposed 2-D ABFT scheme is able to detect and correct any single error, and the 3-D mesh-knight ABFT scheme is capable of detecting and correcting up to three errors in an n²-data matrix with linear computation and bandwidth overhead. The proposed schemes provide a one-to-one mapping between data index and checksum group so that errors can be located and recovered with simple logic and operations.
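A simplified, parity-only sketch of the 2-D checksum idea: row and column XOR checksums locate a single flipped bit at the intersection of the failing row and column (the real scheme also protects keystream and transmitted-data matrices):

```python
# 2-D parity sketch of the ABFT idea: a single flipped bit in a bit
# matrix is located by the one row checksum and one column checksum
# that fail, and corrected by XOR.

def parities(m):
    row = [sum(r) % 2 for r in m]                                  # XOR of each row
    col = [sum(m[i][j] for i in range(len(m))) % 2
           for j in range(len(m[0]))]                              # XOR of each column
    return row, col

def correct_single_error(m, row_chk, col_chk):
    row, col = parities(m)
    bad_rows = [i for i in range(len(m)) if row[i] != row_chk[i]]
    bad_cols = [j for j in range(len(m[0])) if col[j] != col_chk[j]]
    if bad_rows and bad_cols:            # error sits at the intersection
        r, c = bad_rows[0], bad_cols[0]
        m[r][c] ^= 1
    return m

data = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
row_chk, col_chk = parities(data)        # checksums computed before transmission
corrupted = [row[:] for row in data]
corrupted[1][2] ^= 1                     # inject a single-bit error
assert correct_single_error(corrupted, row_chk, col_chk) == data
```

The mesh-knight extension adds a third (diagonal/knight-move) checksum dimension so multiple errors can be disambiguated; this sketch covers only the single-error 2-D case.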
A Secure Encryption Technique based on Advanced Hill Cipher For a Public Key ...IOSR Journals
This document presents a secure encryption technique based on an advanced Hill cipher for a public key cryptosystem. The technique uses an involutory matrix and permuted key to encrypt plaintext into ciphertext. It further encrypts the ciphertext through two levels of scrambling and adds tamper detection by calculating and transmitting the determinant of the ciphertext matrix. The decryption process reverses these steps to recover the original plaintext. The technique aims to make the cipher highly secure against cryptanalytic attacks by introducing multiple transformations and ensuring the integrity of the ciphertext through determinant verification.
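The involutory-matrix property at the heart of such a Hill cipher (the same key both encrypts and decrypts, since K·K ≡ I mod 26) can be sketched as follows; the 2 × 2 key is an illustrative choice, not the paper's, and the scrambling and determinant-check layers are omitted:

```python
# Hill cipher sketch with an involutory key matrix K (K @ K == I mod 26),
# so a single function serves as both encryption and decryption.

K = [[2, 1],
     [23, 24]]   # involutory mod 26: a=2, b=1, c=23, d=-a=24, a*a+b*c=27≡1

def hill(text, key):
    """Apply the Hill transform to an even-length uppercase string."""
    nums = [ord(ch) - 65 for ch in text]
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((key[0][0] * x + key[0][1] * y) % 26)
        out.append((key[1][0] * x + key[1][1] * y) % 26)
    return ''.join(chr(n + 65) for n in out)

cipher = hill("HELP", K)
assert cipher != "HELP"
assert hill(cipher, K) == "HELP"   # applying the involutory key twice restores
```

Using an involutory key avoids computing a modular matrix inverse at decryption time, which is the main practical attraction of this Hill-cipher variant.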
Domain Examination of Chaos Logistics Function As A Key Generator in Cryptogr...IJECEIAES
The use of logistic functions as a random number generator in a cryptography algorithm can accommodate the diffusion properties of the Shannon principle. The problem is that the initialization x0 was static and was not affected by changes in the key, so the algorithm would generate a random number sequence that is always the same. This study designs three schemes that provide flexibility in the input keys when examining the value of the domain of the logistic function. The results of each scheme show no pattern that is directly or inversely proportional to the value of x0 and its relative error, and successfully fulfill the properties of the butterfly effect. Thus, the generation of chaos numbers by logistic functions can be accommodated based on key inputs. In addition, the resulting random numbers are distributed evenly over the chaos range, thus reinforcing the algorithm when used as a key in cryptography.
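A minimal sketch of a logistic-map keystream, showing the butterfly-effect sensitivity to the key-derived seed x0 (the byte-mapping step is an assumption for illustration):

```python
# Logistic-map keystream sketch: x_{n+1} = r * x_n * (1 - x_n) with r = 4
# (the fully chaotic regime). A tiny change in the key-derived seed x0
# yields a completely different byte stream (butterfly effect).

def keystream(x0, length, r=4.0):
    x, out = x0, []
    for _ in range(length):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # map the chaotic value to a byte
    return out

a = keystream(0.3141592, 40)
b = keystream(0.3141593, 40)   # seed differs only in the 7th decimal place
assert a != b                  # the streams diverge within a few dozen steps
assert a == keystream(0.3141592, 40)   # but each stream is deterministic
```

The schemes in the paper address exactly the determinism shown in the last line: if x0 is not derived from the key, every encryption reuses the same stream.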
This document summarizes a research paper that proposes a new approach for complex encryption and decryption of data. The approach uses a combination of public key infrastructure and RC6 algorithm. It divides plaintext into blocks, uses one block as an encryption key, and inserts the key into the ciphertext based on a private position. Performance analysis shows the proposed approach encrypts and decrypts data faster than the AES algorithm. Security analysis indicates the approach is secure against known attacks based on correlation analysis and information entropy tests. The approach provides improved security and performance for encrypting network data.
On the Usage of Chained Codes in CryptographyCSCJournals
This document summarizes a research paper on using randomized chained linear codes for digital signatures. The summary is:
1) Randomized chained linear codes are proposed to address attacks on previous signature schemes that used regular chained codes. Random vectors are concatenated to the generator matrix of a chained code to create randomized chained codes.
2) A digital signature scheme is presented that uses randomized chained codes. The private key consists of the generator matrix and randomization matrices. The public key is the randomized parity check matrix. Signatures are created using the chain code decoding algorithm.
3) Security analysis shows the scheme is secure if the code length is over 1350 bits, preventing an attacker from determining the private key from the public information.
SECURED WIRELESS COMMUNICATION THROUGH SIMULATED ANNEALING GUIDED TRAINGULARI...cscpconf
In this paper, simulated annealing guided triangularized encryption using a multilayer perceptron generated session key (SATMLP) has been proposed for secured wireless communication. Both the sender and receiver stations use identical multilayer perceptrons, and depending on the final outputs of both perceptrons, the weight vectors of the hidden layer get tuned at both ends. After this tuning step both perceptrons generate identical weight vectors, which are considered a one-time session key. In the first level of encryption, triangularized sub-key 1 is used to encrypt the plaintext. In the second level, the simulated annealing method helps generate sub-key 2 for further encryption of the first-level triangularized encrypted text. Finally, the multilayer perceptron generated one-time session key is used to perform the third level of encryption on the second-level encrypted text. The recipient uses the same multilayer perceptron guided session key to decipher the simulated annealing guided intermediate ciphertext, then uses sub-key 2 to recover the triangularized encrypted text, and finally sub-key 1 to recover the plaintext. In this proposed technique the session key is not transmitted over the public channel, apart from the few data transfers needed for the weight simulation process, because after the synchronization process both multilayer perceptrons generate identical weight vectors that act as the session key. Parametric tests are done and results are compared in terms of the Chi-square test and transmission response time with some existing classical techniques, showing comparable results for the proposed system.
New Technique Using Multiple Symmetric keys for Multilevel EncryptionIJERA Editor
In a world of accelerating communications, cryptography has become an essential component of modern communication systems. The emergence of the web as a reliable medium for commerce and communication has made cryptography indispensable. Many algorithms or ciphers are in use nowadays; the quality of a cipher is judged by its ability to prevent an unrelated party from learning the original content of the encrypted message. The proposed "Multilevel Encryption Model" is a cryptosystem that adopts the basic principles of cryptography. It uses five symmetric keys in floating-point numbers, plaintext, substitution techniques, and key combinations with an unintelligible sequence to produce the ciphertext. The decryption process is likewise designed to reproduce the plaintext.
Implementation of RSA Algorithm with Chinese Remainder Theorem for Modulus N ...CSCJournals
Cryptography has several important aspects in supporting the security of data, guaranteeing confidentiality, integrity, and authenticity. One public-key cryptosystem is RSA. The greater the size of the modulus n, the more difficult it is to factor the value of n; a flaw of the RSA algorithm, however, is that the decryption process takes a very long time. The theorem used in this research is the Chinese Remainder Theorem (CRT). The goal is to find out how much time RSA-CRT takes with modulus sizes of 1024 bits and 4096 bits to perform the encryption and decryption process, and to implement it in Java programming. This implementation is intended as a means of proof for the tests performed and produces a cryptographic system named "RSA and RSA-CRT Text Security". The test results show that 1024-bit RSA-CRT is approximately 3 times faster at decryption. Testing the 4096-bit RSA-CRT algorithm leads to the conclusion that its decryption process is also performed more rapidly. However, a drawback of key generation in 4096-bit RSA and RSA-CRT is that more time is needed to generate the keys.
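The CRT speedup works by replacing one full-size modular exponentiation with two half-size ones, then recombining the results; a sketch with textbook-small primes:

```python
# RSA decryption via the Chinese Remainder Theorem: exponentiate mod p
# and mod q separately (half-size operands), then recombine with
# Garner's formula. Tiny textbook primes for illustration only.

p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

# CRT parameters, precomputed once at key-generation time
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def decrypt_crt(c):
    m1 = pow(c, dp, p)              # exponentiation mod p only
    m2 = pow(c, dq, q)              # exponentiation mod q only
    h = (q_inv * (m1 - m2)) % p     # Garner recombination
    return m2 + h * q

msg = 65
c = pow(msg, e, n)
assert decrypt_crt(c) == pow(c, d, n) == msg
```

Each CRT exponentiation uses operands half the bit-length of n, which is where the roughly 3x-4x decryption speedup reported for large moduli comes from; the extra CRT parameters are the key-generation cost the abstract notes.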
Ecc cipher processor based on knapsack algorithmAlexander Decker
This document describes a method for encrypting messages using Elliptic Curve Cryptography (ECC) combined with the knapsack algorithm. It begins by explaining the basics of ECC, including defining elliptic curves over a finite field and describing point addition and doubling operations. It then presents algorithms for the full encryption/decryption process. The process involves first transforming the message into points on an elliptic curve, then applying the knapsack algorithm to further encrypt the ECC-encrypted message before transmission. Decryption reverses these steps to recover the original message. The combination of ECC and knapsack encryption is presented as an innovation that provides increased security over traditional ECC alone.
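The underlying ECC point arithmetic can be sketched on a toy curve; the curve y² = x³ + 2x + 2 over GF(17) and base point (5, 1) form a common textbook example, not the paper's parameters, and the knapsack layer is omitted:

```python
# Point addition and doubling on a toy elliptic curve over a finite
# field, the core operations of the ECC step described above.

P_MOD = 17
A, B = 2, 2      # curve: y^2 = x^3 + 2x + 2 (mod 17)

def on_curve(pt):
    if pt is None:                   # None represents the point at infinity
        return True
    x, y = pt
    return (y * y - (x ** 3 + A * x + B)) % P_MOD == 0

def point_add(p1, p2):
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                  # P + (-P) = point at infinity
    if p1 == p2:                     # doubling: tangent slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                            # addition: chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

G = (5, 1)                           # generator of a small cyclic group
assert on_curve(G) and on_curve(point_add(G, G))
```

Message points produced this way would then, per the paper, be further encrypted with the knapsack algorithm before transmission.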
This document discusses a proposed design for secure military communications using AES encryption with Vedic mathematics, OFDM modulation, and QPSK. Specifically, it proposes using AES to encrypt data, applying Vedic math techniques to improve efficiency during the MixColumns step. The encrypted data would then be modulated using OFDM and QPSK to provide high throughput communication. Key aspects of the design include AES encryption/decryption, OFDM using QPSK and an IFFT/FFT, and applying Vedic math during AES encryption to reduce complexity and power consumption for military applications.
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
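The Lagrange interpolation ingredient the MIBC combines with its elliptic curve can be sketched over a small prime field; here it recovers a polynomial's constant term from shares, as in threshold schemes (the field, polynomial, and share points are illustrative, not the paper's parameters):

```python
# Lagrange interpolation over GF(P): given enough (x, f(x)) points, the
# unique polynomial through them is evaluated at x = 0 to recover the
# hidden constant term.

P = 2 ** 13 - 1   # 8191, a small Mersenne prime

def lagrange_at_zero(points):
    """Interpolate through `points` over GF(P) and evaluate at 0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P            # basis numerator at x = 0
                den = den * (xi - xj) % P        # basis denominator
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Shares of f(x) = 1234 + 5x + 7x^2; the secret is f(0) = 1234.
shares = [(x, (1234 + 5 * x + 7 * x * x) % P) for x in (1, 2, 3)]
assert lagrange_at_zero(shares) == 1234
```

Three shares suffice because the polynomial has degree 2; fewer shares leave the constant term information-theoretically undetermined, which is the property membership-authentication schemes build on.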
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into technology acceptance model (TAM) theory for IB. Prior research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB. TAM is helpful for figuring out how the elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that can justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation, with perceived risk and trust added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM into a conceptual model for IB adoption and to gather sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about the factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
Smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure for controlling the smart antenna pattern to meet specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has drawbacks such as slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust the step size. Computer simulation outcomes illustrate that the given model has a fast convergence rate as well as low steady-state mean square error.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameters based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to big data, as many Internet of Things (IoT) devices are in the network. Previous research shows that most of the frequency spectrum is used, but some bands remain unused; these are called spectrum holes. Energy detection is one of the spectrum sensing methods that has been used frequently, since it is easy to use and does not require licensed users to have any prior understanding of the signal. However, this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that wavelet-based sensing has higher precision in detection, and interference toward the primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. This attack can cause enormous losses for a company because it reduces user trust and damages the company's reputation, losing customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service, which provides safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results when dealing with imbalanced data. Based on the results obtained, DDoS attack detection using a deep learning approach on imbalanced data performs better for binary classes when implemented with the synthetic minority oversampling technique (SMOTE). On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass settings when implemented with the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often affects the characteristics of both linear and nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems suffering from mismatched uncertainties and exogenous disturbances. The perturbation to the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and stated in Proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty from the system with mismatched uncertainty. Lastly, two-dimensional numerical examples are provided to illustrate the proposed proposition. The outcome shows that the matching condition has the ability to transform the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the Doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both bit error rate (BER) and throughput performance. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The improved system shows a massive performance gain over the conventional system and significant improvements with massive MIMO, including the best throughput and BER. Compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, with around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
This study aims to obtain, from a single antenna, two different asymmetric radiation patterns: one from an antenna shaped like the cross-section of a parabolic reflector (a fan blade type antenna) and one from an antenna with cosecant-square radiation characteristics, at two different frequencies. For this purpose, a fan blade type antenna is first designed, and then its reflective surface is extended to the shape of the reflective surface of the cosecant-square antenna using a frequency selective surface designed to provide the required characteristics. The designed frequency selective surface provides transmission as close to perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter for electromagnetic waves at the 5 GHz operating frequency, where it behaves as a reflective surface. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan blade type radiation characteristic is obtained at the 4 GHz operating frequency, while a cosecant-square radiation characteristic is obtained at the 5 GHz operating frequency.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. The results obtained show a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are 0.0328/ppm and 0.9824 respectively at a wavelength of 690 nm. At the same wavelength, other performance parameters are also studied: the resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making the implementation of iron sensing simpler and more cost-effective.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM maintains robust accuracy while updating new data and new class data in an online training situation. The accuracy robustness arises from the use of householder block exact QR decomposition recursive least squares (HBQRD-RLS) in PSTOS-ELM. This method is suitable for applications that have streaming data and often encounter new class data. Our experiment compares the accuracy and accuracy robustness of PSTOS-ELM during data updating with the batch extreme learning machine (ELM) and the structural tolerance online sequential extreme learning machine (STOS-ELM), both of which must retrain on the data when new class data appear. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also being able to update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. They offer a good solution to overcome the main limitations of deformable parametric models. However, the challenge when applying these models to medical images is still removing blur at image edges, which directly affects the edge indicator function, prevents adaptive segmentation, and causes a wrong analysis of pathologies, preventing a correct diagnosis. To overcome these issues, an effective process is suggested by simultaneously modelling and solving a system of two two-dimensional partial differential equations (PDEs). The first PDE performs restoration using Euler's equation, similar to anisotropic smoothing based on a regularized Perona-Malik filter that eliminates noise while preserving edge information in accordance with the contours detected by the second equation, which segments the image based on the solutions of the first. This approach leads to a new algorithm that overcomes the drawbacks of the studied model. The proposed method produces clear segments that can be used in any application. Experiments on many medical images, in particular blurry images with high information loss, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to models presented in previous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM). We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method detects smart grid intrusions much better than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA) pavement, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
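The steps above can be sketched as a minimal byte-interleaving model. This is an illustrative sketch, assuming four equal-rate input streams and one-character time slots; real TDM systems add framing bits and clock recovery, which are omitted here.

```python
# Sketch of synchronous TDM: one time slot per input signal per frame.
# Assumes equal-length input streams and one-character slots.

def tdm_mux(streams):
    """Interleave the streams: each frame carries one slot from every input."""
    frames = []
    for slots in zip(*streams):  # one frame = one slot per input stream
        frames.extend(slots)
    return frames

def tdm_demux(composite, n_streams):
    """Recover stream i by reading every n_streams-th slot starting at offset i."""
    return [composite[i::n_streams] for i in range(n_streams)]

streams = [list("AAAA"), list("BBBB"), list("CCCC"), list("DDDD")]
composite = tdm_mux(streams)  # frames repeat cyclically: A B C D A B C D ...
assert tdm_demux(composite, 4) == streams
```

Because the receiver relies purely on slot position, the assertion above only holds when multiplexer and demultiplexer stay synchronized, which is exactly why the clock signal in step 2 matters.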
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
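The statistical variant can be sketched the same way; since slots are no longer positional, each slot must carry an address tag. This is a sketch under the assumption that per-input buffers stand in for arriving data.

```python
# Sketch of statistical (asynchronous) TDM: a slot is emitted only for inputs
# that currently hold data, so each slot carries an (address, data) pair
# instead of relying on a fixed slot position.

def stat_tdm_mux(buffers):
    frames = []
    while any(buffers):
        # one frame: a tagged slot for every input that has data right now
        frame = [(i, buf.pop(0)) for i, buf in enumerate(buffers) if buf]
        frames.append(frame)
    return frames

def stat_tdm_demux(frames, n_inputs):
    out = [[] for _ in range(n_inputs)]
    for frame in frames:
        for addr, data in frame:
            out[addr].append(data)  # route each slot by its address tag
    return out

bufs = [list("AAA"), [], list("CC"), list("D")]
frames = stat_tdm_mux([list(b) for b in bufs])
assert stat_tdm_demux(frames, 4) == bufs
```

Note how the idle input (index 1) consumes no slots at all, which is the efficiency gain over the synchronous scheme, paid for by the per-slot address overhead.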
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on a battery – primary battery: silver button cell – secondary battery: Ni-Cd battery – modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
Fuel Cells: Introduction – importance and classification of fuel cells – description, principle, components, and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cells.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
One to many (new scheme for symmetric cryptography)
1. TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 21, No. 4, August 2023, pp. 762∼770
ISSN: 1693-6930, DOI: 10.12928/TELKOMNIKA.v21i4.24789 ❒ 762
One to many (new scheme for symmetric cryptography)
Alz Danny Wowor1, Bambang Susanto2
1Department of Informatics Engineering, Faculty of Information Technology, Satya Wacana Christian University, Salatiga, Indonesia
2Department of Mathematics, Faculty of Science and Mathematics, Satya Wacana Christian University, Salatiga, Indonesia
Article Info
Article history:
Received Jun 19, 2020
Revised Oct 07, 2020
Accepted Feb 08, 2021
Keywords:
Cryptanalyst
One-to-many scheme
Symmetric cryptography
ABSTRACT
Symmetric cryptography will always produce the same ciphertext if the plaintext and the given key are the same repeatedly. This condition will make it easier for cryptanalysts to perform cryptanalysis. This research introduces a one-to-many cryptography scheme, which can produce different ciphertexts even if the input given is the same repeatedly. The one-to-many encryption scheme can produce several ciphertexts with differences of up to 50%. The avalanche effect test obtained an average of 52.20%, better than modern cryptography Blowfish by 25.46% and 6% better than advanced encryption standard (AES). One-to-many can produce different n-ciphertexts, which will certainly make it more difficult for cryptanalysts to perform cryptanalysis and require n-times longer to break than other symmetric cryptography.
This is an open access article under the CC BY-SA license.
Corresponding Author:
Alz Danny Wowor
Department of Informatics Engineering, Faculty of Information Technology, Satya Wacana Christian University
Jl. Notohamidjojo 1-10, Blotongan, Salatiga, Indonesia
Email: alzdanny.wowor@uksw.edu
1. INTRODUCTION
The need for information security has driven an extraordinary transformation in the field of cryptography. Cryptographers create algorithms to secure information by emphasizing the strength of mathematics, taking advantage of algorithmic complexity and key secrecy. The more complex the algorithm design, the better the information security. On the other hand, the processing speed of an algorithm is also taken into account. This is the focus among cryptographers: how to create an algorithm that is powerful against cryptanalysts' attacks yet has a fast processing time.
Symmetric cryptography is still an interesting and very feasible subject of study [1]. One example, the advanced encryption standard (AES), has become a popular standard and is widely used commercially. There are several studies [2]-[14] that use algorithms recommended by the National Institute of Standards and Technology (NIST) for information security, although several studies have claimed to break these algorithms [15]-[20].
Today’s advancement of computer technology has also created a challenge for the development of cryptography. The idea of quantum computation can enhance various cryptanalysis techniques because quantum mechanical system calculation will produce a much lower time complexity than the classical calculation model [21], [22]. Computational advancement certainly encourages cryptographers to review the cryptographic schemes that will be used.
There are many methods that can be applied to make an algorithm more resistant to attacks. Henriques and Vernekar [23] combined symmetric and asymmetric algorithms to secure communication between devices in an IoT system. The work in [24] proposed the use of acoustic echo cancellation (AEC) and elliptic curve cryptography (ECC) encryption techniques with a combination of symmetric and asymmetric algorithms. The work in [25] modified the shift row and mix column operations in AES so that the processing time can be faster. The work in [26] developed a cryptographic technique based on quantum key distribution (QKD). The work in [27] designed and implemented a real-time cryptographic algorithm to secure medical devices, providing a high level of security for remote health monitoring systems.
Journal homepage: http://telkomnika.uad.ac.id
Existing symmetric cryptography uses a one-to-one encryption scheme, which means that every input plaintext and key always produces the same ciphertext every time the process is carried out. In breaking a cipher, a cryptanalyst will certainly try to look for patterns in the ciphertext and analyze keys and plaintexts. Existing cryptographic algorithms are sometimes very vulnerable in certain parts to extreme testing, and as a result the patterns in the ciphertext are easier to find.
This research designs a new scheme that answers this problem: the one-to-many scheme. This scheme creates different ciphertexts even though the input plaintext and key are always the same. The changes that occur in the ciphertext take longer to analyze and certainly make it difficult for cryptanalysts to see the relationship between plaintext and ciphertext. Thus, this scheme is more secure than a one-to-one encryption scheme. The initial design of this scheme focuses more on how to obtain several balanced ciphertexts than on the complexity of the algorithm in the encryption-decryption process.
2. PROPOSED RESEARCH
2.1. One to many scheme
One-to-many is an encryption scheme that can create different ciphertexts, although using the same plaintext and key, with a decryption process that can retrieve the plaintext. Etymologically, one-to-many consists of two words: "one" is a plaintext, and "many" is a process generating a different ciphertext each time it is encrypted.

E_K(p_a) = c_{a_i}   (1)

The one-to-many scheme encryption is described in (1). The encryption E_K uses the plaintext p_a and a key as inputs, and can create ciphertexts c_{a_i}, i ∈ Z^+. All encryption processes can create different c_{a_i} although with the same input.
D_K(c_{a_i}) = p_a   (2)

In contrast, the decryption function D_K in (2) uses a key and a ciphertext as inputs. A one-to-many scheme should be able to return each different ciphertext c_{a_i} resulting from the encryption process to the plaintext. This study uses a random number generator to show that the one-to-many scheme can be applied in cryptographic processes, as shown in Figure 1.
2.2. Implementation of one-to-many scheme
The design of the one-to-many algorithm scheme is shown in Figure 1. The plaintext input (P) and key (K) are in the form of text. K is processed into a seed (x0) for the cryptographically secure pseudo random number generator (CSPRNG) chaos, which produces n random number sequences {A1, A2, ..., An}. The chose-key scheme manages and selects the compatible key for the encryption-decryption process so that it produces the appropriate plaintext.
[Figure 1 diagram: K → examination scheme → x0 → F(x0) → A_i, i ∈ {1, 2, ..., n} → chose key scheme → A*_i ⊕ P → C]
Figure 1. General scheme of one to many encryption
K = {k1, k2, ..., kn} is the symbol of the key and P = {p1, p2, ..., pn} is the symbol of the plaintext. Each ki ∈ K serves as input to the examination scheme [28] and produces x0 as the seed for the CSPRNG chaos function (F).

F(x0) = {A1, A2, ..., An}   (3)

Each Ai, i = 1, 2, ..., n is a data set of CSPRNG chaos random numbers, where A1 = {a11, a12, ..., a1k}, A2 = {a21, a22, ..., a2k}, ..., An = {an1, an2, ..., ank}. From (3), using the chosen key A*_i from the key selection scheme, the encryption process is obtained as:

E_K → P ⊕ A*_i = {c1 = p1 ⊕ a*_1r, c2 = p2 ⊕ a*_2r, ..., cn = pn ⊕ a*_nr}   (4)
In general, the following equation is obtained:

c_i = p_i ⊕ a*_ir   (5)
The decryption process is carried out by generating the same random numbers from the key input. Let K = {k1, k2, · · · , kn} be the key and C = {c1, c2, · · · , cn} the ciphertext. Each random number from the CSPRNG chaos is obtained from F(x0) = {A1, A2, · · · , An}, where x0 is derived from K. The one-to-many scheme selects A∗i as the chosen key for the decryption process.
DK → C ⊕ A∗i = {p1 = c1 ⊕ a∗1r, p2 = c2 ⊕ a∗2r, · · · , pn = cn ⊕ a∗nr} (6)
So the plaintext is recovered by the decryption process:
pi = ci ⊕ a∗ir (7)
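Equations (4)-(7) reduce to a byte-wise XOR between the text and the chosen random set. A minimal sketch (assuming byte-valued text and reducing each chaotic integer mod 256, a detail the paper does not spell out):

```python
def xor_apply(data: bytes, key_stream) -> bytes:
    # c_i = p_i XOR a*_ir for encryption; the very same call inverts it,
    # since XOR is an involution: p_i = c_i XOR a*_ir (Eqs. (5) and (7)).
    return bytes(d ^ (a % 256) for d, a in zip(data, key_stream))

# One chosen chaotic key set A*_i (values illustrative, Figure 2(a)-style output)
stream = [99, 359, 921, 289, 821, 585, 970, 100, 285, 919, 934, 112]
ciphertext = xor_apply(b"satya wacana", stream)
recovered = xor_apply(ciphertext, stream)   # decryption recovers the plaintext
```

Because XOR is its own inverse, a single routine serves both (5) and (7); only the choice of A∗i differs between encryptions.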
2.3. Key generation
The key generation starts from the examination process [28], with each key input (ki) and index constant (bi), where ki, bi ∈ Z256 for i = 1, 2, · · · , 8, used to find the ratio x0 = pa/qa, where pa and qa are obtained from (8) and (9).
pa = (k1 + k2 + k3 + k4 + k5 + k6)/6 (8)
qa = k1b1 + k2b2 + k3b3 + k4b4 + k5b5 + k6b6 + k7b7 + k8b8 (9)
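As a sketch, (8) and (9) can be reproduced directly from the key bytes. Assuming ASCII byte values for the characters (the paper does not state the encoding), this yields exactly the x0 = 0.0256580696623239 reported in section 3.1 for the key “fti uksw”:

```python
def seed_from_key(key: str, b=(1, 2, 3, 4, 5, 6, 7, 8)) -> float:
    k = [ord(ch) for ch in key]          # k_i in Z_256, assuming ASCII bytes
    assert len(k) == 8, "the examination scheme uses 8 key bytes"
    pa = sum(k[:6]) / 6                  # Eq. (8): mean of the first six bytes
    qa = sum(ki * bi for ki, bi in zip(k, b))  # Eq. (9): index-weighted sum
    return pa / qa                       # x0 = pa / qa

x0 = seed_from_key("fti uksw")           # ~0.0256580696623239
```

The same function gives x0 = 0.0256818986893376 for “ftj uksw”, matching the second test in section 3.1.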
The value of x0 is used as the seed in (10), which generates a series of CSPRNG chaos-based numbers. This process corresponds to the function F(x0) shown in Figure 1. The random numbers will be used as keys in the encryption or decryption process.
xn+1 = txn(1 − xn) (10)
Each chaotic number is a decimal in (0, 1) and cannot be used directly as a key, so it is converted to an integer using the truncation function in (11).
T(x, size) = ∥x × 10^size∥, x ≠ 0 (11)
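A sketch of (10) and (11) together, assuming (as the data in Figure 2(a) suggests) that each iterate's first fifteen decimal digits are cut into five three-digit integers:

```python
def logistic_iterates(x0: float, n: int, t: float = 4.0):
    xs, x = [], x0
    for _ in range(n):
        x = t * x * (1 - x)      # Eq. (10): x_{n+1} = t * x_n * (1 - x_n)
        xs.append(x)
    return xs

def truncate(x: float, size: int = 3, offset: int = 0) -> int:
    # Eq. (11)-style truncation: lift `size` decimal digits of x to an integer.
    # The exact digit grouping is an assumption based on Figure 2(a).
    digits = f"{x:.15f}".split(".")[1]
    return int(digits[offset:offset + size])

xs = logistic_iterates(96.5 / 3761, 7)   # seed for the key "fti uksw"
# Five key sets per Figure 2(a): set-(j+1) collects digits 3j..3j+2 per iterate
sets = [[truncate(x, 3, 3 * j) for x in xs] for j in range(5)]
```

With this seed the first iterate is 0.099998932494109, reproducing row 1 of Figure 2(a).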
The chose-key scheme is the most important process: here the algorithm checks key availability and sets which key set will be selected as the chosen key A∗i for the encryption process in (4) or the decryption in (6). The designed algorithm can produce several different and balanced key sets {A1, A2, · · · , An}, as shown earlier in (3). After selecting the set to be used as the key, the algorithm attaches a marker to the ciphertext, so that during decryption the algorithm can recognize the marker and refer to the key set that was used.
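The marker mechanism is not specified in detail. One plausible sketch, consistent with the final ciphertext bytes 0x20-0x24 visible in Table 1, appends a set-index byte that decryption reads back; the 0x20 + i encoding and the random set choice are assumptions:

```python
import secrets

def encrypt(plaintext: bytes, key_sets) -> bytes:
    i = secrets.randbelow(len(key_sets))          # chose-key scheme: pick A*_i
    body = bytes(p ^ (a % 256) for p, a in zip(plaintext, key_sets[i]))
    return body + bytes([0x20 + i])               # marker byte (assumed encoding)

def decrypt(ciphertext: bytes, key_sets) -> bytes:
    i = ciphertext[-1] - 0x20                     # recover which set was used
    return bytes(c ^ (a % 256) for c, a in zip(ciphertext[:-1], key_sets[i]))

key_sets = [[99, 359, 921, 289, 821, 585, 970, 100, 285, 919, 934, 112],
            [998, 996, 596, 26, 961, 364, 851, 89, 904, 885, 439, 792]]
ct = encrypt(b"satya wacana", key_sets)           # may differ from run to run
```

Repeated calls with the same plaintext and key sets can return different ciphertexts, which is the one-to-many property the scheme aims at.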
3. RESULT AND ANALYSIS
3.1. Test of the key changes
The key change test of the one-to-many scheme is carried out to see how well the algorithm changes the ciphertext given a small (1-bit) change in the key. The plaintext “satya wacana” and the two keys “fti uksw” and “ftj uksw” are selected, where the change from the letter i to j is a 1-bit difference. The test also uses the correlation statistic and the mean difference to measure the difference between the produced ciphertexts.
TELKOMNIKA Telecommun Comput El Control, Vol. 21, No. 4, August 2023: 762–770
i xi set-1 set-2 set-3 set-4 set-5
1 0.099998932494109 099 998 932 494 109
2 0.359996583976590 359 996 583 976 590
3 0.921596174007104 921 596 174 007 104
4 0.289026664250287 289 026 664 250 287
5 0.821961006410555 821 961 006 410 555
6 0.585364441404410 585 364 441 404 410
7 0.970851648574852 970 851 648 574 852
Figure 2. Logistic function iteration with “fti uksw” key: (a) data of the first 7 iterations and (b) the first 200 iterations
The first test uses the key “fti uksw” with the index values B = {1, 2, · · · , 8}. Based on (8) and (9), x0 = 0.0256580696623239 is obtained. To generate the CSPRNG chaos random values, t = 4 is used in (10). The first 7 iteration values are shown in Figure 2(a) and the first 200 iterations in Figure 2(b). In cryptography, keys must be integers; therefore, the decimal digits of each iterate are split into five three-digit integers, set-1 through set-5. Each set-i is an Ai in (3), which is used as a key for the encryption-decryption processes in (5) and (7).
The scatter plot in Figure 2(b) visualizes the first 200 iterations and shows that the logistic function with the key input “fti uksw” generates random-looking numbers. From the table in Figure 2(a), there are five datasets that can be used as keys; using (3), we obtain A1 = set-1, A2 = set-2, · · · , A5 = set-5.
Table 1. Key test “fti uksw”
i-th encryption Ciphertext-i Correlation value Correlation category Mean difference
1 D6C80D9A966941D2F422035C0720232E20 0.393295 Fair 62.06
2 5945C893228CCA237235F29822B3B46C21 0.086143 Very low 68.69
3 17A8221167D9FFE5653E3A00D8F456B922 −0.515177 Fair 99.81
4 61317B73FBB4B5EAD47BFB9FAF479B2523 0.232599 Low 77.00
5 E0AFDC998EB4CFCA719639F391581FC824 0.343136 Low 82.06
Based on (4), the key selection process can be carried out; the selection scheme chooses the key used in each encryption. Table 1 shows the results of five encryptions using the key “fti uksw”. These results indicate that the one-to-many scheme produces a different ciphertext in every encryption process. The difference between each ciphertext and the plaintext can be seen from the correlation values: two ciphertexts correlate with the plaintext in the fair category, two in the low category, and one in the very low category.
The mean difference is the average absolute difference between plaintext and ciphertext; the greater the distance between them, the better. The results in Table 1 show a different mean difference value for each ciphertext. This confirms that the one-to-many scheme creates a different ciphertext at each encryption, so cryptanalysts will need more work to break this algorithm.
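Both metrics are standard and easy to reproduce. The sketch below pairs plaintext and ciphertext bytes directly; since the paper's exact byte pairing and padding are not stated, Table 1's values are not claimed to be reproduced verbatim:

```python
def pearson(xs, ys):
    # Pearson correlation between two equal-length numeric sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mean_difference(xs, ys):
    # Average absolute plaintext-ciphertext distance, as used in Table 1
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

pt = list(b"satya wacana")
ct = list(bytes.fromhex("D6C80D9A966941D2F422035C"))  # Ciphertext-1 prefix
r = pearson(pt, ct)      # |r| near 0 means a weak statistical relation
```

A |r| close to 0 and a large mean difference are the two indicators the paper reads off Table 1.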
Figure 3. Comparison plot of plaintext and ciphertext (ASCII equivalent value per character index) for key “fti uksw”
Visualization of each ciphertext of the key “fti uksw” in Cartesian coordinates in Figure 3 shows different number sequences that do not form a pattern enabling direct guessing. Each ciphertext obtained is different although the input is the same. This result indicates that the one-to-many scheme makes plaintext and ciphertext that are not strongly related statistically.
i xi set-1 set-2 set-3 set-4 set-5
1 0.100089355076193 100 089 355 076 193
2 0.360285904306499 360 285 904 306 499
3 0.921919885858189 921 919 885 858 189
4 0.287934439669651 287 934 439 669 651
5 0.820112792487100 820 112 792 487 100
6 0.590111200344443 590 111 200 344 443
7 0.967519886289935 967 519 886 289 935
Figure 4. Logistic function iteration with “ftj uksw” key: (a) data of the first 7 iterations and (b) the first 200 iterations
The second test uses the key “ftj uksw” with the same index values B = {1, 2, · · · , 8}, giving x0 = 0.0256818986893376. Taking t = 4, the first 7 iteration values are shown in Figure 4(a) and the first 200 iterations in Figure 4(b). Five datasets A1 = set-1, A2 = set-2, · · · , A5 = set-5 are used as keys in the encryption-decryption processes.
The iteration results of the logistic function with the “ftj uksw” key are very different from those with “fti uksw”, as shown in Figure 2(b) and Figure 4(b). This indicates that the examination algorithm produces a unique initialization value x0 and is very sensitive to small (1-bit) changes in the input. The butterfly effect, in which a small change in input has a large effect on output, also strongly affects the encryption results, as can be seen from the differences between the ciphertexts in Table 1 and Table 2.
Table 2. Key test “ftj uksw”
i-th encryption Ciphertext-i Correlation value Correlation category Mean difference
1 D7C90D98956E3EDE1A3AA739C787E0FD20 −0.39498 Fair 96.38
2 CC7E0B1FD18F7E1DBAF874AB3EC9CC1F21 −0.06211 Very low 81.31
3 D6E9E93079E8EDD0EE9C26BCAB49093222 0.342195 Low 91.25
4 BF93CE1648789816ADBBB0B59A89119323 0.182852 Very low 75.44
5 34533007CBCC284C4A65C235F807F6DC24 −0.58524 Fair 89.81
Table 2 shows the test results using the key “ftj uksw”, indicating that the one-to-many scheme produces five different ciphertexts, one per encryption process. Statistically, the resulting correlation values are close to 0, in the fair, low, and very low categories, so plaintext and ciphertext have no strong statistical relationship. The visualization in Figure 5 also shows that each produced ciphertext is different although the input is the same. The 1-bit key difference test shows that the one-to-many scheme produces different ciphertexts even when the given keys differ by only one bit. The plots show ciphertexts with different number sequences that do not form a pattern enabling direct guessing; as a result, cryptanalysts will need more work to break this algorithm.
Figure 5. Comparison plot of plaintext and ciphertext for key “ftj uksw”
3.2. Bit differences on each ciphertext
The next test examines the bit difference between the ciphertexts: the bigger the bit difference between them, the better. The one-to-many scheme produces five ciphertexts, so the number of pairwise comparisons is C(5, 2) = 10. Table 3 shows the comparison of the ciphertexts for each key, with m, n the indices of the compared ciphertexts, m, n ∈ {1, 2, 3, 4, 5}, m ≠ n. The symbol “><” denotes the comparison of two ciphertexts.
Table 3. Bit differences on each ciphertext
No Ciphertext m >< Ciphertext n “fti uksw” “ftj uksw”
1 1 >< 2 63 72
2 1 >< 3 69 60
3 1 >< 4 72 66
4 1 >< 5 64 61
5 2 >< 3 66 62
6 2 >< 4 65 64
7 2 >< 5 65 67
8 3 >< 4 69 54
9 3 >< 5 65 73
10 4 >< 5 66 67
The average ciphertext bit change for the key “fti uksw” is 66.4 bits, or 51.87%; for the key “ftj uksw” the average is 64.6 bits (50.47%). The minimum over all data is 54 bits (42.19%), the maximum is 73 bits (57.03%), and overall the mean ciphertext difference is about 50%. These results indicate that the one-to-many scheme produces different and balanced ciphertexts.
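The pairwise bit differences are Hamming distances over the ten ciphertext pairs. A sketch over the “fti uksw” ciphertexts of Table 1; whether the paper excludes the final marker byte when counting over 128 bits is an assumption, so exact agreement with Table 3 is not claimed:

```python
from itertools import combinations

def bit_diff(a: bytes, b: bytes) -> int:
    # Hamming distance in bits between two equal-length byte strings
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# The five "fti uksw" ciphertexts of Table 1 (128-bit body + 1 marker byte)
cts = [bytes.fromhex(h) for h in (
    "D6C80D9A966941D2F422035C0720232E20",
    "5945C893228CCA237235F29822B3B46C21",
    "17A8221167D9FFE5653E3A00D8F456B922",
    "61317B73FBB4B5EAD47BFB9FAF479B2523",
    "E0AFDC998EB4CFCA719639F391581FC824")]
# C(5,2) = 10 pairwise comparisons over the 128-bit bodies, as in Table 3
diffs = [bit_diff(a[:16], b[:16]) for a, b in combinations(cts, 2)]
```

Values near 64 of 128 bits (50%) indicate well-separated, balanced ciphertexts.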
3.3. Differences test plaintext-ciphertext
This test is done to see whether a 1-bit change makes a difference in the overall produced ciphertext. The greater the distance between plaintext and ciphertext, the better the designed algorithm; conversely, the smaller the difference between the plaintext and the produced ciphertext, the weaker the designed algorithm. The absolute differences between plaintext and ciphertext for the keys “fti uksw” and “ftj uksw” are shown in Table 4.
Table 4. Absolute ciphertext-plaintext differences by key
Char-i “fti uksw”: Cp 1 Cp 2 Cp 3 Cp 4 Cp 5 “ftj uksw”: Cp 1 Cp 2 Cp 3 Cp 4 Cp 5
1 99 26 92 18 109 100 89 99 76 63
2 103 28 71 48 78 104 29 136 50 14
3 103 84 82 7 104 103 105 117 90 68
4 33 26 104 6 32 31 90 73 99 114
5 53 63 6 154 45 52 112 24 25 106
6 73 108 185 148 148 78 111 200 88 172
7 54 83 136 62 88 57 7 118 33 79
8 113 62 132 137 105 125 68 111 75 21
9 145 15 2 113 14 73 87 139 74 25
10 63 44 35 26 53 39 151 59 90 4
11 107 132 52 141 53 57 6 72 66 84
12 5 55 97 62 146 40 74 91 84 44
13 25 2 184 143 113 167 30 139 122 216
14 0 147 212 39 56 103 169 41 105 25
15 3 148 54 123 1 192 172 23 15 214
16 14 76 153 5 168 221 1 18 115 188
Both halves of the table show that the plaintext-ciphertext distances produced by the algorithm design are quite good. A 1-bit difference in the key does not merely shift each ciphertext distance by 1 bit; the results are varied and well spread. However, further testing is needed to ensure that the spread of distances is adequate; this is noted for a future study testing the plaintext-ciphertext difference with more key variations.
3.4. Avalanche effect test
The one-to-many scheme is compared with several other ciphers by observing the avalanche effect value during encryption. The study in [29] tested several classical and modern cryptographic algorithms; this research continues that study with tests on AES and the one-to-many scheme. Table 5 shows the avalanche effect results for five encryption processes using the same key and plaintext. The one-to-many scheme produces a different avalanche effect value at each encryption, unlike the other algorithms, which always produce the same value.
Table 5. Avalanche effect test results
No Algorithm Avalanche effect (%): Test-1 Test-2 Test-3 Test-4 Test-5
1 Caesar cipher 01.56 01.56 01.56 01.56 01.56
2 Playfair cipher 06.25 06.25 06.25 06.25 06.25
3 Vigenere cipher 03.13 03.13 03.13 03.13 03.13
4 One-to-many 54.29 47.79 49.26 51.47 52.94
5 Flexible S-box 49.75 49.75 49.75 49.75 49.75
6 DES 55.68 55.68 55.68 55.68 55.68
7 AES 45.58 45.58 45.58 45.58 45.58
The data encryption standard (DES) algorithm has the highest avalanche effect value and the Caesar cipher the lowest. These results are reasonable because both DES and AES use the full set of block cipher principles in their construction, while the Caesar cipher only applies a simple substitution to each input character. The one-to-many scheme also does not use the block cipher principle; the randomness of the keys generated by the logistic function is the determining factor in the encryption-decryption processes. On average, the one-to-many scheme produces an avalanche effect of 52.20%. Although still lower than DES, this is better than the modern Blowfish cipher by 25.46% and better than AES by 6%, both of which are still widely used as information security algorithms.
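The avalanche effect is simply the percentage of output bits that flip when the input changes by one bit. A sketch, illustrated with the first published ciphertexts of the two 1-bit-apart keys (the pairing is illustrative, not the paper's exact test procedure):

```python
def avalanche_percent(c1: bytes, c2: bytes) -> float:
    # Share of output bits that differ between two encryptions whose
    # inputs differ by a single bit
    total = 8 * len(c1)
    flipped = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return 100.0 * flipped / total

c_fti = bytes.fromhex("D6C80D9A966941D2F422035C0720232E")
c_ftj = bytes.fromhex("D7C90D98956E3EDE1A3AA739C787E0FD")
ae = avalanche_percent(c_fti, c_ftj)
```

An ideal cipher flips about half of the output bits, i.e. a value near 50%.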
3.5. Time requirement of cryptanalysis
Suppose an algorithm uses a key block length of 128 bits; the time needed for cryptanalysis is then on the order of 2^128 trials. If a computer can crack 10^6 keys in one second, the time needed to crack the key is about 5.4 × 10^24 years. As a first step, the one-to-many scheme also uses 128 bits, while its algorithm can produce several ciphertexts. In general, if the one-to-many scheme produces n different ciphertexts, the ciphertext search process requires n × 5.4 × 10^24 years. This certainly complicates cryptanalysis: although performing it remains possible, it takes n times longer than cracking ordinary symmetric cryptography.
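The 5.4 × 10^24-year figure follows from searching half of the 2^128 keyspace on average at 10^6 keys per second; a quick arithmetic check:

```python
keys_per_second = 10**6
seconds_per_year = 60 * 60 * 24 * 365
# An average exhaustive search covers half of the 2^128 keys
years = (2**128 / 2) / keys_per_second / seconds_per_year
# With n distinct ciphertexts the expected work scales to n * years
n = 5
total_years = n * years
```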
4. CONCLUSION
The one-to-many encryption scheme produces five ciphertexts with different and balanced visualizations. The plaintext-ciphertext correlations are close to zero, illustrating that the algorithm makes plaintext that is not statistically related to the ciphertext. One plaintext input produces several ciphertexts, each differing from the others in more than 50% of its bits. The avalanche effect test gives an average of 52.20%, better than the modern Blowfish cipher by 25.46% and than AES by 6%, both of which are still widely used as information security algorithms. This makes it difficult for cryptanalysts to find the relationship between plaintext and ciphertext directly by trial and error. The many different ciphertexts produced by the one-to-many scheme make cryptanalysis much harder: it can still be attempted, but it requires a longer process and more time than for ordinarily broken algorithms. With n ciphertexts, finding the plaintext takes n times the process and time.
ACKNOWLEDGEMENT
The authors are grateful to Satya Wacana Christian University Research and Community Service Center
(BP3M) for the research funding support through the Internal Fundamental Research scheme in the 2017/2018
fiscal year.
REFERENCES
[1] A. Biryukov, J. Daemen, S. Lucks, and S. Vaudenay, “Topics and Research Directions for Symmetric Cryptography,” Early Symmetric
Crypto Workshop, 2017. [Online]. Available: https://orbilu.uni.lu/handle/10993/30953
[2] M. Yang, B. Xiao, and Q. Meng, “New AES Dual Ciphers Based on Rotation of Columns,” Wuhan University Journal of Natural
Sciences, vol. 24, pp. 93-97, 2019, doi: 10.1007/s11859-019-1373-y.
[3] A. Arab, M. J. Rostami, and B. Ghavami, “An image encryption method based on chaos system and AES Algorithm,” The Journal of
Supercomputing, vol. 75, pp. 6663-6682, 2019, doi: 10.1007/s11227-019-02878-7.
[4] A. A. Thinn and M. M. S. Thwin, “Modification of AES Algorithm by Using Second Key and Modified SubBytes Operation for Text
Encryption,” Computational Science and Technology part of Lecture Notes in Electrical Engineering, 2018, vol. 481, pp. 435-444, doi:
10.1007/978-981-13-2622-6 42.
[5] C. R. Dongarsane, D. Maheshkumar, and S. V. Sankpal , “Performance Analysis of AES Implementation on a Wireless Sensor Network,”
Techno-Societal , 2019, pp. 87-93, doi: 10.1007/978-3-030-16848-3 9.
[6] C. Ashokkumar, B. Roy, M. B. S. Venkatesh, and B. L. Menezes, “S-Box Implementation of AES Is Not Side Channel Resistant,”
Journal of Hardware and Systems Security, 2019. [Online]. Available: https://eprint.iacr.org/2018/1002.pdf
[7] T. Manojkumar, P. Karthigaikumar, and V. Ramachandran, “An Optimized S-Box Circuit for High Speed AES Design with Enhanced
PPRM Architecture to Secure Mammographic Images,” Journal of Medical Systems, vol. 43, no. 31, 2019, doi: 10.1007/s10916-018-
1145-9.
[8] S. D. Putra, M. Yudhiprawira, S. Sutikno, Y. Kurniawan, and A. S. Ahmad, “Power Analysis Attack Against Encryption Devices:
A Comprehensive Analysis of AES, DES, and BC3,” TELKOMNIKA, vol. 17, no. 3, pp. 2182-1289, 2019, doi: 10.12928/telkom-
nika.v17i3.9384.
[9] C. A. Sari, G. A. D. R. I. M. Setiadi, and E. H. Rachmawanto, “An improved security and message capacity using AES and Huffman
Coding on Image Steganography,” TELKOMNIKA, vol. 17, no. 5, pp. 2400-2409, 2019, doi: 10.12928/telkomnika.v17i5.9570.
[10] G. C. Prasetyadi, R. Refianti, and A. B. Mutiara, “File Encryption and Hiding Application Based on AES and Append Insertion Steganog-
raphy,” TELKOMNIKA, vol. 16, no. 1, pp. 361-367, 2018, doi: 10.12928/telkomnika.v16i1.6409.
[11] R. Srividya and B. Ramesh, “Implementation of AES using Biometric,” International Journal of Electrical and Computer Engineering
(IJECE), vol. 9, no. 5, pp. 4266-4276, 2019, doi: 10.11591/ijece.v9i5.pp4266-4276.
[12] J. M. Espalmado and E. Arboleda, “DARE Algorithm: A New Security Protocol by Integration of Different Cryptographic
Techniques,” International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 2, pp. 1032–1041, 2017, doi:
10.11591/ijece.v7i2.pp1032-1041.
[13] E. R. Arboleda, J. L. Balaba, and J. C. L. Espineli, “Chaotic Rivest-Shamir-Adlerman Algorithm with Data Encryption Standard Schedul-
ing,” Bulletin of Electrical Engineering and Informatics, vol. 6, no. 3, pp. 219–227, 2017, doi: 10.11591/eei.v6i3.627.
[14] S. D. Putra, A. S. Ahmad, S. Sutikno, Y. Kurniawan, and A. D. W. Sumari, “Revealing AES Encryption Device Key on 328P Micro-
controllers with Differential Power Analysis,” International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 6, pp.
5144-5152, 2018, doi: 10.11591/ijece.v8i6.pp5144-5152.
[15] X. Bonnetain, M. Naya-Plasencia, and A. Schrottenloher, “Quantum Security Analysis of AES,” IACR Transactions on Symmetric
Cryptology, vol. 2019, pp 55-93, 2019, doi: 10.13154/tosc.v2019.i2.55-93.
[16] D. Gérault, P. Lafourcade, M. Minier, and C. Solnon, “Revisiting AES Related-Key Differential Attacks with Constraint Programming,”
Cryptology, vol. 139, 2017. [Online]. Available: https://eprint.iacr.org/2017/139.pdf
[17] A. Biryukov and D. Khovratovich , “Related-Key Cryptanalysis of the Full AES-192 and AES-256,” Advances in Cryptology - ASI-
ACRYPT, 2009, vol. 5912 , pp 1-18, doi: 10.1007/978-3-642-10366-7 1.
[18] J. Kim, S. Hong, and B. Preneel, “Related-key rectangle attacks on reduced AES-192 and AES-256,” International Conference on the
Theory and Application of Cryptology and Information Security, 2007, vol. 4593, pp. 225–241, doi: 10.1007/978-3-540-74619-5 15.
[19] S. Lucks, “Ciphers secure against related-key attacks,” International Workshop on Fast Software Encryption, 2004, vol. 3017, pp.
359–370, doi: 10.1007/978-3-540-25937-4 23.
[20] D. Deutsch and R. Jozsa, “Rapid Solution of Problems by Quantum Computation,” Proceedings of The Royal Society London A, Mathematical, Physical & Engineering Sciences, 1992, vol. 439, pp. 553–558, doi: 10.1098/rspa.1992.0167.
[21] R. P. Feynman, “Simulating Physics with Computers,” International Journal of Theoretical Physics, vol. 21, pp. 467–488, 1982, doi:
10.1007/BF02650179.
[22] R. A. Perlner and D. A. Cooper, “Quantum resistant public key cryptography: a survey,” IDtrust ’09: Proceedings of the 8th Symposium
on Identity and Trust on the Internet, 2009, doi: 10.1145/1527017.1527028.
[23] M. S. Henriques and N. K. Vernekar, “Using Symmetric and Asymmetric Cryptography to Secure Communication Be-
tween Devices in IOT,” International Conference on IoT and Applications (ICIOT), Nagapattinam, India, 2017, pp. 1-4, doi:
10.1109/ICIOTA.2017.8073643.
[24] A. Hafsa, A. Sghaier, M. Zeghid, J. Malek, and M. Machhout, “An improved co-designed AES-ECC cryptosystem for se-
cure data transmission,” International Journal of Information and Computer Security, vol. 13, no. 1, pp. 118-140, 2020, doi:
10.1504/IJICS.2020.10028859.
[25] R. Riyaldhi, Rojali, and A. Kurniawan, “Improvement of Advanced Encryption Standard Algorithm with Shift Row and S.Box Modifica-
tion Mapping in Mix Coloumn,” Discovery and innovation of computer science technology in artificial intelligence era: The 2nd Interna-
tional Conference on Computer Science and Computational Intelligence, 2017, vol. 116, pp. 401-407, doi: 10.1016/j.procs.2017.10.079.
[26] G. Ribordy, J.-D. Gautier, N. Gisin, O. Guinnard, and H. Zbinden, “Fast and User-Friendly Quantum Key Distribution,” Journal of
Modern Optics, vol. 47, pp. 517-531, 2000, doi: 10.1080/09500340008244057.
[27] M. Aledhari, A. Marhoon, A. Hamad, and F. Saeed, “A New Cryptography Algorithm to Protect Cloud-based Healthcare Services,” Pro-
ceedings of the Second IEEE/ACM Interntional Confrence on Connected Health: Applications, Systems and Engeneering Technologies,
Philadelphia, PA, USA, 2017, pp. 37-43, doi: 10.1109/CHASE.2017.57.
[28] A. D. Wowor and V. B. Liwandouw, “Domain Examination of Chaos Logistics Function As A Key Generator in Cryptography,” Inter-
national Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 6, pp. 4577–4583, 2018, doi: 10.11591/ijece.v8i6.pp4577-
4583.
[29] S. Ramanujan and M. Karuppiah, “Designing an Algorithm with High Avalanche Effect”, International Journal of Computer Science
and Network Security, vol. 11, no. 1, pp. 106-111, 2011. [Online]. Available: http://paper.ijcsns.org/07 book/201101/20110116.pdf
BIOGRAPHIES OF AUTHORS
Alz Danny Wowor is currently a lecturer at the Faculty of Information Technology, Satya Wacana Christian University in Salatiga, Indonesia. He received his bachelor's and master's degrees in mathematics and informatics from Satya Wacana Christian University in 2005 and 2011, respectively. His research interests are in the fields of primitive cryptography, symmetric cryptography (block ciphers), and pseudorandomness. He can be contacted at email: alzdanny.wowor@uksw.edu.
Bambang Susanto is currently a lecturer at the Department of Mathematics, Faculty of Science and Mathematics, Satya Wacana Christian University in Salatiga, Indonesia. He obtained his bachelor's degree in mathematics from Diponegoro University (Indonesia) in 1988, and received his master's and doctoral degrees in mathematics from Bandung Institute of Technology in 1992 and 2005, respectively. His research interests are in the fields of data mining, big data analysis, business intelligence, risk analysis, primitive cryptography, and symmetric cryptography. He can be contacted at email: bambang.susanto@uksw.edu.