The document compares the use of Tabu Search and Simulated Annealing algorithms for cryptanalysis of the Simplified Data Encryption Standard (S-DES) cipher. It provides background on S-DES encryption and describes how Tabu Search and Simulated Annealing work for solving combinatorial optimization problems like cryptanalysis. The paper analyzes the performance of each method on cryptanalyzing S-DES and finds that Tabu Search performs better than Simulated Annealing for this NP-Hard problem.
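Both metaheuristics share the same loop: perturb a candidate key, score the trial decryption, and decide whether to keep the move. A minimal simulated annealing sketch (illustrative only; the 10-bit key length, the cooling schedule, and the toy bit-agreement fitness standing in for the paper's text-statistics fitness are all assumptions, not the paper's setup):

```python
import math
import random

def anneal(fitness, n_bits=10, steps=2000, t0=2.0, cooling=0.995, seed=42):
    """Simulated annealing over n-bit keys, maximizing `fitness`."""
    rng = random.Random(seed)
    key = [rng.randint(0, 1) for _ in range(n_bits)]
    f = fitness(key)
    best, best_f = key[:], f
    t = t0
    for _ in range(steps):
        cand = key[:]
        cand[rng.randrange(n_bits)] ^= 1        # flip one random key bit
        cf = fitness(cand)
        # always accept uphill moves; accept downhill with Boltzmann probability
        if cf >= f or rng.random() < math.exp((cf - f) / t):
            key, f = cand, cf
            if f > best_f:
                best, best_f = key[:], f
        t *= cooling                             # geometric cooling schedule
    return best, best_f

# toy fitness: bit agreement with a hidden 10-bit key
# (stands in for comparing decryption statistics against English text)
hidden = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best, best_f = anneal(lambda k: sum(a == b for a, b in zip(k, hidden)))
```

Tabu Search replaces the temperature-based acceptance with a memory of recently visited moves that are temporarily forbidden, which is the design difference the comparison turns on.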
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT (cscpconf)
This article presents Part of Speech (POS) tagging for Nepali text using a General Regression Neural Network (GRNN). The corpus is divided into two parts, viz. training and testing. The network is trained and validated on both the training and testing data. It is observed that 96.13% of words are tagged correctly on the training set, whereas 74.38% of words are tagged correctly on the testing data set using the GRNN. The result is compared with the traditional Viterbi algorithm based on a Hidden Markov Model. The Viterbi algorithm yields 97.2% and 40% classification accuracy on the training and testing data sets respectively. The GRNN-based POS tagger is more consistent than the traditional Viterbi decoding technique.
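A GRNN is a one-pass kernel estimator (Specht's normalized radial-basis form), which is why training amounts to storing the corpus. A scalar sketch of the prediction rule (the smoothing spread `sigma` and the toy data are assumptions; for POS tagging the inputs would be word-feature vectors and the outputs tag scores):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.3):
    """GRNN / Nadaraya-Watson estimate: a Gaussian-weighted average of
    the training targets; sigma is the smoothing spread to be tuned."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# the network "memorizes" the corpus: a prediction at a stored point
# approaches that point's target value
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
pred = grnn_predict(1.0, xs, ys)
```

This memorization is consistent with the reported gap between training accuracy (96.13%) and testing accuracy (74.38%): generalization depends almost entirely on the chosen spread.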
Dynamic Memory Allocation, Pointers and Functions, Pointers and Structures (Selvaraj Seerangan)
After going through this presentation, learners will be able to understand C programming concepts such as dynamic memory allocation, pointers and functions, and pointers to structures, with examples.
A data structure is a way of collecting and organising data such that we can perform operations on the data effectively. Data structures are about arranging data elements in terms of some relationship, for better organization and storage. For example, we have a player's name, "Virat", and age, 26. Here "Virat" is of string data type and 26 is of integer data type.
We can organize this data as a record, such as a Player record, and then collect and store players' records in a file or database as a data structure. For example: "Dhoni" 30, "Gambhir" 31, "Sehwag" 33.
In simple language, data structures are structures programmed to store ordered data, so that various operations can be performed on it easily.
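The player-record example above can be sketched directly: a record groups fields of different types, and a list of records is itself a data structure we can query (Python used for illustration; the slides themselves cover the C equivalents with structs and pointers):

```python
from dataclasses import dataclass

@dataclass
class Player:
    """A record grouping fields of different types: a string name
    together with an integer age, as in the "Virat", 26 example."""
    name: str
    age: int

# a collection of records stored together is a data structure on which
# we can perform operations such as searching and sorting
squad = [Player("Dhoni", 30), Player("Gambhir", 31), Player("Sehwag", 33)]
by_age = sorted(squad, key=lambda p: p.age)
oldest = by_age[-1]
```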
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLING (ijaia)
Imbalanced data is a significant challenge in classification with machine learning algorithms. This is particularly important with log message data: negative logs are sparse, so this data is typically imbalanced. In this paper, a model to generate text log messages is proposed which employs a SeqGAN network. An autoencoder is used for feature extraction, and anomaly detection is done using a GRU network. The proposed model is evaluated with three imbalanced log data sets, namely BGL, OpenStack, and Thunderbird. Results are presented which show that appropriate oversampling and data balancing improve anomaly detection accuracy.
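As a contrast to the SeqGAN-based generation used in the paper, the simplest way to balance such data is naive random oversampling, duplicating minority-class rows until the class counts match (a sketch only; the paper generates new synthetic log messages rather than duplicates, which is what makes its approach stronger):

```python
import random

def oversample(rows, labels, minority=1, seed=0):
    """Naive random oversampling: duplicate minority-class rows until
    the two classes are balanced."""
    rng = random.Random(seed)
    minor = [r for r, y in zip(rows, labels) if y == minority]
    major = [r for r, y in zip(rows, labels) if y != minority]
    # draw with replacement until the minority matches the majority count
    extra = [rng.choice(minor) for _ in range(len(major) - len(minor))]
    balanced_rows = major + minor + extra
    balanced_labels = [0] * len(major) + [1] * (len(minor) + len(extra))
    return balanced_rows, balanced_labels

# 8 normal logs vs. 2 failure logs: a typical imbalance
rows = ["ok"] * 8 + ["fail"] * 2
labels = [0] * 8 + [1] * 2
br, bl = oversample(rows, labels)
```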
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
ALGEBRAIC DEGREE ESTIMATION OF BLOCK CIPHERS USING RANDOMIZED ALGORITHM; UPPE... (ijcisjournal)
An integral attack is a powerful method to recover the secret key of a block cipher by exploiting a characteristic that a set of outputs has after several rounds of encryption (an integral distinguisher). Recently, Todo proposed a new algorithm to construct integral distinguishers with the division property. However, the existence of an integral distinguisher which holds for additional rounds cannot be ruled out by that algorithm. We instead take an approach that obtains the number of rounds for which no integral distinguisher holds (an upper bound on integral distinguishers). The approach is based on algebraic degree estimation: we execute a random search for a term whose degree equals the number of input variables. We propose an algorithm and apply it to PRESENT and RECTANGLE, confirming that there exists no 8-round integral distinguisher in PRESENT and no 9-round integral distinguisher in RECTANGLE. From these facts, integral attacks on more than 11 rounds of PRESENT and 13 rounds of RECTANGLE are infeasible, respectively.
Enhancement of DES Algorithm with Multi State Logic (IJORCS)
The principal goal in designing any encryption algorithm must be security against unauthorized access or attacks. The Data Encryption Standard (DES) is a symmetric key algorithm used to secure data. Enhanced DES algorithms work by increasing the key length, using a more complex S-box design, increasing the number of states in which the information is represented, or a combination of these criteria. Increasing the key length increases the number of possible keys, making a brute-force attack harder for an intruder. As the S-box design becomes more complex, a good avalanche effect results. As the number of states in which the information is represented increases, it is harder for the intruder to recover the actual information. The proposed algorithm replaces the XOR operation applied during the 16 rounds of the standard algorithm with a new operation, called a "Hash function", that depends on two keys. One key is used in the "F" function, and the other consists of a combination of 16 states (0, 1, 2, ..., 14, 15) instead of the ordinary 2-state key (0, 1). This replacement adds a new level of protection strength and more robustness against breaking methods.
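The abstract does not specify the "Hash function" round operation itself, but the idea of operating on 16-state digits instead of bits can be illustrated with a reversible modular addition in place of bitwise XOR (purely an illustration of a 16-state keyed operation, not the paper's construction):

```python
def mix16(state_digits, key_digits):
    """Combine hexadecimal (16-state) digits with modular addition,
    standing in for the bitwise XOR of standard DES rounds."""
    return [(s + k) % 16 for s, k in zip(state_digits, key_digits)]

def unmix16(mixed, key_digits):
    """Inverse operation: subtraction modulo 16 recovers the state."""
    return [(m - k) % 16 for m, k in zip(mixed, key_digits)]

msg = [0, 7, 15, 3]        # four 16-state digits of plaintext
key = [9, 9, 2, 14]        # a 16-state key instead of a binary key
ct = mix16(msg, key)
```

Like XOR, the operation is invertible given the key, which is the property any replacement round operation must preserve for decryption to work.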
This presentation describes an approach to the analysis of common recursion patterns in purely functional code. The goal is to recognize common patterns and provide the user with hints on how to improve them using standard FP techniques (map, fold, etc.). This presentation was given as part of the Advanced Functional Programming Seminar at Utrecht University.
On the Identification Protocols of Versions 4 and 6 (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
NLP Project: Machine Comprehension Using Attention-Based LSTM Encoder-Decoder... (Eugene Nho)
Machine comprehension remains a challenging open area of research. While many question answering models have been explored on existing datasets, little work has been done with the newly released MS MARCO dataset, which mirrors reality much more closely and poses many unique challenges. We explore an end-to-end neural architecture with attention mechanisms for comprehending relevant information and generating text answers for MS MARCO.
A Critical Reassessment of Evolutionary Algorithms on the Cryptanalysis of th... (ijcisjournal)
In this paper we analyze the cryptanalysis of the Simplified Data Encryption Standard algorithm using metaheuristics, and in particular genetic algorithms. The classic fitness function when using such an algorithm is to compare n-gram statistics of the decrypted message with those of the target message. We show that using such a function is irrelevant in the case of a genetic algorithm, simply because there is no correlation between the distance to the real key (the optimum) and the value of the fitness; in other words, there is no hidden gradient. To support this claim, we show experimentally that a genetic algorithm performs worse than a random search on the cryptanalysis of the Simplified Data Encryption Standard algorithm.
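The criticized fitness function can be sketched as an n-gram frequency comparison. Note that nothing in it refers to the key itself, which is consistent with the paper's point that its value need not correlate with distance to the true key (the bigram order and the total-variation-style scoring here are assumptions):

```python
from collections import Counter

def ngram_fitness(decrypted, reference, n=2):
    """Score a trial decryption by how closely its n-gram frequencies
    match those of reference text (1.0 = identical distributions)."""
    def freqs(text):
        grams = [text[i:i + n] for i in range(len(text) - n + 1)]
        return {g: c / len(grams) for g, c in Counter(grams).items()}
    d, r = freqs(decrypted), freqs(reference)
    # 1 minus the total variation distance over all observed n-grams;
    # the key never appears here, hence no gradient toward it
    return 1.0 - 0.5 * sum(abs(d.get(g, 0.0) - r.get(g, 0.0))
                           for g in set(d) | set(r))
```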
An Efficient FPGA Implementation of the Advanced Encryption Standard Algorithm (ijsrd.com)
A proposed FPGA-based implementation of the Advanced Encryption Standard (AES) algorithm is presented in this paper and compared with other works to show its efficiency. The design uses an iterative looping approach with a block and key size of 128 bits and a lookup-table implementation of the S-box. This gives a low-complexity architecture and easily achieves low latency as well as high throughput. Simulation and performance results are presented and compared with previously reported designs.
Design and Implementation of Different Architectures of MixColumn in FPGA (VLSICS Design)
This paper details the implementation of the AES encryption algorithm in VHDL on an FPGA using different MixColumn architectures. The research investigates the AES algorithm in FPGA using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Altera Quartus II software is used for simulation and optimization of the synthesizable VHDL code. The transformations for both encryption and decryption are simulated using an iterative design approach in order to optimize hardware consumption. Altera Cyclone III family devices are used for hardware evaluation.
Virtual private networks (VPNs) provide a secure remote connection for clients to exchange information with company networks. This paper deals with a site-to-site IPsec VPN that connects company intranets. The IPsec VPN network is implemented with security protocols for key management and exchange, authentication and integrity using the GNS3 network simulator. Testing and verification of data packets is done using both the PING tool and Wireshark, to ensure the encryption of data packets during exchange between different sites belonging to the same company.
The performance of an algorithm can be improved using a parallel programming approach. In this study, the bubble sort algorithm was run on computers with various specifications. Experimental results show that parallel programming can reduce running time by 61%-65% compared to serial programming.
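Bubble sort's adjacent-swap dependencies make naive parallelization hard, which is why parallel formulations typically use the odd-even transposition variant: within each phase all compared pairs are disjoint, so a parallel runtime could process a whole phase concurrently. A serial sketch of both (illustrative only; the paper's 61%-65% savings come from actual multi-core measurements):

```python
def bubble_sort(a):
    """Plain serial bubble sort: O(n^2) adjacent compare-and-swap."""
    a = a[:]
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def odd_even_sort(a):
    """Odd-even transposition sort: each phase compares disjoint pairs
    (indices of one parity), so the inner loop is parallelizable."""
    a = a[:]
    for phase in range(len(a)):                  # n phases guarantee sortedness
        for j in range(phase % 2, len(a) - 1, 2):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [5, 1, 4, 2, 8, 0, 2]
```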
ENSEMBLE OF BLOWFISH WITH CHAOS BASED S BOX DESIGN FOR TEXT AND IMAGE ENCRYPTION (IJNSA Journal)
The rapid and extensive usage of the Internet in the present decade has put forth information security as an utmost concern. Most commercial transactions taking place over the Internet involve a wide variety of data, including text, images, audio and video. With the increasing use of digital techniques for transmitting and storing multimedia data, the fundamental issue of protecting the confidentiality, integrity and authenticity of the information poses a major challenge for security professionals and has led to major developments in cryptography. In cryptography, an S-box (substitution box) is a basic component of symmetric key algorithms which performs substitution and is typically used to make the relationship between the key and the ciphertext non-linear; most symmetric key algorithms, such as DES and Blowfish, make use of S-boxes. This paper proposes a new method for the design of S-boxes based on chaos theory. Chaotic equations are popularly known for their randomness, extreme sensitivity to initial conditions and ergodicity. The modified design has been tested with the Blowfish algorithm, which has no effective cryptanalysis reported against its design to date because of its salient design features, including key-dependent S-boxes and a complex key generation process. However, every new key requires pre-processing equivalent to encrypting about 4 kilobytes of text, which is very slow compared to other block ciphers and prevents its usage in memory-limited applications and embedded systems. The modified design of S-boxes maintains the non-linearity [3] [5] and key dependency factors of S-boxes with a major reduction in the time complexity of generating the S-boxes and P-arrays. The algorithm has been implemented, and the proposed design has been analyzed for key space size, key sensitivity and avalanche effect. Experimental results on text and image encryption show that the modified key generation design continues to offer the same level of security as the original Blowfish cipher with less computational overhead in key generation.
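A chaos-based S-box of the general kind described can be sketched with the logistic map: iterate the map from a key-dependent seed, then rank the iterates to obtain a bijective byte permutation (the parameters x0 and r below are illustrative, not the paper's; the construction is a common one in the chaos-cryptography literature, not necessarily the paper's exact method):

```python
def chaotic_sbox(x0=0.37, r=3.99, size=256):
    """Build a bijective substitution box by ordering logistic-map
    iterates; x0 plays the role of the key-dependent seed."""
    x, samples = x0, []
    for i in range(size):
        x = r * x * (1 - x)          # logistic map: sensitive to x0
        samples.append((x, i))
    # rank positions by chaotic value -> a permutation of 0..size-1
    return [i for _, i in sorted(samples)]

sbox = chaotic_sbox()
```

Because the ranking step always yields a permutation, the box is invertible by construction, and the sensitivity to x0 gives the key dependency the paper requires.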
Comparison of AES and DES Algorithms Implemented on Virtex-6 FPGA and Microbl... (IJECEIAES)
Encryption algorithms play a dominant role in preventing unauthorized access to important data. This paper focuses on implementations of the Data Encryption Standard (DES) and Advanced Encryption Standard (AES) algorithms on a MicroBlaze soft-core processor, and also on their implementations on an XC6VLX240T FPGA using the Verilog Hardware Description Language. The paper also compares issues related to the hardware and software implementations of the two cryptographic algorithms.
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks (IJERD Editor)
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet. The traditional architecture of the Internet is vulnerable to attacks like DDoS. An attacker first acquires an army of zombies; that army is then instructed by the attacker when to start an attack and whom to attack. In this paper, different techniques used to perform DDoS attacks, tools used to perform the attacks, and countermeasures to detect attackers and eliminate Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture which can reduce Bandwidth Distributed Denial of Service attacks and make the victim site or server available to normal users by eliminating the zombie machines. The primary focus of this paper is to discuss how normal machines turn into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment with Cisco switches, routers, a firewall, some virtual machines, and some attack tools to display a real DDoS attack. By using time scheduling, resource limiting, system logs, access control lists, and the Modular Policy Framework, we stopped the attack and identified the attacker (bot) machines.
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electro-mechanical systems (MEMS) audio transducers and ultra-low-power, power-scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Influence of tensile behaviour of slab on the structural Behaviour of shear c... (IJERD Editor)
A composite beam is composed of a steel beam and a slab connected by means of shear connectors
like studs installed on the top flange of the steel beam to form a structure behaving monolithically. This study
analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection, such as slip stiffness and maximum shear force, in composite beams subjected to hogging moment. The results show that the
shear studs located in the crack-concentration zones due to large hogging moments sustain significantly smaller
shear force and slip stiffness than the other zones. Moreover, the reduction of the slip stiffness in the shear
connection appears also to be closely related to the change in the tensile strain of rebar according to the increase
of the load. Further experimental and analytical studies shall be conducted considering variables such as the
reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection
in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing 'A case study of Sudan' (IJERD Editor)
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian crystalline rocks on the flanks of the Red Sea; the crystalline rocks are mostly Neoproterozoic in age. The ANS includes the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. The Arabian-Nubian Shield consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs welded together along ophiolite-decorated arcs. Primary Au mineralization probably developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills region is a vast span of rugged, harsh and inhospitable terrain with an inimical, moon-like landscape; nevertheless, since ancient times it has been famed as an abode of gold and was a major source of wealth for the Pharaohs of ancient Egypt. The Pharaohs' old workings have been periodically rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the discovery of a score of occurrences with gold and massive sulphide mineralization. In the 1990s the Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, utilized Landsat TM satellite data and the spectral ratio technique to map possible mineralized zones in the Red Sea Hills of Sudan. The outcome of the study mapped a gossan-type gold mineralization. The band ratio technique was applied to the Arbaat area, and a signature of an alteration zone was detected; such alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold mineralization that was discovered using remote sensing is the gold associated with metachert in the Atmur Desert.
Reducing Corrosion Rate by Welding Design (IJERD Editor)
The paper addresses the importance of welding design in preventing corrosion in steel. Welding is used to join pipes, profiles in bridges, spindles, and many other parts of engineering constructions. Problems associated with welding are common issues in these fields, especially corrosion. Corrosion can be reduced by many methods, including painting, controlling humidity, and also good welding design. In this research, it was found that reducing residual stress in the weld helps reduce the corrosion rate.
Preheating at 500°C and 600°C gives a better condition for reducing corrosion rate than preheating at 400°C. For all welding groove types, material preheated at 500°C and 600°C loses 0.5%-0.69% after a 14-day corrosion test, while material preheated at 400°C loses 0.57%-0.76%.
The welding groove also influences the corrosion rate. X- and V-type welding grooves give a better condition for reducing corrosion rate than 1/2V and 1/2X welding grooves. After the 14-day corrosion test, samples with an X-type welding groove lose 0.5%-0.57%, samples with a V-type welding groove lose 0.51%-0.59%, and samples with 1/2V and 1/2X welding grooves lose 0.58%-0.71%.
Router 1X3 – RTL Design and Verification (IJERD Editor)
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device, its top-level architecture, and how the various sub-modules of the router, i.e. register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to the top module.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha... (IJERD Editor)
This paper presents a component within the flexible ac-transmission system (FACTS) family, called
distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC)
with an eliminated common dc link. The DPFC has the same control capabilities as the UPFC, which comprise
the adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange
between the shunt and series converters, which is through the common dc link in the UPFC, is now through the
transmission lines at the third-harmonic frequency. The DPFC uses multiple small-size single-phase converters, which reduces the cost of equipment, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and corresponding simulation results carried out on a scaled prototype are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR (IJERD Editor)
Power quality has become increasingly pivotal from the industrial electricity consumer's point of view in recent times. Modern industries employ sensitive power electronic equipment, control devices and non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage disturbances are the most common power quality problem, owing to the increased use of large numbers of sophisticated and sensitive electronic equipment in industrial systems. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic distortion of sensitive loads. Power quality problems manifest as non-standard voltage, current and frequency. Electronic devices are very sensitive loads, and in a power system, voltage sag, swell, flicker and harmonics are some of the problems affecting them. The compensation capability of a DVR depends primarily on its maximum voltage injection ability and the amount of stored energy available within the restorer. The device is connected in series with the distribution feeder at medium voltage. A fuzzy logic controller is used to produce the gate pulses for the control circuit of the DVR, and the circuit is simulated using MATLAB/SIMULINK software.
Study on the Fused Deposition Modelling In Additive ManufacturingIJERD Editor
Additive manufacturing process, also popularly known as 3-D printing, is a process where a product
is created in a succession of layers. It is based on a novel materials incremental manufacturing philosophy.
Unlike conventional manufacturing processes where material is removed from a given work price to derive the
final shape of a product, 3-D printing develops the product from scratch thus obviating the necessity to cut away
materials. This prevents wastage of raw materials. Commonly used raw materials for the process are ABS
plastic, PLA and nylon. Recently the use of gold, bronze and wood has also been implemented. The complexity
factor of this process is 0% as in any object of any shape and size can be manufactured.
Spyware triggering system by particular string valueIJERD Editor
This computer programme can be used for good and bad purpose in hacking or in any general
purpose. We can say it is next step for hacking techniques such as keylogger and spyware. Once in this system if
user or hacker store particular string as a input after that software continually compare typing activity of user
with that stored string and if it is match then launch spyware programme.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack the JPEG steganographic
schemes i.e. Jsteg, F5, Outguess and DWT Based. The proposed method exploits the correlations between
block-DCTcoefficients from intra-block and inter-block relation and the statistical moments of characteristic
functions of the test image is selected as features. The features are extracted from the BDCT JPEG 2-array.
Support Vector Machine with cross-validation is implemented for the classification.The proposed scheme gives
improved outcome in attacking.
Secure Image Transmission for Cloud Storage System Using Hybrid SchemeIJERD Editor
- Data over the cloud is transferred or transmitted between servers and users. Privacy of that
data is very important as it belongs to personal information. If data get hacked by the hacker, can be
used to defame a person’s social data. Sometimes delay are held during data transmission. i.e. Mobile
communication, bandwidth is low. Hence compression algorithms are proposed for fast and efficient
transmission, encryption is used for security purposes and blurring is used by providing additional
layers of security. These algorithms are hybridized for having a robust and efficient security and
transmission over cloud storage system.
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...IJERD Editor
A thorough review of existing literature indicates that the Buckley-Leverett equation only analyzes
waterflood practices directly without any adjustments on real reservoir scenarios. By doing so, quite a number
of errors are introduced into these analyses. Also, for most waterflood scenarios, a radial investigation is more
appropriate than a simplified linear system. This study investigates the adoption of the Buckley-Leverett
equation to estimate the radius invasion of the displacing fluid during waterflooding. The model is also adopted
for a Microbial flood and a comparative analysis is conducted for both waterflooding and microbial flooding.
Results shown from the analysis doesn’t only records a success in determining the radial distance of the leading
edge of water during the flooding process, but also gives a clearer understanding of the applicability of
microbes to enhance oil production through in-situ production of bio-products like bio surfactans, biogenic
gases, bio acids etc.
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraIJERD Editor
- Gesture gaming is a method by which users having a laptop/pc/x-box play games using natural or
bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam
with the help of open source technologies. Emphasis in human activity recognition is given on the pose
estimation and the consistency in the pose of the player. These are estimated with the help of an ordinary web
camera having different resolutions from VGA to 20mps. Our work involved giving a 10 second documentary to
the user on how to play a particular game using gestures and what are the various kinds of gestures that can be
performed in front of the system. The initial inputs of the RGB values for the gesture component is obtained by
instructing the user to place his component in a red box in about 10 seconds after the short documentary before
the game is finished. Later the system opens the concerned game on the internet on popular flash game sites like
miniclip, games arcade, GameStop etc and loads the game clicking at various places and brings the state to a
place where the user is to perform only gestures to start playing the game. At any point of time the user can call
off the game by hitting the esc key and the program will release all of the controls and return to the desktop. It
was noted that the results obtained using an ordinary webcam matched that of the Kinect and the users could
relive the gaming experience of the free flash games on the net. Therefore effective in game advertising could
also be achieved thus resulting in a disruptive growth to the advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...IJERD Editor
-LLC resonant frequency converter is basically a combo of series as well as parallel resonant ckt. For
LCC resonant converter it is associated with a disadvantage that, though it has two resonant frequencies, the
lower resonant frequency is in ZCS region[5]. For this application, we are not able to design the converter
working at this resonant frequency. LLC resonant converter existed for a very long time but because of
unknown characteristic of this converter it was used as a series resonant converter with basically a passive
(resistive) load. . Here, it was designed to operate in switching frequency higher than resonant frequency of the
series resonant tank of Lr and Cr converter acts very similar to Series Resonant Converter. The benefit of LLC
resonant converter is narrow switching frequency range with light load[6] . Basically, the control ckt plays a
very imp. role and hence 555 Timer used here provides a perfect square wave as the control ckt provides no
slew rate which makes the square wave really strong and impenetrable. The dead band circuit provides the
exclusive dead band in micro seconds so as to avoid the simultaneous firing of two pairs of IGBT’s where one
pair switches off and the other on for a slightest period of time. Hence, the isolator ckt here is associated with
each and every ckt used because it acts as a driver and an isolation to each of the IGBT is provided with one
exclusive transformer supply[3]. The IGBT’s are fired using the appropriate signal using the previous boards
and hence at last a high frequency rectifier ckt with a filtering capacitor is used to get an exact dc
waveform .The basic goal of this particular analysis is to observe the wave forms and characteristics of
converters with differently positioned passive elements in the form of tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...IJERD Editor
LLC resonant frequency converter is basically a combo of series as well as parallel resonant ckt. For
LCC resonant converter it is associated with a disadvantage that, though it has two resonant frequencies, the
lower resonant frequency is in ZCS region [5]. For this application, we are not able to design the converter
working at this resonant frequency. LLC resonant converter existed for a very long time but because of
unknown characteristic of this converter it was used as a series resonant converter with basically a passive
(resistive) load. . Here, it was designed to operate in switching frequency higher than resonant frequency of the
series resonant tank of Lr and Cr converter acts very similar to Series Resonant Converter. The benefit of LLC
resonant converter is narrow switching frequency range with light load[6] . Basically, the control ckt plays a
very imp. role and hence 555 Timer used here provides a perfect square wave as the control ckt provides no
slew rate which makes the square wave really strong and impenetrable. The dead band circuit provides the
exclusive dead band in micro seconds so as to avoid the simultaneous firing of two pairs of IGBT’s where one
pair switches off and the other on for a slightest period of time. Hence, the isolator ckt here is associated with
each and every ckt used because it acts as a driver and an isolation to each of the IGBT is provided with one
exclusive transformer supply[3]. The IGBT’s are fired using the appropriate signal using the previous boards
and hence at last a high frequency rectifier ckt with a filtering capacitor is used to get an exact dc
waveform .The basic goal of this particular analysis is to observe the wave forms and characteristics of
converters with differently positioned passive elements in the form of tank circuits. The supported simulation
is done through PSIM 6.0 software tool
Amateurs Radio operator, also known as HAM communicates with other HAMs through Radio
waves. Wireless communication in which Moon is used as natural satellite is called Moon-bounce or EME
(Earth -Moon-Earth) technique. Long distance communication (DXing) using Very High Frequency (VHF)
operated amateur HAM radio was difficult. Even with the modest setup having good transceiver, power
amplifier and high gain antenna with high directivity, VHF DXing is possible. Generally 2X11 YAGI antenna
along with rotor to set horizontal and vertical angle is used. Moon tracking software gives exact location,
visibility of Moon at both the stations and other vital data to acquire real time position of moon.
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...IJERD Editor
Simple Sequence Repeats (SSR), also known as Microsatellites, have been extensively used as
molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of
polymorphic forms of the same gene should be 99.9% identical. So, Microsatellites extraction from the Gene is
crucial. However, Microsatellites repeat count is compared, if they differ largely, he has some disorder. The Y
chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males
have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and
development. Several Microsatellite Extractors exist and they fail to extract microsatellites on large data sets of
giga bytes and tera bytes in size. The proposed tool “MS-Extractor: An Innovative Approach to extract
Microsatellites on „Y‟ Chromosome” can extract both Perfect as well as Imperfect Microsatellites from large
data sets of human genome „Y‟. The proposed system uses string matching with sliding window approach to
locate Microsatellites and extracts them.
Importance of Measurements in Smart GridIJERD Editor
- The need to get reliable supply, independence from fossil fuels, and capability to provide clean
energy at a fixed and lower cost, the existing power grid structure is transforming into Smart Grid. The
development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have
new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era
of smart future grid will lead to major changes in existing technologies at generation, transmission and
distribution levels. The incorporation of renewable energy resources and distribution generators in the existing
grid will increase the complexity, optimization problems and instability of the system. This will lead to a
paradigm shift in the instrumentation and control requirements for Smart Grids for high quality, stable and
reliable electricity supply of power. The monitoring of the grid system state and stability relies on the
availability of reliable measurement of data. In this paper the measurement areas that highlight new
measurement challenges, development of the Smart Meters and the critical parameters of electric energy to be
monitored for improving the reliability of power systems has been discussed.
Study of Macro level Properties of SCC using GGBS and Lime stone powderIJERD Editor
One of the major environmental concerns is the disposal of the waste materials and utilization of
industrial by products. Lime stone quarries will produce millions of tons waste dust powder every year. Having
considerable high degree of fineness in comparision to cement this material may be utilized as a partial
replacement to cement. For this purpose an experiment is conducted to investigate the possibility of using lime
stone powder in the production of SCC with combined use GGBS and how it affects the fresh and mechanical
properties of SCC. First SCC is made by replacing cement with GGBS in percentages like 10, 20, 30, 40, 50 and
by taking the optimum mix with GGBS lime stone powder is blended to mix in percentages like 5, 10, 15, 20 as
a partial replacement to cement. Test results shows that the SCC mix with combination of 30% GGBS and 15%
limestone powder gives maximum compressive strength and fresh properties are also in the limits prescribed by
the EFNARC.
Study of Macro level Properties of SCC using GGBS and Lime stone powder
Welcome to International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 5, Issue 3 (December 2012), PP. 07-12
Comparative Cryptanalysis of Simplified-Data Encryption
Standard Using Tabu Search and Simulated Annealing
Methods
Rajashekarappa¹, Dr. K M Sunjiv Soyjaudah²
¹Department of Computer Science and Engineering, JSS Academy of Technical Education, Vacoas, Mauritius (Research Scholar, Jain University, Bangalore)
²Department of Electrical & Electronic Engineering, University of Mauritius, Reduit, Mauritius.
Abstract:- This paper presents a comparative cryptanalysis of the Simplified Data Encryption Standard (S-DES) using Tabu Search and Simulated Annealing methods. A ciphertext-only attack is adopted, and a variety of candidate keys is generated based on cost function values. The goal of this paper is threefold. First, we study how evolutionary computation techniques can efficiently solve an NP-Hard combinatorial problem; to this end, we test techniques such as the Tabu Search algorithm and Simulated Annealing on the cryptanalysis of S-DES. Second, a comparison between the Tabu Search algorithm and Simulated Annealing is made in order to investigate their relative performance on S-DES cryptanalysis. Both methods were tested, and extensive computational results show that the Tabu Search algorithm performs better than Simulated Annealing on this type of NP-Hard combinatorial problem. This paper represents our first effort toward an efficient Tabu Search algorithm for the cryptanalysis of S-DES. Third, cryptanalysis data can be stored for long periods with the help of the Self-Monitoring Analysis and Reporting Technology (SMART) Copyback technique.
Keywords:- Cryptanalysis, Simplified Data Encryption Standard (S-DES), Tabu Search, Plaintext, Ciphertext, Simulated Annealing, Ciphertext-only attack.
I. INTRODUCTION
The cryptanalysis of the Simplified Data Encryption Standard can be formulated as an NP-Hard combinatorial problem. Solving such problems requires effort (e.g., time and/or memory) that increases with the size of the problem. Techniques for solving combinatorial problems fall into two broad groups: traditional optimization techniques (exact algorithms) and nontraditional optimization techniques (approximate algorithms). A traditional optimization technique guarantees that the optimal solution to the problem will be found; however, traditional techniques such as branch and bound and the simplex method are very inefficient for solving large combinatorial problems because of their prohibitive complexity (time and memory requirements). Nontraditional optimization techniques are employed in an attempt to find an adequate solution to the problem. Nontraditional techniques such as the Tabu Search algorithm and Simulated Annealing were developed to provide a robust and efficient methodology for cryptanalysis. The aim of these techniques is to find a sufficiently "good" solution efficiently by exploiting the characteristics of the problem, instead of seeking the global optimum, and they thus provide an attractive alternative for large-scale applications. These nontraditional optimization techniques demonstrate good potential when applied in the field of cryptanalysis. The objective of this study is to determine the efficiency and accuracy of the Tabu Search algorithm for the cryptanalysis of S-DES, and to compare the relative performance of Simulated Annealing with Tabu Search [1]. In addition, users have a variety of data storage requirements that can be addressed using Self-Monitoring Analysis and Reporting Technology (SMART) Copyback.
The rest of the paper is organized as follows: Section 2 presents the literature review. Section 3 gives a brief overview of S-DES, Section 4 gives an overview of Tabu Search, Section 5 gives the Simulated Annealing algorithm, and Section 6 describes Self-Monitoring Analysis and Reporting Technology (SMART) Copyback. Experimental results are discussed in Section 7. Section 8 concludes the paper and outlines future work.
II. RELATED WORK
If a particular solution is reached, it becomes 'tabu' for some number of transitions, generally referred to as the solution's tabu tenure. If a solution is tabu, the search is normally prevented from moving to that solution, i.e., the local neighborhood from which the next solution is chosen excludes those solutions that are currently tabu. Conceptually, the currently tabu solutions together with their remaining tabu tenures form a 'tabu list'. In its simplest form, with a common tabu tenure, the list becomes a FIFO queue: the most recently visited solution is added and the solution visited the corresponding number of moves ago is removed. The tabu list implements a recency criterion. It prevents the search from revisiting solutions in the short term, and so short cycles are prevented. Garg applied an attack on the transposition cipher using tabu search and simulated annealing [1].
III. THE S-DES ALGORITHM
This section briefly gives an overview of the S-DES algorithm. The S-DES encryption algorithm takes an 8-bit block of plaintext and a 10-bit key as input and produces an 8-bit block of ciphertext as output. The decryption algorithm takes an 8-bit block of ciphertext and the same 10-bit key used for encryption as input and produces the original 8-bit block of plaintext as output. The encryption algorithm uses five basic functions: 1. An initial permutation (IP). 2. A complex function called fK, which involves both permutation and substitution operations and depends on a key input. 3. A simple permutation function (SW) that switches the two halves of the data. 4. The function fK again, and 5. A permutation function that is the inverse of the initial permutation (IP-1). The function fK takes as input the data passing through the encryption algorithm and an 8-bit key [4].
A. Key Generation
For key generation, a 10-bit key is considered, from which two 8-bit subkeys are generated. The key is first subjected to the permutation P10 = [3 5 2 7 4 10 1 9 8 6], and then a circular left-shift operation is performed on each 5-bit half. The numbers in the array represent the position of that bit in the original 10-bit key. The output of the shift operation then passes through a permutation function P8 = [6 3 7 4 8 5 10 9] that produces the 8-bit first subkey (K1). The output of the first shift operation also feeds into another shift operation and another instance of P8 to produce the second subkey, K2. In all bit strings, the leftmost position corresponds to the first bit.
B. Encryption Algorithm
The S-DES encryption process involves the sequential application of five functions:
1. Initial and final permutation (IP): The input to the algorithm is an 8-bit block of plaintext, which is first permuted using the IP function IP = [2 6 3 1 4 8 5 7]. This retains all 8 bits of the plaintext but mixes them up. At the end of the algorithm, the inverse permutation IP-1 = [4 1 3 5 7 2 8 6] is applied, where IP-1(IP(X)) = X.
2. Function fK: The function fK, which is the complex component of S-DES, consists of a combination of permutation and substitution functions. It is given as follows:
fK(L, R) = (L XOR f(R, key), R), where L and R are the left 4 bits and right 4 bits of the input, XOR is the exclusive-OR operation, and key is a subkey. Computation of f(R, key) is done as follows:
i. Apply the expansion/permutation E/P = [4 1 2 3 2 3 4 1] to the input 4 bits.
ii. Add the 8-bit key (XOR).
iii. Pass the left 4 bits through S-Box S0 and the right 4 bits through S-Box S1.
iv. Apply the permutation P4 = [2 4 3 1].
The two S-boxes are defined as follows:

S0:          S1:
1 0 3 2      0 1 2 3
3 2 1 0      2 0 1 3
0 2 1 3      3 0 1 0
3 1 3 2      2 1 0 3
The S-boxes operate as follows: the first and fourth input bits are treated as a 2-bit number that specifies a row of the S-box, and the second and third input bits specify a column of the S-box. The entry in that row and column, written in base 2, is the 2-bit output.
3. The Switch Function (SW): Since the function fK alters only the leftmost 4 bits of its input, the switch function (SW) interchanges the left and right 4 bits so that the second instance of fK operates on a different 4 bits. In this second instance, the E/P, S0, S1 and P4 functions are the same as above, but the key input is K2 [6].
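Putting the five functions together, the full S-DES round structure can be sketched as follows. This is an illustrative sketch rather than the authors' implementation; the example subkeys at the end are hypothetical values of K1 and K2:

```python
# S-DES encryption: IP, fK(K1), SW, fK(K2), IP^-1.
IP = [2, 6, 3, 1, 4, 8, 5, 7]
IP_INV = [4, 1, 3, 5, 7, 2, 8, 6]
EP = [4, 1, 2, 3, 2, 3, 4, 1]          # expansion/permutation E/P
P4 = [2, 4, 3, 1]
S0 = [[1, 0, 3, 2], [3, 2, 1, 0], [0, 2, 1, 3], [3, 1, 3, 2]]
S1 = [[0, 1, 2, 3], [2, 0, 1, 3], [3, 0, 1, 0], [2, 1, 0, 3]]

def permute(bits, table):
    return [bits[i - 1] for i in table]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sbox(bits, box):
    # Bits 1 and 4 select the row, bits 2 and 3 the column; output in base 2.
    row = bits[0] * 2 + bits[3]
    col = bits[1] * 2 + bits[2]
    val = box[row][col]
    return [val >> 1, val & 1]

def f(right, subkey):
    t = xor(permute(right, EP), subkey)
    return permute(sbox(t[:4], S0) + sbox(t[4:], S1), P4)

def fk(bits, subkey):
    left, right = bits[:4], bits[4:]
    return xor(left, f(right, subkey)) + right

def switch(bits):                       # SW: swap the two 4-bit halves
    return bits[4:] + bits[:4]

def encrypt(plain8, k1, k2):
    return permute(fk(switch(fk(permute(plain8, IP), k1)), k2), IP_INV)

def decrypt(cipher8, k1, k2):
    # Decryption is the same network with the subkeys in reverse order.
    return encrypt(cipher8, k2, k1)

k1 = [1, 0, 1, 0, 0, 1, 0, 0]           # example subkeys (hypothetical)
k2 = [0, 1, 0, 0, 0, 0, 1, 1]
ct = encrypt([1, 0, 1, 1, 1, 1, 0, 1], k1, k2)
assert decrypt(ct, k1, k2) == [1, 0, 1, 1, 1, 1, 0, 1]
```

Because fK (with a fixed subkey) and SW are involutions, applying the network with the subkeys reversed undoes the encryption, which is why decrypt simply swaps K1 and K2.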
IV. TABU SEARCH
This section presents the application of tabu search, which includes as a subroutine a local search procedure appropriate for the problem being addressed [2]. A local search procedure operates like a local improvement procedure except that it may not require each new trial solution to be better than the preceding trial solution [10]. The process begins by using this procedure as a local improvement procedure in the usual way (i.e., only accepting an improved solution at each iteration) to find a local optimum [3]. A key strategy of tabu search is that it then continues the search by allowing non-improving moves to the best solutions in the neighborhood of the local optimum [8]. Once a point is reached where better solutions can be found in the neighborhood of the current trial solution, the local improvement procedure is reapplied to find a new local optimum [6]. This use of memory to guide the search, by using tabu lists to record some of the recent history of the search, is a distinctive feature of tabu search and has roots in the field of artificial intelligence [7]. Initialization: start with a feasible initial trial solution. Iteration: use an appropriate local search procedure to define the feasible moves into the local neighborhood of the current trial solution [5]. Eliminate from consideration any move on the current tabu list, unless that move would result in a better solution than the best trial solution found so far. Update the tabu list to forbid cycling back to what had been the current trial solution; if the tabu list is already full, delete its oldest member to provide more flexibility for future moves. Two randomly chosen key elements are swapped to generate candidate solutions. In each iteration, the best new key formed replaces the worst existing one in the tabu list [9].
The algorithm is presented below.
1. Input: Intercepted ciphertext, the key size P, and the language statistics.
2. Initialise parameters: The size of the tabu list STABU, the size of the list of possibilities considered in each
iteration SPOSS, and the maximum number of iterations MAX.
3. Initialise the tabu list with random and distinct keys and calculate the cost for each key in the tabu list.
4. For I =1,… , MAX do:
a. Find the best key, KBEST, with the lowest cost in the current tabu list.
b. For j=1,…, SPOSS do:
i. apply the perturbation mechanism to produce a new key KNEW.
ii. Check if KNEW is already in the list of possibilities generated for this iteration or the tabu list.
If so, return to step 4(b) i.
iii. Add KNEW to the list of possibilities for this iteration.
c. From the list of possibilities for this iteration, find the key with the lowest cost, PBEST.
d. From the tabu list, find the key with the highest cost, TWORST.
e. While the cost of PBEST is less than the cost of TWORST:
i. Replace TWORST with PBEST.
ii. Find the new PBEST.
iii. Find the new TWORST.
5. Output the best solution from the tabu list, KBEST (the one with the least cost).
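The steps above can be sketched as follows. This is an illustrative sketch only: the cost function here is a stand-in (Hamming distance to a hidden key, whereas the paper scores candidate keys by language statistics of the decrypted ciphertext), and single-bit flips are used as the perturbation mechanism in place of the paper's element swaps:

```python
import random

random.seed(1)

HIDDEN_KEY = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]   # stands in for the unknown key

def cost(key):
    # Stand-in cost: in the paper this is the language-statistics cost of
    # the text obtained by decrypting the ciphertext with `key`.
    return sum(a != b for a, b in zip(key, HIDDEN_KEY))

def tabu_search(cost, n_bits=10, s_tabu=8, s_poss=6, max_iter=200):
    # Step 3: initialise the tabu list with random, distinct keys.
    tabu = []
    while len(tabu) < s_tabu:
        k = [random.randint(0, 1) for _ in range(n_bits)]
        if k not in tabu:
            tabu.append(k)
    for _ in range(max_iter):                        # step 4
        k_best = min(tabu, key=cost)                 # 4(a): KBEST
        poss, attempts = [], 0
        while len(poss) < s_poss and attempts < 50:  # 4(b): possibilities
            attempts += 1
            k = k_best[:]
            k[random.randrange(n_bits)] ^= 1         # perturb: flip one bit
            if k not in poss and k not in tabu:      # 4(b)ii: skip repeats
                poss.append(k)
        if not poss:
            continue
        p_best = min(poss, key=cost)                 # 4(c): PBEST
        t_worst = max(tabu, key=cost)                # 4(d): TWORST
        while poss and cost(p_best) < cost(t_worst):  # 4(e)
            tabu[tabu.index(t_worst)] = p_best
            poss.remove(p_best)
            if not poss:
                break
            p_best = min(poss, key=cost)
            t_worst = max(tabu, key=cost)
    return min(tabu, key=cost)                       # step 5: output KBEST

found = tabu_search(cost)
```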
An important distinction in TS arises by differentiating between short term memory and longer term
memory. Each type of memory is accompanied by its own special strategies. However, the effect of both types
of memory may be viewed as modifying the neighborhood N(x) of the current solution x. The modified
neighborhood, which we denote by N*(x), is the result of maintaining a selective history of the states
encountered during the search.
In the TS strategies based on short term considerations, N*(x) characteristically is a subset of N(x), and
the tabu classification serves to identify elements of N(x) excluded from N*(x). In TS strategies that include
longer term considerations, N*(x) may also be expanded to include solutions not ordinarily found in N(x).
Characterized in this way, TS may be viewed as a dynamic neighborhood method. This means that the
neighborhood of x is not a static set, but rather a set that can change according to the history of the search. This
feature of a dynamically changing neighborhood also applies to the consideration of selecting different
component neighborhoods from a compound neighborhood that encompasses multiple types or levels of moves,
and provides an important basis for parallel processing. Characteristically, a TS process based strictly on short
term strategies may allow a solution x to be visited more than once, but it is likely that the corresponding
reduced neighborhood N*(x) will be different each time. With the inclusion of longer term considerations, the
likelihood of duplicating a previous neighborhood upon revisiting a solution, and more generally of making
choices that repeatedly visit only a limited subset of X, is all but nonexistent. From a practical standpoint, the
method will characteristically identify an optimal or near optimal solution long before a substantial portion of X
is examined. A crucial aspect of TS involves the choice of an appropriate definition of N*(x). Due to the
exploitation of memory, N*(x) depends upon the trajectory followed in moving from one solution to the next (or
upon a collection of such trajectories in a parallel processing environment).
V. SIMULATED ANNEALING
Annealing is the process of slowly cooling a heated metal in order to attain a minimum energy state. The idea of mimicking the annealing process has been efficiently exploited by Kirkpatrick et al. [11] to solve combinatorial optimization problems. The algorithm is initialized with a random solution to the problem being solved and a starting temperature T0. The temperature is slowly decreased, and at each temperature a number of attempts are made to perturb the current solution. For each perturbation, the change in the cost function, ΔE, is determined. If ΔE < 0, the proposed perturbation is accepted; otherwise it is accepted with a probability given by the Metropolis equation, P(E1 → E2) = e^(−ΔE/T), where E1 and E2 are the cost function values before and after the perturbation, ΔE = E2 − E1 is the change in the cost function, and T is the current temperature. If the proposed change is accepted, the current solution is updated. The temperature is reduced when a predefined number of attempts have been made to update the current solution. The algorithm terminates when a certain minimum temperature is reached, a certain number of temperature reductions have occurred, or the current solution has not changed for a number of iterations. The Simulated Annealing algorithm is presented below:
1. Set the initial temperature, T(0).
2. Generate an initial solution, arbitrarily set to the identity transformation (it could instead be randomly generated or otherwise).
3. Evaluate the cost function for the initial solution. Call this C(0).
4. For temperature T, do many (e.g., 100 × M) times:
Generate a new solution by modifying the current one in some manner, and evaluate the cost function for the newly proposed solution. Consult the Metropolis function to decide whether or not the newly proposed solution will be accepted. If accepted, update the current solution and its associated cost. If the number of accepted transitions for temperature T exceeds some limit (e.g., 10 × M), jump to Step 5.
5. If the number of accepted transitions for temperature T was zero, stop (return the current solution as the best); otherwise reduce the temperature (e.g., T(i+1) = T(i) × 0.95) and return to Step 4.
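The annealing loop can likewise be sketched with a stand-in cost function (Hamming distance to a hidden key, in place of the paper's language-statistics cost); the geometric cooling T(i+1) = T(i) × 0.95 follows Step 5, and the parameter values are illustrative assumptions:

```python
import math
import random

random.seed(1)

HIDDEN_KEY = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]

def cost(key):
    # Stand-in for the ciphertext-based cost function of the paper.
    return sum(a != b for a, b in zip(key, HIDDEN_KEY))

def simulated_annealing(cost, n_bits=10, t0=10.0, t_min=0.05, alpha=0.95,
                        attempts_per_t=100, accept_limit=10):
    current = [0] * n_bits            # step 2: fixed initial solution
    c_cur = cost(current)             # step 3
    best, c_best = current[:], c_cur
    t = t0                            # step 1
    while t > t_min:                  # terminate at a minimum temperature
        accepted = 0
        for _ in range(attempts_per_t):           # step 4
            cand = current[:]
            cand[random.randrange(n_bits)] ^= 1   # perturb: flip one bit
            delta = cost(cand) - c_cur
            # Metropolis criterion: always accept improvements, accept
            # worsening moves with probability e^(-delta/t).
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, c_cur = cand, cost(cand)
                accepted += 1
                if c_cur < c_best:
                    best, c_best = current[:], c_cur
                if accepted > accept_limit:
                    break
        t *= alpha                    # step 5: geometric cooling
    return best

found = simulated_annealing(cost)
```

Tracking the best solution seen so far (rather than returning only the final current solution) guards against the worsening moves that the Metropolis criterion deliberately accepts at high temperatures.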
VI. SELF MONITORING ANALYSIS AND REPORTING TECHNOLOGY (SMART) COPYBACK
In this paper, users' varied data-storage requirements are addressed using Self-Monitoring Analysis and Reporting Technology (SMART) Copyback [12]. It is estimated that over 94% of all new information produced in the world is stored on magnetic media, most of it on Physical Disks (PD). Despite their importance, there is relatively little published work on the failure patterns of Physical Disks and the key factors that affect their lifetime. Most available data are based either on extrapolation from accelerated aging experiments or on relatively modest-sized field studies. Moreover, larger population studies rarely have the infrastructure in place to collect health signals from components in operation, which is critical information for detailed failure analysis. This section presents data collected from detailed observations of a large disk drive population in a production Internet services deployment; the population observed is many times larger than those of previous studies. In addition to presenting failure statistics, we analyze the correlation between failures and several parameters generally believed to impact longevity. The analysis identifies several parameters from the Physical Disks' self-monitoring facility (SMART) that correlate highly with failures. Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures. Surprisingly, temperature and activity levels were found to be much less correlated with Physical Disk (PD) failures than expected.
In this study we report on the failure characteristics of consumer-grade Physical Disks (PD). The analysis is made possible by a new, highly parallel health-data collection and analysis infrastructure, and by the sheer size of the computing deployment.
Our results confirm that some of the SMART Copyback parameters are well correlated with higher failure probabilities: first errors in reallocation, offline reallocation, and probational counts are all strongly correlated with higher failure probabilities. Despite those strong correlations, we find that failure-prediction models based on SMART Copyback parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed Physical Disks showed no SMART error signals whatsoever. This result suggests that SMART Copyback models are more useful for predicting trends in large aggregate populations than for individual components. It also suggests that powerful predictive models need to make use of signals beyond those provided by SMART Copyback. Drive vendors build logic into the drives to make them smart, so that the user gets a warning signal, a "predictive failure", whenever the drive is about to go bad for some reason. Drives built with this kind of logic are called "SMART" drives, an acronym for
"Self-Monitoring Analysis and Reporting Technology". Using this technique, cryptanalysis data can be kept for a long time without failure.
VII. PERFORMANCE COMPARISON OF TABU SEARCH AND SIMULATED
ANNEALING
This section compares the performance of the Tabu Search algorithm with Simulated Annealing in recovering the key, as shown in Table 1. A number of experiments were carried out to outline the effectiveness of Tabu Search. The Tabu Search algorithm was coded in MATLAB 7 and tested on more than sixty different ciphertexts. Among unigram, bigram and trigram statistics, the unigram is the most useful, and the benefit of trigrams over bigrams is small. For S-DES, the required key size is 10 bits, and the plaintext and ciphertext blocks are 8 bits each. Initially, a feasible trial-solution pool of size 40 is taken, with a chromosome size of 10 bits; that is, 40 sets of 10-bit keys are chosen at random and the known ciphertext is decrypted using these initial keys. The results are promising, yielding solutions in between 60 and 112 seconds on a standard Intel Pentium desktop computer with a 2.93 GHz processor and 2 GB of RAM. Four independent runs were executed for each of the 60 problems.
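The unigram/bigram/trigram comparison above rests on a frequency-based cost function: decrypt the ciphertext with a candidate key and measure how far the n-gram statistics of the result are from known language statistics. A minimal unigram sketch follows; the frequency table is an illustrative subset of published English letter frequencies, not the exact values used in the paper.

```python
from collections import Counter

# Illustrative subset of English unigram frequencies (assumed values,
# for demonstration only; a real attack uses a full published table).
EXPECTED_UNIGRAM = {'E': 0.127, 'T': 0.091, 'A': 0.082, 'O': 0.075}

def ngram_freqs(text, n=1):
    """Relative n-gram frequencies of a candidate decryption."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return {g: c / total for g, c in counts.items()}

def cost(decrypted, expected=EXPECTED_UNIGRAM, n=1):
    """Sum of absolute differences between observed and expected n-gram
    frequencies; lower cost means more language-like, i.e. a better key."""
    observed = ngram_freqs(decrypted.upper(), n)
    grams = set(expected) | set(observed)
    return sum(abs(expected.get(g, 0.0) - observed.get(g, 0.0))
               for g in grams)
```

English-like text scores lower than random text, e.g. `cost('TEATIME') < cost('XXXXXXX')`, which is what lets the search rank candidate keys.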
The parameter settings were: initialization 40, chromosome size 10 bits, number of iterations 10, and a best-worst mating scheme. The total number of generations taken was 10, and the key was recovered in 5 generations on average. The size of the search space used by TS is only 40, whereas in the Simulated Annealing attack it is 65; thus, in the case of cryptanalysis using TS, there is a reduction in search space by a factor of approximately 4. In the worst case, the number of generations increased to 15.
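Under the setup above (a pool of 40 random 10-bit keys, 10 iterations, a tabu list of recently visited keys), the Tabu Search loop can be sketched as follows. The cost function here is a stand-in, Hamming distance to a known key, so the toy run is self-checking; in the actual attack the cost would come from decrypting the ciphertext and scoring its letter statistics.

```python
import random

def tabu_search(cost, key_bits=10, pool=40, iterations=10, tabu_size=40):
    """Minimal tabu search over 10-bit keys: start from the best of 40
    random keys, move each iteration to the best non-tabu single-bit-flip
    neighbour (even if it is worse), and bar recently visited keys."""
    population = [tuple(random.randint(0, 1) for _ in range(key_bits))
                  for _ in range(pool)]
    current = min(population, key=cost)
    best, best_cost = current, cost(current)
    tabu = [current]                               # recently visited keys
    for _ in range(iterations):
        neighbours = [current[:i] + (current[i] ^ 1,) + current[i + 1:]
                      for i in range(key_bits)]
        neighbours = [k for k in neighbours if k not in tabu]
        if not neighbours:                         # all moves tabu: stop
            break
        current = min(neighbours, key=cost)        # best admissible move
        tabu.append(current)
        if len(tabu) > tabu_size:                  # bounded tabu memory
            tabu.pop(0)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Self-checking toy run: recover a known 10-bit key, scoring each
# candidate by its Hamming distance to the target.
random.seed(0)
target = (1, 0, 1, 1, 0, 0, 1, 0, 1, 0)
key, c = tabu_search(lambda k: sum(a != b for a, b in zip(k, target)))
```

Because single-bit flips strictly reduce Hamming distance and the tabu list only ever contains keys farther from the target, the toy run converges to the target key within the 10 iterations.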
In this section, a number of experiments are reported that outline the effectiveness of both algorithms described above. The purpose of these experiments is to compare the performance of the Simulated Annealing approach with the Tabu Search approach for the cryptanalysis of the simplified S-DES algorithm. The experiments were implemented in MATLAB 7 on a Pentium IV (2.93 GHz). Experimental results were generated with 150 runs per data point: ten different messages were created for both algorithms, each algorithm was run 20 times per message, and the best result for each message was averaged to produce each data point.
This paper reports the average number of key elements (out of 10) correctly recovered versus the amount of ciphertext, together with the computation time needed to recover the keys from the search space. The experimental results cover amounts of ciphertext ranging from 100 to 1000 characters, and the Tabu Search results are better than the Simulated Annealing results. With 1000 characters of ciphertext, Tabu Search took ten minutes to recover a key with 9 of 10 bits matched, whereas Simulated Annealing took fifteen minutes to recover a key with only 7 of 10 bits matched. Tabu Search can be extended to attack DES, which uses a 64-bit key.
VIII. CONCLUSIONS
This paper has demonstrated that Tabu Search and Simulated Annealing are well suited to the cryptanalysis of the Simplified Data Encryption Standard, and these techniques therefore hold considerable promise for attacks on ciphers. The time complexity of the proposed approach is reduced drastically compared with the Simulated Annealing algorithm. Experimental results show better performance for Tabu Search than for Simulated Annealing, and only a few parameters need to be tuned for the best possible performance. Though S-DES is a simple encryption algorithm, this is a promising method that can be adapted to handle more complex block ciphers such as DES and AES, and the cost function used here can be applied to other block ciphers as well. Future work will extend this approach to attacking the DES and AES ciphers. Cryptanalysis data could be stored for a long time with the help of the Self-Monitoring Analysis and Reporting Technology (SMART) Copyback technique.
REFERENCES
[1]. Garg Poonam, Shastri Aditya, Agarwal D. C., "Genetic Algorithm, Tabu Search & Simulated Annealing Attack on Transposition Cipher", Proceedings of the Third AIMS International Conference on Management at IIMA, 2006, pp. 983-989.
[2]. Frederick S. Hillier, Gerald J. Lieberman, "Introduction to Operations Research: Concepts and Cases", Eighth Edition, McGraw-Hill, 2009.
[3]. William Stallings, "Cryptography and Network Security: Principles and Practices", Fourth Edition, McGraw-Hill, 2003.
[4]. Behrouz A. Forouzan, "Cryptography and Network Security", First Edition, McGraw-Hill, 2006.
[5]. James Kennedy and Russell Eberhart, "Particle Swarm Optimization", Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.