IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This document provides an introduction to deep learning. It discusses how deep learning uses multiple layers of nonlinear processing to automatically extract features from data, avoiding the need for manual feature engineering. Deep belief networks, which are composed of stacked restricted Boltzmann machines, are a widely used deep learning model. Training deep networks is challenging, but this is addressed by an unsupervised layer-wise pretraining approach followed by supervised fine-tuning of the entire network. The document reviews literature on deep learning models and applications.
Encryption and decryption are complementary methods used to ensure the secure exchange of messages and other sensitive documents and information. Encryption plays a major role in our technology-driven lives: it converts a message into coded or scrambled form. The Advanced Encryption Standard (AES) is a specification for the encryption of electronic data. It has been adopted by the U.S. government and is now used worldwide. AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data. This paper defines a method to enhance the block and key length of conventional AES.
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTERING (cscpconf)
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy Clustering (DIBNNFC) is proposed to classify semi-supervised data; it is based on the concepts of binary neural networks and geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. The approach is semi-supervised: the training samples are semi-labelled, i.e. labels are known for some samples and unknown for others. The method starts with classification, performed using the ETL algorithm, which forms a number of classes. Each class is then treated as a region, and the average of each region is calculated separately; these averages serve as region centres for clustering with the FCM algorithm. Once clustering and labelling of the semi-supervised data are complete, all samples are classified by DIBNNFC. The method proposed here is exhaustively tested on different benchmark datasets, and it is found that, as the value of the training parameter increases, both the number of hidden neurons and the training time decrease. Results are reported on a real character-recognition dataset and compared with an existing semi-supervised classifier; the proposed approach, learned semi-supervised, leads to higher classification accuracy.
The document summarizes hierarchical clustering techniques. It discusses two main types of hierarchical clustering - agglomerative and divisive. It presents an example dendrogram to illustrate hierarchical clustering. It also summarizes a research paper on a new algorithm called CLUBS that performs faster and more accurate hierarchical clustering compared to existing algorithms. The document concludes by discussing experiments applying hierarchical clustering on two biomedical datasets containing gene expression data to group patients and cell samples.
This document provides an overview of communication complexity and deterministic communication complexity. It begins with defining the problem setup, which involves two parties (Alice and Bob) trying to compute a function f based on their private inputs using the minimum communication. It then discusses protocol trees, which model communication protocols, and shows they partition the input space into rectangles. It introduces combinatorial rectangles and shows how the minimum number of rectangles needed to partition the space can provide a lower bound on communication complexity. The document also discusses fooling sets, another technique to lower bound communication complexity, and previews upcoming topics on nondeterministic and randomized communication complexity.
The document describes the Rough K-Means clustering algorithm. It takes a dataset as input and outputs lower and upper approximations of K clusters. It works as follows:
1. Objects are randomly assigned to initial clusters. Cluster centroids are then computed.
2. Objects are assigned to clusters based on the ratio of their distance to closest versus second closest centroid. Objects on the boundary may belong to multiple clusters.
3. Cluster centroids are recomputed based on the new cluster assignments. The process repeats until cluster centroids converge.
An example is provided to illustrate the algorithm on a sample dataset with 6 objects and 2 features.
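The ratio test and the lower/upper bookkeeping described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the sample points, the ratio threshold, the lower-approximation weight, and the deterministic farthest-point initialization (the text assigns objects to initial clusters randomly) are all assumptions made here.

```python
import math

def dist(a, b):
    return math.dist(a, b)

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def rough_kmeans(points, k, w_lower=0.75, threshold=1.4, iters=25):
    # deterministic farthest-point initialization (an assumption; the
    # algorithm as described assigns objects to initial clusters randomly)
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist(p, c) for c in centroids)))
    for _ in range(iters):
        lower = [[] for _ in range(k)]
        boundary = [[] for _ in range(k)]
        for p in points:
            order = sorted(range(k), key=lambda j: dist(p, centroids[j]))
            near, second = order[0], order[1]
            d1 = dist(p, centroids[near]) or 1e-12
            # ratio test: if the second-closest centroid is nearly as close,
            # p is a boundary object and joins the upper approximation of both
            if dist(p, centroids[second]) / d1 <= threshold:
                boundary[near].append(p)
                boundary[second].append(p)
            else:
                lower[near].append(p)
        # centroid update: weighted mix of lower-approximation and boundary means
        for j in range(k):
            if lower[j] and boundary[j]:
                lo, bo = mean(lower[j]), mean(boundary[j])
                centroids[j] = tuple(w_lower * a + (1 - w_lower) * b
                                     for a, b in zip(lo, bo))
            elif lower[j] or boundary[j]:
                centroids[j] = mean(lower[j] or boundary[j])
    upper = [lower[j] + boundary[j] for j in range(k)]
    return lower, upper

pts = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.2, 4.8), (3.0, 3.0)]
low, up = rough_kmeans(pts, k=2)
```

On this toy dataset the mid-point (3.0, 3.0) ends up in the upper approximation of both clusters, while the five unambiguous points land in exactly one lower approximation each.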
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Digital Watermarking through Embedding of Encrypted and Arithmetically Compre...IJNSA Journal
In this paper, we encrypt a text into an array of data bits through an arithmetic coding technique. For this, we assign a unique range both to individual characters and to groups formed from them. Using unique ranges alone, we may assign ranges to only 10 characters; to encrypt a larger character set, every character must be assigned a range together with a group range of hundreds, thousands, and so on. A long textual message to be encrypted is subdivided into a number of groups of a few characters each. Each group of characters is then encrypted into a floating-point number within its group range using arithmetic coding, whereby it is automatically compressed. Depending on the key, the data bits from the text are placed at suitable nonlinear pixel and bit positions in the image. In the proposed technique, both the key length and the number of characters for any encryption process are variable.
The method of identifying similar groups of data in a data set is called clustering. Entities in each group are comparatively more similar to entities of that group than those of the other groups.
Fuzzy C-means is an extension of k-means clustering that allows data points to belong to multiple clusters simultaneously. It assigns a membership value between 0 and 1 to each data point for each cluster, indicating its degree of membership. The example demonstrates fuzzy C-means clustering on a dataset with 6 data points and 2 clusters, calculating the membership values and distances over multiple iterations until the cluster centroids stabilize.
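The membership and centroid updates behind fuzzy C-means can be sketched as follows. The sample data, the fuzzifier m, the iteration count, and the deterministic farthest-point initialization are assumptions of this sketch, not taken from the summarized example.

```python
import math

def dist(a, b):
    return math.dist(a, b) or 1e-9  # avoid division by zero at a centre

def fuzzy_cmeans(points, c, m=2.0, iters=40):
    # deterministic farthest-point init (an assumption; implementations
    # often initialize memberships randomly instead)
    centres = [points[0]]
    while len(centres) < c:
        centres.append(max(points, key=lambda p: min(dist(p, v) for v in centres)))
    u = [[0.0] * len(points) for _ in range(c)]
    for _ in range(iters):
        # membership update: inverse-distance weighting with fuzzifier m,
        # so every point gets a value in [0, 1] for every cluster
        for j, p in enumerate(points):
            for i in range(c):
                s = sum((dist(p, centres[i]) / dist(p, centres[k])) ** (2 / (m - 1))
                        for k in range(c))
                u[i][j] = 1.0 / s
        # centre update: membership-weighted average of all points
        for i in range(c):
            w = [u[i][j] ** m for j in range(len(points))]
            tot = sum(w)
            centres[i] = tuple(sum(wj * p[d] for wj, p in zip(w, points)) / tot
                               for d in range(len(points[0])))
    return u, centres

data = [(1, 1), (1.5, 2), (3, 4), (5, 7), (3.5, 5), (4.5, 5)]
u, centres = fuzzy_cmeans(data, c=2)
```

Unlike hard k-means, every point contributes to every centre, weighted by its membership, and each point's memberships across the clusters sum to 1.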
The document discusses various techniques for clustering and dimensionality reduction of web documents. It introduces machine learning clustering methods like k-means clustering and discusses challenges like handling different cluster sizes and shapes. It also covers dimensionality reduction methods like principal component analysis (PCA) and locality-sensitive hashing that can be used to cluster high dimensional web document datasets by reducing their dimensionality.
IRJET- Chatbot Using Gated End-to-End Memory NetworksIRJET Journal
The document describes a proposed chatbot system that uses a gated end-to-end memory network model for hospital appointment booking. The model is trained on dialog data consisting of user utterances and bot responses related to booking appointments. It uses an attention mechanism over the dialog memory to select relevant parts of the conversation. The model is trained end-to-end to dynamically regulate interactions with the memory. Experiments show it can handle new combinations of fields when booking appointments in a simulated hospital reservation scenario.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
This document proposes a new digital image encryption technique based on multi-scroll chaotic delay differential equations (DDEs). The technique uses a XOR operation between separated binary planes of a grayscale image and a shuffled attractor image from a DDE. Security keys include DDE parameters like initial conditions, time constants, and simulation time. Experimental results using a 512x512 Lena image in MATLAB demonstrate the DDE dynamics, encryption/decryption security through histograms, power spectrums, and image correlations. Wrong key decryption is also shown. The technique offers potential for simple yet secure image transmission applications.
This document provides an introduction to radial basis function (RBF) interpolation of scattered data. It discusses how RBFs choose basis functions centered at data points to guarantee a well-posed interpolation problem. Common RBF kernels include the multiquadric, inverse multiquadric, and Gaussian functions. While RBF interpolation is guaranteed to have a unique solution, it can still be ill-conditioned depending on the shape parameter choice. Considerations for using RBFs include that the interpolation matrix is dense, requiring optimization of the shape parameter, and interpolation error increases near boundaries.
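The "one basis function per data point" construction described above can be sketched for 1-D scattered data with the Gaussian kernel. The sample values and the shape parameter eps are assumptions of this sketch; solving the dense interpolation matrix is done here with plain Gauss-Jordan elimination for self-containment.

```python
import math

def gauss_solve(A, b):
    # Gauss-Jordan elimination with partial pivoting on the dense system
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rbf_interpolant(xs, ys, eps=1.0):
    # Gaussian kernel phi(r) = exp(-(eps*r)^2), one basis function centred
    # at each data point; eps is the shape parameter discussed above
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]  # dense, symmetric
    coeffs = gauss_solve(A, ys)
    return lambda x: sum(c * phi(abs(x - xi)) for c, xi in zip(coeffs, xs))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.0, 1.5]
f = rbf_interpolant(xs, ys)
```

By construction the interpolant reproduces the data exactly at the centres; the conditioning of the dense matrix A degrades as eps shrinks, which is the ill-conditioning trade-off the summary mentions.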
This document discusses different types of clustering analysis techniques in data mining. It describes clustering as the task of grouping similar objects together. The document outlines several key clustering algorithms including k-means clustering and hierarchical clustering. It provides an example to illustrate how k-means clustering works by randomly selecting initial cluster centers and iteratively assigning data points to clusters and recomputing cluster centers until convergence. The document also discusses limitations of k-means and how hierarchical clustering builds nested clusters through sequential merging of clusters based on a similarity measure.
Counting and sorting are basic tasks that distributed systems rely on. The document discusses different approaches to distributed counting and sorting, including software combining trees, counting networks, and sorting networks. Counting networks such as the bitonic and periodic networks have depth O(log² w), where w is the network width. Sorting networks can sort in the same time complexity by exploiting an isomorphism between counting and sorting networks. Sample sorting is also discussed as a way to sort large datasets across multiple threads.
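The O(log² w) depth quoted above comes from the bitonic construction. As a hedged illustration (a plain in-memory bitonic sorter on one thread, not a distributed counting network), the same data-independent compare-exchange wiring can be written for power-of-two input sizes:

```python
def bitonic_sort(a):
    """Sort a list in place with the bitonic sorting network.

    len(a) must be a power of two; the network performs O(log^2 n) levels
    of compare-exchange operations whose wiring is independent of the data.
    """
    n = len(a)
    k = 2
    while k <= n:          # merge bitonic runs of length k
        j = k // 2
        while j > 0:       # compare-exchange with the partner at distance j
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

result = bitonic_sort([5, 1, 4, 2, 8, 7, 3, 6])
```

Because the comparison pattern is fixed in advance, the same wiring works as a hardware network or across threads, which is the isomorphism with counting networks the document exploits.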
Ieee a secure algorithm for image based information hiding with one-dimension...Akash Rawat
An IEEE paper on a secure algorithm for image-based information hiding using a one-dimensional chaotic system; the paper relates to image encryption.
This document presents the Noise Driven Encryption Algorithm (NDEA) for encrypting data at the bit level. It uses noise waves introduced over the bit patterns of plaintext to encrypt it. Windows of prime sized layers are selected from the bit patterns, and noise waves are made to propagate within the windows in concentric circular fashion, encrypting the text. For decryption, the reverse process is followed. The algorithm aims to make decryption complex by introducing randomization in window selection and noise propagation coordinates and intensity. It is claimed to produce unpredictable encrypted text that can be used for encrypting passwords and bank transactions. Pseudocodes for the encryption and decryption algorithms are also provided.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT (cscpconf)
This article presents part-of-speech tagging for Nepali text using a General Regression Neural Network (GRNN). The corpus is divided into two parts, training and testing, and the network is trained and validated on both. It is observed that 96.13% of words are tagged correctly on the training set, whereas 74.38% of words are tagged correctly on the testing set using the GRNN. The result is compared with the traditional Viterbi algorithm based on a Hidden Markov Model, which yields classification accuracies of 97.2% and 40% on the training and testing sets respectively. The GRNN-based POS tagger is thus more consistent than the traditional Viterbi decoding technique.
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
k-Means is a rather simple but well-known algorithm for grouping objects, i.e. clustering. Again, all objects need to be represented as a set of numerical features. In addition, the user has to specify the number of groups (referred to as k) to identify. Each object can be thought of as a feature vector in an n-dimensional space, n being the number of features used to describe the objects to cluster. The algorithm then randomly chooses k points in that vector space; these points serve as the initial centers of the clusters. Afterwards, each object is assigned to the center it is closest to. Usually the distance measure is chosen by the user and determined by the learning task. After that, a new center is computed for each cluster by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until it converges, and the algorithm can be proven to converge after a finite number of iterations. Several tweaks concerning the distance measure, initial center choice, and computation of new average centers have been explored, as well as the estimation of the number of clusters k, yet the main principle always remains the same. In this project we discuss the k-means clustering algorithm, its implementation, and its application to the problem of unsupervised learning.
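The assign-then-average loop just described can be sketched compactly. The sample data are made up, and a deterministic farthest-point initialization is substituted for the random center choice in the text (one of the "tweaks concerning initial center choice" it alludes to) so the run is reproducible.

```python
import math

def kmeans(data, k, iters=100):
    dist = math.dist
    # farthest-point initialization (an assumption; the text above chooses
    # the initial centers randomly)
    centres = [data[0]]
    while len(centres) < k:
        centres.append(max(data, key=lambda p: min(dist(p, c) for c in centres)))
    clusters = []
    for _ in range(iters):
        # assignment step: each object goes to the center it is closest to
        clusters = [[] for _ in range(k)]
        for p in data:
            clusters[min(range(k), key=lambda j: dist(p, centres[j]))].append(p)
        # update step: each center becomes the mean of its assigned objects
        new = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centres[j]
               for j, c in enumerate(clusters)]
        if new == centres:  # converged: assignments no longer change
            break
        centres = new
    return centres, clusters

data = [(1, 1), (1.5, 2), (3, 4), (5, 7), (3.5, 5), (4.5, 5), (3.5, 4.5)]
centres, clusters = kmeans(data, k=2)
```

On this toy dataset the loop settles after a few iterations into one small lower-left cluster and one larger upper-right cluster.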
Hierarchical clustering is a method of partitioning a set of data into meaningful sub-classes or clusters. It involves two approaches - agglomerative, which successively links pairs of items or clusters, and divisive, which starts with the whole set as a cluster and divides it into smaller partitions. Agglomerative Nesting (AGNES) is an agglomerative technique that merges clusters with the least dissimilarity at each step, eventually combining all clusters. Divisive Analysis (DIANA) is the inverse, starting with all data in one cluster and splitting it until each data point is its own cluster. Both approaches can be visualized using dendrograms to show the hierarchical merging or splitting of clusters.
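The agglomerative (AGNES-style) direction can be sketched with single-linkage dissimilarity; the points, the linkage choice, and stopping at a target cluster count (rather than building the full dendrogram) are assumptions of this illustration.

```python
import math

def single_linkage(points, target_k):
    # AGNES-style agglomeration: start with singleton clusters and repeatedly
    # merge the pair with the smallest single-linkage dissimilarity
    link = lambda A, B: min(math.dist(a, b) for a in A for b in B)
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]  # one merge = one dendrogram level
        del clusters[j]
    return clusters

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
groups = single_linkage(pts, target_k=3)
```

Running the loop all the way to a single cluster, and recording the merge order, yields exactly the dendrogram the summary describes; DIANA would traverse the same tree top-down instead.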
Adaptive blind multiuser detection under impulsive noise using principal comp...csandit
The document describes an adaptive blind multiuser detection method for asynchronous code division multiple access (CDMA) systems using principal component analysis (PCA) in impulsive noise environments. PCA is used to extract the principal components from the received signal without requiring training sequences or prior knowledge of channel characteristics. The PCA blind multiuser detector provides robust performance compared to knowledge-based detectors when signature waveforms and timing offsets of users are unknown. Simulation results show the proposed PCA method offers substantial gains over traditional subspace methods for multiuser detection.
ADAPTIVE BLIND MULTIUSER DETECTION UNDER IMPULSIVE NOISE USING PRINCIPAL COMP...csandit
In this paper we consider blind signal detection for an asynchronous code division multiple access (CDMA) system with principal component analysis (PCA) in impulsive noise. The blind multiuser detector requires no training sequences, unlike the conventional multiuser detection receiver, and the proposed PCA blind multiuser detector is robust compared with detectors that require knowledge of the signature waveforms and the timing of the user of interest. PCA is a statistical method for reducing the dimension of a dataset via spectral decomposition of its covariance matrix, i.e. its first- and second-order statistics are estimated. PCA makes no assumption about the independence of the data vectors: it searches for the linear combinations with the largest variances and, when several linear combinations are needed, considers variances in decreasing order of importance. PCA also improves the SNR of signals used for differential side-channel analysis. In contrast to other approaches, the linear minimum mean-square-error (MMSE) detector is obtained blindly; the detector does not use any training sequence, as subspace methods do, to detect the multiuser receiver, and the algorithm need not estimate the subspace rank, which reduces the computational complexity. Simulation results show that the new algorithm offers substantial performance gains over traditional subspace methods.
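The PCA step at the heart of the detector, extracting the dominant directions of the received signal's covariance from first- and second-order statistics alone, can be sketched generically. This is plain PCA via power iteration on a made-up dataset, not the blind CDMA detector itself.

```python
import math

def pca_first_component(data, iters=200):
    # centre the data, form the sample covariance matrix, then run power
    # iteration to extract its dominant eigenvector (first principal component)
    n, dim = len(data), len(data[0])
    means = [sum(row[d] for row in data) / n for d in range(dim)]
    X = [[row[d] - means[d] for d in range(dim)] for row in data]
    cov = [[sum(X[r][i] * X[r][j] for r in range(n)) / (n - 1)
            for j in range(dim)] for i in range(dim)]
    v = [1.0] * dim
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # renormalize each step
    return v

# toy data lying close to the direction (1, 1)
samples = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.05), (4.0, 4.0)]
v = pca_first_component(samples)
```

Repeating the extraction on the residual after projecting out each component yields the remaining components in decreasing order of variance, the ordering the abstract refers to; no training sequence is involved, only the data's own covariance.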
Software testing effort estimation with Cobb-Douglas function: a practical app... (eSAT Publishing House)
Digital Watermarking through Embedding of Encrypted and Arithmetically Compre...IJNSA Journal
In this paper, we have encrypted a text to an array of data bits through arithmetic coding technique. For this, we have assigned a unique range for both, a number of characters and groups using those. Using unique range we may assign range only 10 characters. If we want to encrypt a large number of characters, then every character has to assign a range with their group range of hundred, thousand and so on. Long textual message which have to encrypt, is subdivided into a number of groups with few characters. Then the group of characters is encrypted into floating point numbers concurrently to their group range by using arithmetic coding, where they are automatically compressed. Depending on key, the data bits from text are placed to some suitable nonlinear pixel and bit positions about the image. In the proposed technique, the key length and the number of characters for any encryption process is both variable
The method of identifying similar groups of data in a data set is called clustering. Entities in each group are comparatively more similar to entities of that group than those of the other groups.
Fuzzy C-means is an extension of k-means clustering that allows data points to belong to multiple clusters simultaneously. It assigns a membership value between 0 and 1 to each data point for each cluster, indicating the likelihood of membership. The example demonstrates fuzzy C-means clustering on a dataset with 6 data points and 2 clusters, calculating the membership values and distances over multiple iterations until the cluster centroids stabilize.
The document discusses various techniques for clustering and dimensionality reduction of web documents. It introduces machine learning clustering methods like k-means clustering and discusses challenges like handling different cluster sizes and shapes. It also covers dimensionality reduction methods like principal component analysis (PCA) and locality-sensitive hashing that can be used to cluster high dimensional web document datasets by reducing their dimensionality.
IRJET- Chatbot Using Gated End-to-End Memory NetworksIRJET Journal
The document describes a proposed chatbot system that uses a gated end-to-end memory network model for hospital appointment booking. The model is trained on dialog data consisting of user utterances and bot responses related to booking appointments. It uses an attention mechanism over the dialog memory to select relevant parts of the conversation. The model is trained end-to-end to dynamically regulate interactions with the memory. Experiments show it can handle new combinations of fields when booking appointments in a simulated hospital reservation scenario.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document proposes a new digital image encryption technique based on multi-scroll chaotic delay differential equations (DDEs). The technique uses a XOR operation between separated binary planes of a grayscale image and a shuffled attractor image from a DDE. Security keys include DDE parameters like initial conditions, time constants, and simulation time. Experimental results using a 512x512 Lena image in MATLAB demonstrate the DDE dynamics, encryption/decryption security through histograms, power spectrums, and image correlations. Wrong key decryption is also shown. The technique offers potential for simple yet secure image transmission applications.
This document provides an introduction to radial basis function (RBF) interpolation of scattered data. It discusses how RBFs choose basis functions centered at data points to guarantee a well-posed interpolation problem. Common RBF kernels include the multiquadric, inverse multiquadric, and Gaussian functions. While RBF interpolation is guaranteed to have a unique solution, it can still be ill-conditioned depending on the shape parameter choice. Considerations for using RBFs include that the interpolation matrix is dense, requiring optimization of the shape parameter, and interpolation error increases near boundaries.
This document discusses different types of clustering analysis techniques in data mining. It describes clustering as the task of grouping similar objects together. The document outlines several key clustering algorithms including k-means clustering and hierarchical clustering. It provides an example to illustrate how k-means clustering works by randomly selecting initial cluster centers and iteratively assigning data points to clusters and recomputing cluster centers until convergence. The document also discusses limitations of k-means and how hierarchical clustering builds nested clusters through sequential merging of clusters based on a similarity measure.
Counting and sorting are basic tasks that distributed systems rely on. The document discusses different approaches for distributed counting and sorting, including software combining trees, counting networks, and sorting networks. Counting networks like bitonic and periodic networks have depth of O(log2w) where w is the network width. Sorting networks can sort in the same time complexity by exploiting an isomorphism between counting and sorting networks. Sample sorting is also discussed as a way to sort large datasets across multiple threads.
Ieee a secure algorithm for image based information hiding with one-dimension...Akash Rawat
ieee a secure algorithm for image based information hiding with one-dimensional chaotic systems.It used 1 dimensional chaotic system.ieee paper related for image encryption
This document presents the Noise Driven Encryption Algorithm (NDEA) for encrypting data at the bit level. It uses noise waves introduced over the bit patterns of plaintext to encrypt it. Windows of prime sized layers are selected from the bit patterns, and noise waves are made to propagate within the windows in concentric circular fashion, encrypting the text. For decryption, the reverse process is followed. The algorithm aims to make decryption complex by introducing randomization in window selection and noise propagation coordinates and intensity. It is claimed to produce unpredictable encrypted text that can be used for encrypting passwords and bank transactions. Pseudocodes for the encryption and decryption algorithms are also provided.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents part-of-speech tagging for Nepali text using a General Regression Neural Network (GRNN). The corpus is divided into two parts, training and testing, and the network is trained and validated on both. Using GRNN, 96.13% of words are tagged correctly on the training set, whereas 74.38% are tagged correctly on the testing set. The result is compared with the traditional Viterbi algorithm based on a Hidden Markov Model, which yields classification accuracies of 97.2% and 40% on the training and testing sets respectively. The GRNN-based POS tagger is thus more consistent than traditional Viterbi decoding.
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
k-Means is a rather simple but well-known algorithm for grouping objects, i.e. clustering. Again, all objects need to be represented as a set of numerical features, and in addition the user has to specify the number of groups (referred to as k) to identify. Each object can be thought of as a feature vector in an n-dimensional space, n being the number of features used to describe the objects. The algorithm then randomly chooses k points in that vector space; these points serve as the initial cluster centers. Afterwards, every object is assigned to the center it is closest to; the distance measure is usually chosen by the user and determined by the learning task. After that, a new center is computed for each cluster by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until it converges, and the algorithm can be proven to converge after a finite number of iterations. Several tweaks concerning the distance measure, the initial center choice, and the computation of new average centers have been explored, as well as the estimation of the number of clusters k, yet the main principle always remains the same. In this project we discuss the k-means clustering algorithm, its implementation, and its application to the problem of unsupervised learning.
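The steps described above (random centers, nearest-center assignment, mean update, repeat until convergence) can be sketched as follows; the 2-D sample points, Euclidean distance, and seed are illustrative choices, not from the text.

```python
import random

def assign(points, centers):
    """Assignment step: each point joins the cluster of its nearest center."""
    clusters = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)),
                key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
        clusters[i].append(p)
    return clusters

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's-algorithm sketch of the k-means procedure for 2-D points."""
    centers = random.Random(seed).sample(points, k)   # random initial centers
    for _ in range(iters):
        clusters = assign(points, centers)
        # Update step: new center = mean of the feature vectors in the cluster
        centers_new = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                       if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        if centers_new == centers:                    # converged: fixed point reached
            break
        centers = centers_new
    return centers, assign(points, centers)

pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, 2)
print(sorted(centers))
```

At termination every point sits in the cluster of its nearest returned center, which is exactly the fixed-point condition the convergence proof relies on.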
Hierarchical clustering is a method of partitioning a set of data into meaningful sub-classes or clusters. It involves two approaches - agglomerative, which successively links pairs of items or clusters, and divisive, which starts with the whole set as a cluster and divides it into smaller partitions. Agglomerative Nesting (AGNES) is an agglomerative technique that merges clusters with the least dissimilarity at each step, eventually combining all clusters. Divisive Analysis (DIANA) is the inverse, starting with all data in one cluster and splitting it until each data point is its own cluster. Both approaches can be visualized using dendrograms to show the hierarchical merging or splitting of clusters.
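The AGNES side of this can be sketched on 1-D points: repeatedly merge the two clusters with the least dissimilarity until one cluster remains, recording the merge order (the dendrogram, bottom-up). Single linkage is used here as an illustrative dissimilarity; the document does not fix one.

```python
def agnes(points):
    """Toy agglomerative nesting: merge least-dissimilar clusters until one remains."""
    clusters = [[p] for p in points]   # start with every point in its own cluster
    merges = []                        # record of the dendrogram, bottom-up
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((sorted(clusters[i] + clusters[j]), d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

for members, dist in agnes([1.0, 1.2, 5.0, 5.1, 9.0]):
    print(members, dist)
```

DIANA would run the same record in reverse: start from the full set and split until singletons remain.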
Adaptive blind multiuser detection under impulsive noise using principal comp...csandit
The document describes an adaptive blind multiuser detection method for asynchronous code division multiple access (CDMA) systems using principal component analysis (PCA) in impulsive noise environments. PCA is used to extract the principal components from the received signal without requiring training sequences or prior knowledge of channel characteristics. The PCA blind multiuser detector provides robust performance compared to knowledge-based detectors when signature waveforms and timing offsets of users are unknown. Simulation results show the proposed PCA method offers substantial gains over traditional subspace methods for multiuser detection.
ADAPTIVE BLIND MULTIUSER DETECTION UNDER IMPULSIVE NOISE USING PRINCIPAL COMP...csandit
In this paper we consider blind signal detection for an asynchronous code division multiple access (CDMA) system with principal component analysis (PCA) in impulsive noise. The blind multiuser detector requires no training sequences, unlike the conventional multiuser detection receiver, and the proposed PCA blind multiuser detector is robust compared with detectors based on knowledge of the signature waveforms and the timing of the user of interest. PCA is a statistical method for reducing the dimension of a data set via spectral decomposition of its covariance matrix, i.e., the first- and second-order statistics are estimated. PCA makes no assumption about the independence of the data vectors; it searches for the linear combinations with the largest variances and, when several linear combinations are needed, considers variances in decreasing order of importance. PCA also improves the SNR of signals used for differential side-channel analysis. In contrast to other approaches, the linear minimum mean-square-error (MMSE) detector is obtained blindly; the detector does not use any training sequence, as subspace methods do, to detect the multiuser receiver, and the algorithm need not estimate the subspace rank, which reduces the computational complexity. Simulation results show that the new algorithm offers substantial performance gains over the traditional subspace methods.
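The covariance-eigendecomposition view of PCA described above can be sketched as follows; the synthetic 2-D data (not CDMA signals) and the function name are illustrative.

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the sample covariance (second-order statistics)."""
    Xc = X - X.mean(axis=0)                # remove the mean (first-order statistics)
    cov = Xc.T @ Xc / (len(X) - 1)         # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)       # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1]         # variances in decreasing order of importance
    return vecs[:, order[:n_components]], vals[order[:n_components]]

# Synthetic data whose dominant direction of variance is (1, 2)
rng = np.random.default_rng(0)
t = rng.normal(size=500)
X = np.column_stack([t, 2.0 * t + 0.01 * rng.normal(size=500)])
components, variances = pca(X, 1)
print(components.ravel())
```

The leading component recovers the direction of largest variance, which is the "principal component" a blind detector extracts from the received signal without training sequences.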
Software testing effort estimation with cobb douglas function a practical app...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Analytical study and implementation of digital excitation system for diesel g...eSAT Publishing House
Hardback solution to accelerate multimedia computation through mgp in cmpeSAT Publishing House
This document discusses using GIS to assess topographical aspects for locating infrastructure facilities in hilly regions. It notes that traditional 2D maps and sketches used by engineers do not fully consider topography. The study develops a GIS-based methodology to analyze topographical factors and locate proposed facilities at a college campus in India as a case study. The objectives are to model the existing topography and facilities in 3D using GIS to identify suitable and adverse locations for new infrastructure, allowing more sustainable development of the hilly region.
This document discusses several dynamic thresholding approaches for segmenting continuous Bangla speech sentences into words or subwords. It proposes using k-means clustering, fuzzy c-means clustering (FCM), and Otsu's thresholding method to determine optimal thresholds for segmentation. K-means and FCM clustering produce better segmentation results than Otsu's method. The algorithms are implemented in MATLAB and achieve an average segmentation accuracy of 94%. Blocking black areas and boundary detection techniques are used to properly detect word boundaries in continuous speech and label the segmented units.
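Of the three thresholding approaches named above, Otsu's method is the simplest to sketch: pick the threshold that maximizes the between-class variance of the two resulting groups. The 8-bit value range and sample data below are illustrative, not the paper's speech features.

```python
def otsu_threshold(values, levels=256):
    """Return the level that maximizes between-class variance (Otsu's method)."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                     # weight of the class <= t
        if w0 == 0:
            continue
        w1 = total - w0                   # weight of the class > t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                   # class means
        mu1 = (total_sum - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

samples = [10, 12, 11, 13, 200, 198, 205, 199]   # bimodal toy data
print(otsu_threshold(samples))                   # 13: splits the two modes
```

K-means and FCM generalize this idea by clustering the values directly rather than scanning a single scalar threshold.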
Enhanced equally distributed load balancing algorithm for cloud computingeSAT Publishing House
Design and development of high frequency resonant transition convertereSAT Publishing House
Distance protection of hvdc transmission line with novel fault location techn...eSAT Publishing House
“Optimizing the data transmission between multiple nodes during link failure ...eSAT Publishing House
“Development and performance analysis of a multi evaporating system”eSAT Publishing House
1) The document discusses the Roucairol and Carvalho optimization approach for the Ricart-Agrawala distributed mutual exclusion algorithm.
2) The optimization allows a site to re-enter the critical section multiple times without re-requesting permission, reducing the number of messages per critical-section entry to between 0 and 2(N-1).
3) However, this compromises fairness by allowing a site to monopolize the critical section, potentially causing starvation for other sites' requests.
A novel method for detecting and characterizing low velocity impact (lvi) in ...eSAT Publishing House
Energy efficient task scheduling algorithms for cloud data centerseSAT Publishing House
IRJET- Low Complexity Pipelined FFT Design for High Throughput and Low Densit...IRJET Journal
This document describes a low complexity pipelined FFT design for high throughput applications. It proposes a feedforward FFT architecture based on rotator allocation to reduce the number and complexity of rotators. The key aspects are:
1) It uses a divide-and-conquer approach to split the FFT computation into stages, with butterflies operating on data whose indexes differ in the stage bit position.
2) It allocates the index bits into serial and parallel dimensions to optimize the distribution of rotations across stages. This aims to minimize the number of rotators and keep rotations in the same serial allocation set.
3) The proposed approach is shown to reduce the number and complexity of rotators in the FFT architecture compared
Iaetsd computational performances of ofdm usingIaetsd Iaetsd
This document discusses computational performances of OFDM using different pruned radix FFT algorithms. It introduces various FFT techniques such as radix-2, radix-4, radix-8, mixed radix and split radix. It then proposes an input zero traced radix DIF FFT pruning (IZTFFTP) algorithm to improve the efficiency of these FFT techniques when there are many zero valued inputs in OFDM. The computational complexity of implementing different radix FFTs with and without this pruning technique is calculated, and results show pruning provides more efficient OFDM performance in terms of reducing calculations.
Resourceful fast dht algorithm for vlsi implementation by split radix algorithmeSAT Publishing House
Performance evaluations of grioryan fft and cooley tukey fft onto xilinx virt...csandit
A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation. Fourier frequency analysis is widely used in equalization of audio recordings, X-ray crystallography, artefact removal in neurological signal and image processing, and voice activity detection in brain-stem speech evoked potentials; in speech processing, spectrograms are used to identify phonetic sounds, and so on. The Discrete Fourier Transform (DFT) is a principal mathematical method for frequency analysis, and the different ways of splitting the DFT give rise to various fast algorithms. In this paper, we present the implementation of two fast DFT algorithms in order to evaluate their performance: the popular radix-2 Cooley-Tukey fast Fourier transform (FFT) [1] and the Grigoryan FFT based on splitting by the paired transform [2]. We evaluate these algorithms by implementing them on the Xilinx Virtex-II Pro [3] and Virtex-5 [4] FPGAs, developing our own FFT processor architectures. Finally, we show that the Grigoryan FFT runs faster than the Cooley-Tukey FFT and is consequently useful for higher sampling rates; operating at higher sampling rates is a challenge in DSP applications.
PERFORMANCE EVALUATIONS OF GRIORYAN FFT AND COOLEY-TUKEY FFT ONTO XILINX VIRT...cscpconf
The document proposes a new method for encrypting two images into a single encrypted image using generalized weighted fractional Fourier transform (GWFRFT) with double random phase encoding. The encryption process involves applying pixel scrambling, phase encoding, and two rounds of GWFRFT with random phase masks on the combined image signal. This technique is shown to provide comparable security to the Advanced Encryption Standard (AES) with a 232-bit key size through a high number of possible permutations in the GWFRFT parameters and orders.
This document summarizes a research paper that proposes a novel architecture for implementing a 1D lifting integer wavelet transform (IWT) using residue number system (RNS). The key aspects covered are:
1) RNS offers advantages over binary representations for digital signal processing by avoiding carry propagation. A ROM-based approach is proposed for RNS division.
2) The lifting scheme for discrete wavelet transforms is summarized, including split, predict, and update stages.
3) A novel RNS-based architecture is proposed using three main blocks - split, predict, and update - that repeat at each decomposition level. Pipelined implementations of the predict and update blocks are detailed.
Design of Scalable FFT architecture for Advanced Wireless Communication Stand...IOSRJECE
Nowadays, numerous wireless communication standards have raised additional stringent requirements on both throughput and flexibility for FFT computation. Advanced wireless systems support multiple standards to satisfy the demands of user applications, and a wireless system supporting multiple standards must also satisfy the performance requirements of all of them, which is a challenge when designing a system. Fast Fourier transformation, a kernel processing task in communication systems, has been studied intensively for efficient software and hardware implementations. To design an efficient system, its performance-critical components must themselves be designed efficiently: each system must meet stringent design parameters such as high speed, low power, low area, low cost, high flexibility and high scalability, so designing an FFT processor that supports multiple wireless standards while meeting these performance requirements is a difficult task. This paper proposes a highly efficient scalable architecture, the software tool design, and the design implementation. The FFT computation flow is reconstructed into a scalable structure, so the FFT can easily be expanded to any-point FFT computation. The design satisfies the stated parameter conditions and gives proper, efficient outputs compared with other platforms.
Ajay Kumar, Ph.D. research scholar at the National Institute of Technology. Email: ajaymodaliger@gmail.com
In this presentation I explain how the computational time complexity of the Discrete Fourier Transform (DFT) is reduced from O(n^2) to O(n log n) through the radix-2 FFT algorithm. I also introduce how the radix-2 FFT can be used in encrypted signal processing applications by considering the homomorphic properties (RSA) of the Paillier cryptosystem.
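The O(n^2) to O(n log n) reduction comes from splitting the DFT into even- and odd-indexed halves and reusing each half twice via the butterfly. A minimal recursive radix-2 decimation-in-time sketch (power-of-two lengths assumed):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT for a power-of-two-length sequence."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                       # DFT of even-indexed samples
    odd = fft(x[1::2])                        # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle            # butterfly, top half
        out[k + n // 2] = even[k] - twiddle   # butterfly, bottom half
    return out

print(fft([1, 1, 1, 1, 0, 0, 0, 0]))
```

Each level does O(n) butterfly work over log2(n) levels, versus n multiply-accumulates per output bin for the direct DFT.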
Mapping between Discrete Cosine Transform of Type-VI/VII and Discrete Fourier...IJERA Editor
In this paper, the mapping between the discrete cosine transform of types VI and VII (DCT-VI/VII) of even length N and the (2N - 1)-point one-dimensional discrete Fourier transform (1D-DFT) is presented. The technique used is the mapping of the real-valued data sequence to an intermediate sequence used as an input to the DFT.
IRJET - Design and Implementation of FFT using Compressor with XOR Gate TopologyIRJET Journal
This document describes a design for implementing a Fast Fourier Transform (FFT) using an adder compressor with a new XOR gate topology. The goals are to increase power efficiency, reduce logic utilization (LUTs), and decrease time complexity/delay compared to other FFT implementations. An adder compressor is proposed that uses XOR gates to compress 4 input bits into 2 output bits (sum and carry), allowing parallel addition without carry propagation. Simulation results on a Xilinx FPGA show the compressor-based FFT uses fewer LUTs, consumes less power, and has a shorter delay compared to an FFT using a Booth multiplier.
Implementation Of Grigoryan FFT For Its Performance Case Study Over Cooley-Tu...ijma
This document discusses the implementation and performance comparison of two fast Fourier transform (FFT) algorithms - the Cooley-Tukey FFT and the Grigoryan FFT - on three Xilinx FPGAs. The Grigoryan FFT uses a decomposition based on paired transforms, while the Cooley-Tukey FFT uses a radix-2 decomposition. Both algorithms were implemented on Virtex-II Pro, Virtex-5, and Virtex-4 FPGAs. The results showed that the Grigoryan FFT operated at higher sampling rates and was faster than the Cooley-Tukey FFT. Additionally, the Virtex-5 FPGA provided the highest speed for implementing the Grigoryan FFT compared
This document presents an implementation of the Fast Fourier Transform (FFT) algorithm. It begins with an introduction to FFTs, explaining that they can compute the Discrete Fourier Transform (DFT) much more efficiently than direct evaluation, reducing the computation time from O(N^2) to O(N log N). It then describes the basic butterfly structures used in FFTs and shows how to implement 16-point FFT blocks. The document includes MATLAB code for an 8-point DFT and FFT, as well as VHDL code for a 16-point FFT processor. It provides details on decimation-in-time and decimation-in-frequency algorithms and how they recursively break down the DFT into smaller transforms.
This document presents a new Radix-4 FFT algorithm that reduces the number of operations by 6% compared to the standard Radix-4 FFT algorithm. The new algorithm uses a scaling factor to divide and multiply terms, avoiding some multiplications while maintaining arithmetic accuracy. Simulation results on a 4096 point DFT show the new algorithm produces identical results to the standard Radix-4 FFT in terms of magnitude and phase. The new algorithm is suitable for ASIC implementations as it is symmetric unlike split radix FFTs.
A novel technique for speech encryption based on k-means clustering and quant...journalBEEI
This document proposes a new algorithm for speech encryption that uses quantum chaotic maps, k-means clustering, and two stages of scrambling. The first stage uses a tent map to scramble bits in the binary representation of the signal. The second stage uses k-means clustering to scramble blocks of the signal. A quantum logistic map is used to generate an encryption key. The proposed method is evaluated using statistical quality metrics and is shown to provide secure and efficient speech encryption while maintaining high quality of recovered speech.
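The tent-map scrambling stage can be sketched as follows. Sorting positions by the chaotic sequence is a common way to turn a chaotic map into a key-dependent permutation; the map parameter, seed, and bit data here are illustrative, and the paper's quantum chaotic maps and k-means stage are not reproduced.

```python
def tent_map_sequence(x0, n, mu=1.99999):
    """Iterate the tent map x -> mu*x (x < 0.5) or mu*(1-x), collecting n values."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        seq.append(x)
    return seq

def scramble(bits, x0=0.37):
    """Permute a bit sequence using the ranking of a tent-map trajectory as the key."""
    chaos = tent_map_sequence(x0, len(bits))
    perm = sorted(range(len(bits)), key=lambda i: chaos[i])
    return [bits[i] for i in perm], perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original bit sequence."""
    out = [0] * len(scrambled)
    for j, i in enumerate(perm):
        out[i] = scrambled[j]
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0]
s, perm = scramble(bits)
print(s)
```

Decryption only needs the key x0 (to regenerate the same permutation), mirroring how the recovered speech retains high quality: scrambling rearranges bits but loses nothing.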
Implementation and validation of multiplier less fpga based digital filterIAEME Publication
This document describes an implementation of a finite impulse response (FIR) filter using distributed arithmetic on a field programmable gate array (FPGA). Distributed arithmetic replaces multiplications with lookup tables, reducing complexity. Typically, lookup table size grows exponentially with filter order. The paper proposes using offset binary coding to halve the lookup table size, from 2^N to 2^(N-1) entries. Simulation results show this implementation requires fewer FPGA resources than a conventional multiply-accumulate approach.
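The distributed-arithmetic idea can be sketched in software: precompute a table of all partial coefficient sums, then accumulate the filter output bit-serially, addressing the table with one bit-slice of the N most recent samples per step. This toy version uses the full 2^N-entry table and unsigned samples; the offset-binary-coding reduction the paper proposes is not shown.

```python
def da_fir(samples, coeffs, bits=8):
    """Bit-serial distributed-arithmetic FIR: LUT sums replace multiplications."""
    n = len(coeffs)
    # LUT[addr] = sum of coeffs[i] for every bit i set in addr (2^N entries)
    lut = [sum(c for i, c in enumerate(coeffs) if addr >> i & 1)
           for addr in range(1 << n)]
    delay = [0] * n                  # shift register of the N most recent samples
    out = []
    for s in samples:
        delay = [s] + delay[:-1]
        acc = 0
        for b in range(bits):        # one LUT access + shifted add per input bit
            addr = sum(((delay[i] >> b) & 1) << i for i in range(n))
            acc += lut[addr] << b
        out.append(acc)
    return out

print(da_fir([1, 2, 3, 4], [1, 1, 1, 1]))  # running 4-tap sums: [1, 3, 6, 10]
```

The rearrangement relies on y = sum_i c_i x_i = sum_b 2^b * (sum over coefficients whose sample has bit b set), which is exactly what the LUT stores.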
Performance Analysis of OFDM Transceiver with Folded FFT and LMS Filteridescitation
This paper proposes an OFDM transceiver that uses a folded FFT and LMS filter to reduce power consumption and hardware complexity compared to a traditional OFDM system. A folded FFT architecture is developed using folding transformation and register minimization techniques. This leads to less hardware usage and lower power consumption by exploiting redundancies in FFT computation. An LMS filter is also designed to remove noise. The performance of the proposed OFDM transceiver is analyzed in terms of error rate to validate the advantages of lower power and smaller hardware size compared to a conventional OFDM system.
IRJET- Implementation of Reversible Radix-2 FFT VLSI Architecture using P...IRJET Journal
This document presents the implementation of a reversible radix-2 FFT VLSI architecture using programmable reversible gates. It discusses two methods for designing a radix-2 FFT: 1) using reversible Peres and TR gates and 2) using a reversible DKG gate. Simulation results for 8-point, 16-point and 32-point radix-2 DIT FFT designs implemented on a Xilinx FPGA using the proposed reversible gates are presented. The document concludes that FFT is an important DSP algorithm for OFDM applications and that combining OFDM with MIMO can improve the data rates of communication systems.
Ecc cipher processor based on knapsack algorithmAlexander Decker
This document describes a method for encrypting messages using Elliptic Curve Cryptography (ECC) combined with the knapsack algorithm. It begins by explaining the basics of ECC, including defining elliptic curves over a finite field and describing point addition and doubling operations. It then presents algorithms for the full encryption/decryption process. The process involves first transforming the message into points on an elliptic curve, then applying the knapsack algorithm to further encrypt the ECC-encrypted message before transmission. Decryption reverses these steps to recover the original message. The combination of ECC and knapsack encryption is presented as an innovation that provides increased security over traditional ECC alone.
Similar to Text file encryption using fft technique in lab view 8.6 (20)
Hudhud cyclone caused extensive damage in Visakhapatnam, India in October 2014, especially to tree cover. This will likely impact the local environment in several ways: increased air pollution as trees absorb less; higher temperatures without tree canopy; increased erosion and landslides. It also created large amounts of waste from destroyed trees. Proper management of solid waste is needed to prevent disease spread. Suggested measures include restoring damaged plants, building fountains to reduce heat, mandating light-colored buildings, improving waste management, and educating public on health risks. Overall, changes are needed to water, land, and waste practices to rebuild the environment after the cyclone removed green cover.
Impact of flood disaster in a drought prone area – case study of alampur vill...eSAT Publishing House
1) In September-October 2009, unprecedented heavy rainfall and dam releases caused widespread flooding in Alampur village in Mahabub Nagar district, a historically drought-prone area.
2) The flood damaged or destroyed homes, buildings, infrastructure, crops, and documents. It displaced many residents and cut off the village.
3) The socioeconomic conditions and mud-based construction of homes in the village exacerbated the flood's impacts, making damage more severe and recovery more difficult.
The document summarizes the Hudhud cyclone that struck Visakhapatnam, India in October 2014. It describes the cyclone's formation, rapid intensification to winds of 175 km/h, and landfall near Visakhapatnam. The cyclone caused extensive damage estimated at over $1 billion and at least 109 deaths in India and Nepal. Infrastructure like buildings, bridges, and power lines were destroyed. Crops and fishing boats were also damaged. The document then discusses coping strategies and improvements needed to disaster management plans to better prepare for future cyclones.
Groundwater investigation using geophysical methods a case study of pydibhim...eSAT Publishing House
This document summarizes the results of a geophysical investigation using vertical electrical sounding (VES) methods at 13 locations around an industrial area in India. The VES data was interpreted to generate geo-electric sections and pseudo-sections showing subsurface resistivity variations. Three main layers were typically identified - a high resistivity topsoil, a weathered middle layer, and a basement rock. Pseudo-sections revealed relatively more weathered areas in the northwest and southwest. Resistivity sections helped identify zones of possible high groundwater potential based on low resistivity anomalies sandwiched between more resistive layers. The study concluded the electrical resistivity method was useful for understanding subsurface geology and identifying areas prospective for groundwater exploration.
Flood related disasters concerned to urban flooding in bangalore, indiaeSAT Publishing House
1. The document discusses urban flooding in Bangalore, India. It describes how factors like heavy rainfall, population growth, and improper land use have contributed to increased flooding in the city.
2. Flooding events in 2013 are analyzed in detail. A November rainfall caused runoff six times higher than the drainage capacity, inundating low-lying residential areas.
3. Impacts of urban flooding include disrupted daily life, damaged infrastructure, and decreased economic activity in affected areas. The document calls for improved flood management strategies to better mitigate urban flooding risks in Bangalore.
Enhancing post disaster recovery by optimal infrastructure capacity buildingeSAT Publishing House
This document discusses enhancing post-disaster recovery through optimal infrastructure capacity building. It presents a model to minimize the cost of meeting demand using auxiliary capacities when disaster damages infrastructure. The model uses genetic algorithms to select optimal capacity combinations. The document reviews how infrastructure provides vital services supporting recovery activities and discusses classifying infrastructure into six types. When disaster reduces infrastructure services, a gap forms between community demands and available support, hindering recovery. The proposed research aims to identify this gap and optimize capacity selection to fill it cost-effectively.
Effect of lintel and lintel band on the global performance of reinforced conc...eSAT Publishing House
This document analyzes the effect of lintels and lintel bands on the seismic performance of reinforced concrete masonry infilled frames through non-linear static pushover analysis. Four frame models are considered: a frame with a full masonry infill wall; a frame with a central opening but no lintel/band; a frame with a lintel above the opening; and a frame with a lintel band above the opening. The results show that the full infill wall model has 27% higher stiffness and 32% higher strength than the model with just an opening. Models with lintels or lintel bands have slightly higher strength and stiffness than the model with just an opening. The document concludes lintels and lintel
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...eSAT Publishing House
1) A cyclone with wind speeds of 175-200 kph caused massive damage to the green cover of Gitam University campus in Visakhapatnam, India. Thousands of trees were uprooted or damaged.
2) A study assessed different types of damage to trees from the cyclone, including defoliation, salt spray damage, damage to stems/branches, and uprooting. Certain tree species were more vulnerable than others.
3) The results of the study can help in selecting more wind-resistant tree species for future planting and reducing damage from future storms.
Wind damage to buildings, infrastrucuture and landscape elements along the be...eSAT Publishing House
1) A visual study was conducted to assess wind damage from Cyclone Hudhud along the 27km Visakha-Bheemli Beach road in Visakhapatnam, India.
2) Residential and commercial buildings suffered extensive roof damage, while glass facades on hotels and restaurants were shattered. Infrastructure like electricity poles and bus shelters were destroyed.
3) Landscape elements faced damage, including collapsed trees that damaged pavements, and debris in parks. The cyclone wiped out over half the city's green cover and caused beach erosion around protected areas.
1) The document reviews factors that influence the shear strength of reinforced concrete deep beams, including compressive strength of concrete, percentage of tension reinforcement, vertical and horizontal web reinforcement, aggregate interlock, shear span-to-depth ratio, loading distribution, side cover, and beam depth.
2) It finds that compressive strength of concrete, tension reinforcement percentage, and web reinforcement all increase shear strength, while shear strength decreases as shear span-to-depth ratio increases.
3) The distribution and amount of vertical and horizontal web reinforcement also affects shear strength, but closely spaced stirrups do not necessarily enhance capacity or performance.
Role of voluntary teams of professional engineers in dissater management – ex...eSAT Publishing House
1) A team of 17 professional engineers from various disciplines called the "Griha Seva" team volunteered after the 2001 Gujarat earthquake to provide technical assistance.
2) The team conducted site visits, assessments, testing and recommended retrofitting strategies for damaged structures in Bhuj and Ahmedabad. They were able to fully assess and retrofit 20 buildings in Ahmedabad.
3) Factors observed that exacerbated the earthquake's impacts included unplanned construction, non-engineered buildings, improper prior retrofitting, and defective materials and workmanship. The professional engineers' technical expertise was crucial for effective post-disaster management.
This document discusses risk analysis and environmental hazard management. It begins by defining risk, hazard, and toxicity. It then outlines the steps involved in hazard identification, including HAZID, HAZOP, and HAZAN. The document presents a case study of a hypothetical gas collecting station, identifying potential accidents and hazards. It discusses quantitative and qualitative approaches to risk analysis, including calculating a fire and explosion index. The document concludes by discussing hazard management strategies like preventative measures, control measures, fire protection, relief operations, and the importance of training personnel on safety.
Review study on performance of seismically tested repaired shear wallseSAT Publishing House
This document summarizes research on the performance of reinforced concrete shear walls that have been repaired after damage. It begins with an introduction to shear walls and their failure modes. The literature review then discusses the behavior of original shear walls as well as different repair techniques tested by other researchers, including conventional repair with new concrete, jacketing with steel plates or concrete, and use of fiber reinforced polymers. The document focuses on evaluating the strength retention of shear walls after being repaired with various methods.
Monitoring and assessment of air quality with reference to dust particles (pm...eSAT Publishing House
This document summarizes a study on monitoring and assessing air quality with respect to dust particles (PM10 and PM2.5) in the urban environment of Visakhapatnam, India. Sampling was conducted in residential, commercial, and industrial areas from October 2013 to August 2014. The average PM2.5 and PM10 concentrations were within limits in residential areas but moderate to high in commercial and industrial areas. Exceedance factor levels indicated moderate pollution for residential areas and moderate to high pollution for commercial and industrial areas. There is a need for management measures like improved public transport and green spaces to combat particulate air pollution in the study areas.
Low cost wireless sensor networks and smartphone applications for disaster ma...eSAT Publishing House
This document describes a low-cost wireless sensor network and smartphone application system for disaster management. The system uses an Arduino-based wireless sensor network comprising nodes with various sensors to monitor the environment. The sensor data is transmitted to a central gateway and then to the cloud for analysis. A smartphone app connected to the cloud can detect disasters from the sensor data and send real-time alerts to users to help with early evacuation. The system aims to provide low-cost localized disaster detection and warnings to improve safety.
Coastal zones – seismic vulnerability an analysis from east coast of indiaeSAT Publishing House
This document summarizes an analysis of seismic vulnerability along the east coast of India. It discusses the geotectonic setting of the region as a passive continental margin and reports some moderate seismic activity from offshore in recent decades. While seismic stability cannot be assumed given events like the 2004 tsunami, no major earthquakes have been recorded along this coast historically. The document calls for further study of active faults, neotectonics, and implementation of improved seismic building codes to mitigate vulnerability.
Can fracture mechanics predict damage due disaster of structureseSAT Publishing House
This document discusses how fracture mechanics can be used to better predict damage and failure of structures. It notes that current design codes are based on small-scale laboratory tests and do not account for size effects, which can lead to more brittle failures in larger structures. The document outlines how fracture mechanics considers factors like size effect, ductility, and minimum reinforcement that influence the strength and failure behavior of structures. It provides examples of how fracture mechanics has been applied to problems like evaluating shear strength in deep beams and investigating a failure of an oil platform structure. The document argues that fracture mechanics provides a more scientific basis for structural design compared to existing empirical code provisions.
This document discusses the assessment of seismic susceptibility of reinforced concrete (RC) buildings. It begins with an introduction to earthquakes and the importance of vulnerability assessment in mitigating earthquake risks and losses. It then describes modeling the nonlinear behavior of RC building elements and performing pushover analysis to evaluate building performance. The document outlines modeling RC frames and developing moment-curvature relationships. It also summarizes the results of pushover analyses on sample 2D and 3D RC frames with and without shear walls. The conclusions emphasize that pushover analysis effectively assesses building properties but has limitations, and that capacity spectrum method provides appropriate results for evaluating building response and retrofitting impact.
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...eSAT Publishing House
1) A 6.0 magnitude earthquake occurred off the coast of Paradip, Odisha in the Bay of Bengal on May 21, 2014 at a depth of around 40 km.
2) Analysis of magnetic and bathymetric data from the area revealed the presence of major lineaments in NW-SE and NE-SW directions that may be responsible for seismic activity through stress release.
3) Movements along growth faults at the margins of large Bengal channels, due to large sediment loads, could also contribute to seismic events by triggering movements along the faults.
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...eSAT Publishing House
This document discusses the effects of Cyclone Hudhud on the development of Visakhapatnam as a smart and green city through a case study and preliminary surveys. The surveys found that 31% of participants had experienced cyclones, 9% floods, and 59% landslides previously in Visakhapatnam. Awareness of disaster alarming systems increased from 14% before the 2004 tsunami to 85% during Cyclone Hudhud, while awareness of disaster management systems increased from 50% before the tsunami to 94% during Hudhud. The surveys indicate that initiatives after the tsunami improved awareness and preparedness. Developing Visakhapatnam as a smart, green city should consider governance
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
Text file encryption using fft technique in lab view 8.6
IJRET: International Journal of Research in Engineering and Technology ISSN: 2319-1163
Volume: 01 Issue: 01 | Sep-2012, Available @ http://www.ijret.org
TEXT FILE ENCRYPTION USING FFT TECHNIQUE IN LabVIEW 8.6
Sudha Rani K.¹, T. C. Sarma², K. Satya Prasad³
¹Dept. of EIE, VNRVJIET, Hyderabad, India, sudhasarah@gmail.com
²Former Deputy Director, NRSA, Hyderabad, India, sarma_tc@yahoo.com
³Rector, JNTU Kakinada University, Kakinada, India, prasad_kodati@yahoo.com
Abstract
Encryption has always been an essential part of military communications. This paper deals with digital transmission: digital transmission is generally more efficient than analog transmission, and digital encryption techniques can achieve a very high degree of security. This approach is still not fully compatible with today's technical environment: most telephone systems remain analog rather than digital, most practical digitizers require a bit rate too high to be transmitted over standard analog telephone channels, and low-bit-rate speech digitizers still imply relatively high complexity and poor quality. Digital transmission relies on scrambling, a family of methods that provide a communication system with a specified degree of security depending on the technique used to implement them. Many traditional scrambling methods operate in a single dimension, such as time-domain or frequency-domain scrambling.
Index Terms: Encryptor, Decryptor, Fast Fourier Transform (FFT)
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Encryption has long been used by militaries and governments
to facilitate secret communication. It is now commonly used
in protecting information within many kinds of civilian
systems. For example, the Computer Security Institute
reported that in 2007, 71% of companies surveyed utilized
encryption for some of their data in transit, and 53% utilized
encryption for some of their data in storage. Encryption can be
used to protect data "at rest", such as files on computers and
storage devices (e.g. USB flash drives). In recent years there
have been numerous reports of confidential data such as
customers' personal records being exposed through loss or
theft of laptops or backup drives. Encrypting such files at rest
helps protect them should physical security measures fail.
Digital rights management systems which prevent
unauthorized use or reproduction of copyrighted material and
protect software against reverse engineering are another
somewhat different example of using encryption on data at
rest.
Encryption is also used to protect data in transit, for example
data being transferred via networks (e.g. the Internet, e-
commerce), mobile telephones, wireless microphones,
wireless intercom systems, Bluetooth devices and bank
automatic teller machines. There have been numerous reports
of data in transit being intercepted in recent years. Encrypting
data in transit also helps to secure it as it is often difficult to
physically secure all access to networks. Encryption, by itself,
can protect the confidentiality of messages, but other
techniques are still needed to protect the integrity and
authenticity of a message; for example, verification of a
message authentication code (MAC) or a digital signature.
Standards and cryptographic software and hardware to
perform encryption are widely available, but successfully
using encryption to ensure security may be a challenging
problem. A single slip-up in system design or execution can
allow successful attacks. Sometimes an adversary can obtain
unencrypted information without directly undoing the
encryption.
2. FAST FOURIER TRANSFORM (FFT)
In this section we present several methods for computing the
DFT efficiently. In view of the importance of the DFT in
various digital signal processing applications, such as linear
filtering, correlation analysis, and spectrum analysis, its
efficient computation is a topic that has received considerable
attention by many mathematicians, engineers, and applied
scientists. Basically, the computational problem for the DFT is to compute the sequence {X(k)} of N complex-valued numbers given another sequence of data {x(n)} of length N, according to the formula

$X(k) = \sum_{n=0}^{N-1} x(n) W_N^{kn}, \quad k = 0, 1, \ldots, N-1, \quad W_N = e^{-j2\pi/N}$

In general, the data sequence x(n) is also assumed to be complex-valued. Similarly, the IDFT becomes

$x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) W_N^{-kn}, \quad n = 0, 1, \ldots, N-1$
Since DFT and IDFT involve basically the same type of
computations, our discussion of efficient computational
algorithms for the DFT applies as well to the efficient
computation of the IDFT.
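The two formulas above translate directly into code. The sketch below is an illustrative Python rendering of the direct (O(N²)) computations, not code from the paper; NumPy is assumed for complex arithmetic and for a reference FFT to check against.

```python
import numpy as np

def dft_direct(x):
    """Direct evaluation of X(k) = sum_n x(n) * W_N^{kn}, with W_N = exp(-j*2*pi/N)."""
    N = len(x)
    W = np.exp(-2j * np.pi / N)
    return np.array([sum(x[n] * W**(k * n) for n in range(N)) for k in range(N)])

def idft_direct(X):
    """Direct evaluation of x(n) = (1/N) sum_k X(k) * W_N^{-kn}."""
    N = len(X)
    W = np.exp(-2j * np.pi / N)
    return np.array([sum(X[k] * W**(-k * n) for k in range(N)) for n in range(N)]) / N

x = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)
X = dft_direct(x)
assert np.allclose(idft_direct(X), x)    # the IDFT inverts the DFT
assert np.allclose(X, np.fft.fft(x))     # matches a reference FFT
```

As the text notes, the DFT and IDFT share the same computational structure, which is visible here: the two functions differ only in the sign of the exponent and the 1/N scale factor.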
We observe that for each value of k, direct computation of X(k) involves N complex multiplications (4N real multiplications) and N − 1 complex additions (4N − 2 real additions). Consequently, to compute all N values of the DFT requires N² complex multiplications and N² − N complex additions.
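To make the count concrete, the short script below tabulates N² (direct) against (N/2)·log₂N, the standard textbook figure for complex multiplications in a radix-2 FFT (the halving argument that yields it is developed in the next subsection); the script is illustrative only.

```python
import math

# Complex multiplications: direct DFT needs N^2; a radix-2 FFT needs (N/2)*log2(N).
for N in (8, 64, 512, 4096):
    direct = N * N
    fft = (N // 2) * int(math.log2(N))
    print(f"N={N:5d}  direct={direct:9d}  radix-2 FFT={fft:7d}  speedup={direct / fft:7.1f}x")
```

Even at N = 4096 the gap is already three orders of magnitude, which is why the symmetry and periodicity exploited below matter so much in practice.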
Direct computation of the DFT is basically inefficient primarily because it does not exploit the symmetry and periodicity properties of the phase factor $W_N$. In particular, these two properties are:

Symmetry: $W_N^{k+N/2} = -W_N^k$
Periodicity: $W_N^{k+N} = W_N^k$

The computationally efficient algorithms described in this section, known collectively as fast Fourier transform (FFT) algorithms, exploit these two basic properties of the phase factor.
2.1 Radix-2 FFT Algorithms
Let us consider the computation of the N = 2^v point DFT by
the divide-and-conquer approach. We split the N-point data
sequence into two N/2-point data sequences f1(n) and f2(n),
corresponding to the even-numbered and odd-numbered
samples of x(n), respectively, that is,

f1(n) = x(2n),  f2(n) = x(2n+1),  n = 0, 1, ..., N/2 - 1

Thus f1(n) and f2(n) are obtained by decimating x(n) by a
factor of 2, and hence the resulting FFT algorithm is called
a decimation-in-time algorithm.
Now the N-point DFT can be expressed in terms of the DFTs
of the decimated sequences as follows:

X(k) = sum_{m=0}^{N/2-1} f1(m) W_N^{2mk} + W_N^k sum_{m=0}^{N/2-1} f2(m) W_N^{2mk}

But W_N^2 = W_{N/2}. With this substitution, the equation can be
expressed as

X(k) = F1(k) + W_N^k F2(k),  k = 0, 1, ..., N-1

where F1(k) and F2(k) are the N/2-point DFTs of the
sequences f1(m) and f2(m), respectively.
Since F1(k) and F2(k) are periodic, with period N/2, we
have F1(k+N/2) = F1(k) and F2(k+N/2) = F2(k). In addition, the
factor W_N^{k+N/2} = -W_N^k. Hence the equation may be expressed
as

X(k) = F1(k) + W_N^k F2(k),        k = 0, 1, ..., N/2 - 1
X(k+N/2) = F1(k) - W_N^k F2(k),    k = 0, 1, ..., N/2 - 1
We observe that the direct computation of F1(k) requires
(N/2)^2 complex multiplications. The same applies to the
computation of F2(k). Furthermore, there are N/2 additional
complex multiplications required to compute W_N^k F2(k). Hence
the computation of X(k) requires 2(N/2)^2 + N/2 = N^2/2 + N/2
complex multiplications. This first step results in a reduction
of the number of multiplications from N^2 to N^2/2 + N/2,
which is about a factor of 2 for N large.
Figure 1.1: First step in the decimation-in-time algorithm.
By computing N/4-point DFTs, we would obtain the N/2-point
DFTs F1(k) and F2(k) from the relations

F1(k) = V11(k) + W_{N/2}^k V12(k),  F1(k+N/4) = V11(k) - W_{N/2}^k V12(k)
F2(k) = V21(k) + W_{N/2}^k V22(k),  F2(k+N/4) = V21(k) - W_{N/2}^k V22(k),  k = 0, 1, ..., N/4 - 1

where the Vij(k) are the N/4-point DFTs of the correspondingly decimated sequences.
The decimation of the data sequence can be repeated again
and again until the resulting sequences are reduced to one-
point sequences. For N = 2^v, this decimation can be performed
v = log2 N times. Thus the total number of complex
multiplications is reduced to (N/2) log2 N, and the number of
complex additions is N log2 N.
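The repeated decimation described above can be sketched as a short recursive routine; this is an illustrative implementation of the decimation-in-time recursion, not the paper's LabVIEW code:

```python
import cmath

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    F1 = fft_dit(x[0::2])   # even-numbered samples f1(n) = x(2n)
    F2 = fft_dit(x[1::2])   # odd-numbered samples  f2(n) = x(2n+1)
    X = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * F2[k]   # W_N^k F2(k)
        X[k] = F1[k] + t               # X(k)       = F1(k) + W_N^k F2(k)
        X[k + N // 2] = F1[k] - t      # X(k + N/2) = F1(k) - W_N^k F2(k)
    return X
```

Each of the log2 N recursion levels performs N/2 twiddle multiplications, matching the (N/2) log2 N count derived above.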
For illustrative purposes, Figure 1.2 depicts the computation
of an N = 8-point DFT. We observe that the computation is
performed in three stages, beginning with the computation of
four two-point DFTs, then two four-point DFTs, and finally,
one eight-point DFT. The combination of the smaller DFTs to
form the larger DFT is illustrated in Figure 1.3 for N = 8.
Figure 1.2: Three stages in the computation of an N = 8-point DFT.
Figure 1.3: Eight-point decimation-in-time FFT algorithm.
Figure 1.4: Basic butterfly computation in the decimation-in-time FFT algorithm.
An important observation is concerned with the order of the
input data sequence after it is decimated (v-1) times. For
example, if we consider the case where N = 8, we know that
the first decimation yields the sequence x(0), x(2), x(4), x(6),
x(1), x(3), x(5), x(7), and the second decimation results in the
sequence x(0), x(4), x(2), x(6), x(1), x(5), x(3),
x(7). This shuffling of the input data sequence has a well-
defined order, as can be ascertained from observing Figure 1.5,
which illustrates the decimation of the eight-point sequence.
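The bit-reversed ordering that results from the repeated decimation can be generated programmatically; a minimal sketch (the helper name is our own):

```python
def bit_reverse_order(x):
    """Reorder x into bit-reversed index order; len(x) must be a power of 2."""
    N = len(x)
    bits = N.bit_length() - 1   # number of address bits, v = log2(N)
    # Index i maps to the index whose binary representation is reversed.
    return [x[int(format(i, f'0{bits}b')[::-1], 2)] for i in range(N)]

# For N = 8 this reproduces the shuffled order in the text:
# x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)
shuffled = bit_reverse_order(list(range(8)))
```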
Figure 1.5: Shuffling of the data and bit reversal.
Another important radix-2 FFT algorithm, called the
decimation-in-frequency algorithm, is obtained by using the
divide-and-conquer approach. To derive the algorithm, we
begin by splitting the DFT formula into two summations, one
of which involves the sum over the first N/2 data points and
the second sum involves the last N/2 data points. Thus we
obtain

X(k) = sum_{n=0}^{N/2-1} x(n) W_N^{kn} + W_N^{kN/2} sum_{n=0}^{N/2-1} x(n+N/2) W_N^{kn}
     = sum_{n=0}^{N/2-1} [x(n) + (-1)^k x(n+N/2)] W_N^{kn}

since W_N^{kN/2} = (-1)^k. Now, let us split (decimate) X(k) into the even- and odd-
numbered samples. Thus we obtain

X(2k)   = sum_{n=0}^{N/2-1} [x(n) + x(n+N/2)] W_{N/2}^{kn},           k = 0, 1, ..., N/2 - 1
X(2k+1) = sum_{n=0}^{N/2-1} {[x(n) - x(n+N/2)] W_N^n} W_{N/2}^{kn},   k = 0, 1, ..., N/2 - 1

where we have used the fact that W_N^2 = W_{N/2}.
The computational procedure above can be repeated through
decimation of the N/2-point DFTs X(2k) and X(2k+1). The
entire process involves v = log2N stages of decimation, where
each stage involves N/2 butterflies of the type shown in
Figure 1.7. Consequently, the computation of the N-point DFT
via the decimation-in-frequency FFT requires (N/2)log2N
complex multiplications and Nlog2N complex additions, just
as in the decimation-in-time algorithm. For illustrative
purposes, the eight-point decimation-in-frequency algorithm is
given in Figure 1.8.
Figure 1.6: First stage of the decimation-in-frequency FFT algorithm.
Figure 1.7: Basic butterfly computation in the decimation-in-frequency FFT algorithm.
We observe from Figure 1.8 that the input data x(n) occurs
in natural order, but the output DFT occurs in bit-reversed
order.
Figure 1.8: N = 8-point decimation-in-frequency FFT algorithm.
We also note that the computations are performed in place.
However, it is possible to reconfigure the decimation-in-
frequency algorithm so that the input sequence occurs in bit-
reversed order while the output DFT occurs in normal order.
Furthermore, if we abandon the requirement that the
computations be done in place, it is also possible to have both
the input data and the output DFT in normal order.
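The two decimation-in-frequency relations for X(2k) and X(2k+1) can be checked numerically; the sketch below, using a ramp input of our own choosing, verifies that the even- and odd-numbered DFT samples equal N/2-point DFTs of the sum sequence and of the W_N^n-weighted difference sequence:

```python
import cmath

def dft(x):
    """Direct DFT, used only to check the identities (not an FFT)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [complex(n + 1) for n in range(8)]   # assumed test input
N = len(x)
X = dft(x)

# X(2k) is the N/2-point DFT of g1(n) = x(n) + x(n + N/2)
g1 = [x[n] + x[n + N // 2] for n in range(N // 2)]
# X(2k+1) is the N/2-point DFT of g2(n) = [x(n) - x(n + N/2)] * W_N^n
g2 = [(x[n] - x[n + N // 2]) * cmath.exp(-2j * cmath.pi * n / N)
      for n in range(N // 2)]

assert all(abs(X[2 * k] - v) < 1e-9 for k, v in enumerate(dft(g1)))
assert all(abs(X[2 * k + 1] - v) < 1e-9 for k, v in enumerate(dft(g2)))
```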
3. ENCRYPTION
Encryption is the conversion of data into a form, called
cipher text, that cannot be easily understood by unauthorized
people. Decryption is the process of converting encrypted data
back into its original form, so it can be understood.
The use of encryption/decryption is as old as the art of
communication. In wartime, a cipher, often incorrectly called
a code, can be employed to keep the enemy from obtaining the
contents of transmissions. (Technically, a code is a means of
representing a signal without the intent of keeping it secret;
examples are Morse code and ASCII.) Simple ciphers include
the substitution of letters for numbers, the rotation of letters in
the alphabet, and the "scrambling" of voice signals by
inverting the sideband frequencies. More complex ciphers
work according to sophisticated computer algorithms that
rearrange the data bits in digital signals. Encryption finds its
use in many scenarios, as follows:
Figure 3.1: Uses of encryption methods.
Encryption refers to algorithmic schemes that encode plain
text into non-readable form or cipher text, providing privacy.
The receiver of the encrypted text uses a "key" to decrypt the
message, returning it to its original plain text form. The key is
the trigger mechanism to the algorithm. Until the advent of the
Internet, encryption was rarely used by the public, but was
largely a military tool. Today, with online marketing, banking,
healthcare and other services, even the average householder is
aware of encryption. As more people realize the open
nature of the Internet, email and instant messaging,
encryption will undoubtedly become more popular.
Without encryption, information passed on the Internet
is not only available for virtually anyone to snag and
read, but is often stored for years on servers that can
change hands or become compromised in any number of
ways. For all of these reasons encryption is a goal worth
pursuing. In order to easily recover the contents of an
encrypted signal, the correct decryption key is required. The
key is an algorithm that undoes the work of the encryption
algorithm. Alternatively, a computer can be used in an attempt
to break the cipher. The more complex the encryption
algorithm, the more difficult it becomes to eavesdrop on the
communications without access to the key. Often there has
been a need to protect information from 'prying eyes'. In the
electronic age, information that could otherwise benefit or
educate a group or individual can also be used against such
groups or individuals. Industrial espionage among highly
competitive businesses often requires that extensive security
measures be put into place. And, those who wish to exercise
their personal freedom, outside of the oppressive nature of
governments, may also wish to encrypt certain information to
avoid suffering the penalties of going against the wishes of
those who attempt to control. Still, the methods of data
encryption and decryption are relatively straightforward, and
easily mastered. I have been doing data encryption since my
college days, when I used an encryption algorithm to store
game programs and system information files on the university
mini-computer, safe from 'prying eyes'. These were files that
raised eyebrows amongst those who did not approve of such
things, but were harmless. I was occasionally asked what this
"rather large file" contained, and I once demonstrated the
program that accessed it, but you needed a password to get to
'certain files' nonetheless. And, some files needed a separate
encryption program to decipher them.
Encryption/decryption is especially important in wireless
communications. This is because wireless circuits are easier to
tap than their hard-wired counterparts. Nevertheless,
encryption/decryption is a good idea when carrying out any
kind of sensitive transaction, such as a credit-card purchase
online, or the discussion of a company secret between
different departments in the organization. The stronger the
cipher -- that is, the harder it is for unauthorized people to
break it -- the better, in general. However, as the strength of
encryption/decryption increases, so does the cost. In recent
years, a controversy has arisen over so-called strong
encryption. This refers to ciphers that are essentially
unbreakable without the decryption keys. While most
companies and their customers view it as a means of keeping
secrets and minimizing fraud, some governments view strong
encryption as a potential vehicle by which terrorists might
evade authorities. These governments, including that of the
United States, want to set up a key-escrow arrangement. This
means everyone who uses a cipher would be required to
provide the government with a copy of the key. Decryption
keys would be stored in a supposedly secure place, used only
by authorities, and used only if backed up by a court order.
Opponents of this scheme argue that criminals could hack into
the key-escrow database and illegally obtain, steal, or alter the
keys. Supporters claim that while this is a possibility,
implementing the key escrow scheme would be better than
doing nothing to prevent criminals from freely using
encryption/decryption.
3.1 Types of Encryption Methods
There are three basic encryption methods: hashing,
symmetric cryptography, and asymmetric cryptography. Each
of these encryption methods has its own uses, advantages,
and disadvantages. All three of these encryption methods use
cryptography, or the science of scrambling data.
Cryptography is used to change readable text, called plaintext,
into an unreadable secret format, called cipher text, using a
process called encryption. Encrypting data provides additional
benefits besides protecting the confidentiality of data. Other
benefits include ensuring that messages have not been altered
during transit and verifying the identity of the message sender.
All these benefits can be realized by using basic encryption
methods.
Hashing:
The first encryption method, called hashing, creates a unique
fixed length signature of a group of data. Hashes are created
with an algorithm, or hash function, and are used to compare
sets of data. Since a hash is unique to a specific message, any
changes to that message would result in a different hash,
thereby alerting a user to potential tampering.
A hash algorithm, also known as a hash function, is a
mathematical procedure used in computer programming to
turn a large section of data into a smaller representational
symbol, known as a hash key. The major use of hash
algorithms occurs in large databases of information. Each
collection of data is assigned a hash key, which is a short
symbol or code that represents it. When a user needs to find
that piece of data, he inputs the symbol or code and the
computer displays the full data piece.
For hashing, as this process is called, to work it needs a hash
function or hash algorithm. This tells the computer how to
take the hash key and match it with a set of data it represents.
Areas in the computer program known as slots or buckets
store information and each key links to a specific slot or
bucket. The entire process is contained within a hash
table or hash map. This table records data and the
matching keys that correspond to it. It then uses a hash
algorithm to connect a key to a piece of data when the
user requests it.
A key difference between a hash and the other two encryption
methods is that once the data is encrypted, the process cannot
be reversed or deciphered. This means that even if a potential
attacker were able to obtain a hash, he would not be able to
use a decryption method to discover the contents of the
original message. Some common hashing algorithms are
Message Digest 5 (MD5) and Secure Hashing Algorithm
(SHA).
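A brief illustration of tamper detection using Python's standard hashlib module (the message text is invented):

```python
import hashlib

message = b"Transfer the encrypted file at 09:00"   # example message (ours)
digest = hashlib.sha256(message).hexdigest()        # fixed-length signature

# The same data always yields the same hash...
assert hashlib.sha256(message).hexdigest() == digest

# ...while any change to the message yields a completely different hash,
# alerting the receiver to potential tampering.
tampered = b"Transfer the encrypted file at 10:00"
assert hashlib.sha256(tampered).hexdigest() != digest
```

hashlib also exposes MD5 as `hashlib.md5`, though SHA-family functions are preferred today.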
3.1.1 Symmetric Cryptography
Symmetric cryptography, which is also called private-key
cryptography, is the second encryption method. The term
"private key" comes from the fact that the key used
to encrypt and decrypt data must remain secure because
anyone with access to it can read the coded messages. This
encryption method can be categorized as either a stream
cipher or a block cipher, depending upon the amount of data
being encrypted or decrypted at a time. A stream cipher
encrypts data one character at a time while a block cipher
processes fixed chunks of data. Common symmetric
encryption algorithms include Data Encryption Standard
(DES), Advanced Encryption Standard (AES), International
Data Encryption Algorithm (IDEA), and Blowfish. For
symmetric key ciphers, there are basically two types: BLOCK
CIPHERS, in which a fixed length block is encrypted, and
STREAM CIPHERS, in which the data is encrypted one 'data
unit' (typically 1 byte) at a time, in the same order it was
received in. Fortunately, the simplest of all of the symmetric
key 'stream cipher' methods is the TRANSLATION TABLE
(or 'S table'), which should easily meet the performance
requirements of even the most performance-intensive
application that requires data to be encrypted. In a translation
table, each 'chunk' of data (usually 1 byte) is used as an offset
within one or more arrays, and the resulting 'translated' value
is then written into the output stream. The encryption and
decryption programs would each use a table that translates to
and from the encrypted data. 80x86 CPUs have an instruction,
'XLAT', that lends itself to this purpose.
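A minimal translation-table stream cipher can be sketched with Python's `bytes.translate`, which performs the same per-byte lookup as XLAT; the seeded random shuffle below stands in for a shared secret table:

```python
import random

rng = random.Random(12345)        # the seed stands in for the shared secret table
perm = list(range(256))
rng.shuffle(perm)

enc_table = bytes(perm)                                  # plain byte -> cipher byte
dec_table = bytes(perm.index(i) for i in range(256))     # cipher byte -> plain byte

plaintext = b"ATTACK AT DAWN"
ciphertext = plaintext.translate(enc_table)   # one table lookup per byte, like XLAT
recovered = ciphertext.translate(dec_table)
assert recovered == plaintext
```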
While translation tables are very simple and fast, the down
side is that once the translation table is known, the code is
broken. Further, such a method is relatively straightforward
for code breakers to decipher - such code methods have been
used for years, even before the advent of the computer. Still,
for general "unreadability" of encoded data, without adverse
effects on performance, the 'translation table' method lends
itself well. A modification to the 'translation table' uses 2 or
more tables, based on the position of the bytes within the data
stream, or on the data stream itself. Decoding becomes more
complex, since you have to reverse the same process reliably.
But, by the use of more than one translation table, especially
when implemented in a 'pseudo-random' order, this adaptation
makes code breaking relatively difficult. An example of this
method might use translation table 'A' on all of the 'even'
bytes, and translation table 'B' on all of the 'odd' bytes. Unless
a potential code breaker knows that there are exactly 2 tables,
even with both source and encrypted data available the
deciphering process is relatively difficult.
Similar to using a translation table, 'data repositioning' lends
itself to use by a computer, but takes considerably more time
to accomplish. This type of cipher would be a trivial example
of a block cipher. A buffer of data is read from the input, then
the order of the bytes (or other 'chunk' size) is rearranged, and
written 'out of order'. The decryption program then reads this
back in, and puts them back 'in order'. Often such a method is
best used in combination with one or more of the other
encryption methods mentioned here, making it even more
difficult for code breakers to determine how to decipher your
encrypted data. As an example, consider an anagram. The
letters are all there, but the order has been changed. Some
anagrams are easier than others to decipher, but a well written
anagram is a brain teaser nonetheless, especially if it's
intentionally misleading.
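A trivial data-repositioning block cipher might look like the following sketch; the 8-byte block size and the seeded shuffle order are assumptions of ours:

```python
import random

BLOCK = 8
order = list(range(BLOCK))
random.Random(99).shuffle(order)   # shared secret: the repositioning order

def reposition(block, order):
    """Write the bytes 'out of order' according to the secret permutation."""
    return bytes(block[i] for i in order)

def restore(block, order):
    """Put the bytes back 'in order' by inverting the permutation."""
    out = bytearray(BLOCK)
    for dst, src in enumerate(order):
        out[src] = block[dst]
    return bytes(out)

msg = b"ANAGRAMS"                  # one 8-byte block, like a scrambled anagram
scrambled = reposition(msg, order)
assert restore(scrambled, order) == msg
```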
High entropy data is difficult to extract information from, and
the higher the entropy, the better the cipher. So, if you rotate
the words or bytes within a data stream, using a method that
involves multiple and variable direction and duration of
rotation, in an easily reproducible pattern, you can quickly
encode a stream of data with a method that can be nearly
impossible to break. Further, if you use an 'XOR mask' in
combination with this ('flipping' the bits in certain positions
from 1 to 0, or 0 to 1) you end up making the code breaking
process even more difficult. The best combination would also
use 'pseudo random' effects, the easiest of which might
involve a simple sequence like Fibonacci numbers, which can
appear 'pseudo-random' after many iterations of 'modular'
arithmetic (i.e. math that 'wraps around' after reaching a limit,
like integer math on a computer). The Fibonacci sequence
'1,1,2,3,5,...' is easily generated by adding the previous 2
numbers in the sequence to get the next. Doing modular
arithmetic on the result and operating on multiple byte
sequences (using a prime number of bytes for block rotation,
as one example) would make the code breaker's job even more
difficult, adding the 'pseudo-random' effect that is easily
reproduced by your decryption program.
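The three ingredients described (variable bit rotation, an XOR mask, and a Fibonacci sequence under modular arithmetic as the 'pseudo-random' source) can be combined as in this sketch; the exact combination is our own illustration, not a vetted cipher:

```python
def fib_stream(n, mod=256):
    """Keystream from Fibonacci numbers under modular ('wrap-around') arithmetic."""
    a, b, out = 1, 1, []
    for _ in range(n):
        out.append(a % mod)
        a, b = b, (a + b) % mod
    return out

def rotl(byte, r):
    r %= 8
    return ((byte << r) | (byte >> (8 - r))) & 0xFF

def rotr(byte, r):
    r %= 8
    return ((byte >> r) | (byte << (8 - r))) & 0xFF

def encode(data):
    ks = fib_stream(len(data))
    # variable-duration rotation, then an XOR mask, per byte
    return bytes(rotl(b, k) ^ k for b, k in zip(data, ks))

def decode(data):
    ks = fib_stream(len(data))
    # undo the XOR mask first, then the rotation, in reverse order
    return bytes(rotr(b ^ k, k) for b, k in zip(data, ks))

msg = b"stream of data"
assert decode(encode(msg)) == msg
```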
3.1.2 Asymmetric Cryptography
Asymmetric, or public-key, cryptography is the last encryption
method. This type of cryptography uses two keys, a private
key and a public key, to perform encryption and decryption.
The use of two keys overcomes a major weakness in
symmetric key cryptography in that a single key does not need
to be securely managed among multiple users. In asymmetric
cryptography, a public key is freely available to everyone,
while the private key remains with the receiver of the cipher
text to decrypt messages. Algorithms that use public key
cryptography include RSA and Diffie-Hellman.
The advantage of asymmetric over symmetric key
encryption, where the same key is used to encrypt and
decrypt a message, is that secure messages can be sent
between two parties over a non-secure communication
channel without initially sharing secret information. The
disadvantages are that encryption and decryption is slow,
and cipher text potentially may be hacked by a
cryptographer given enough computing time and power.
One very important feature of a good encryption scheme is the
ability to specify a 'key' or 'password' of some kind, and have
the encryption method alter itself such that each 'key' or
'password' produces a unique encrypted output, one that also
requires a unique 'key' or 'password' to decrypt. This can either
be a symmetric or asymmetric key. The popular 'PGP' public
key encryption, and the 'RSA' encryption that it's based on,
uses an 'asymmetrical' key, allowing you to share the 'public'
encryption key with everyone, while keeping the 'private'
decryption key safe. The encryption key is significantly
different from the decryption key, such that attempting to
derive the private key from the public key involves too many
hours of computing time to be practical. It would NOT be
impossible, just highly unlikely, which is 'pretty good'. There
are few operations in mathematics that are truly 'irreversible'.
In nearly all cases, the commutative property or an 'inverse'
operation applies. If an operation is performed on 'a', resulting
in 'b', you can perform an equivalent operation on 'b' to get 'a'.
In some cases you may get the absolute value (such as a
square root), or the operation may be undefined (such as
dividing by zero). However, it may be possible to base an
encryption key on an algorithm such that you cannot perform
a direct calculation to get the decryption key. An operation
that would cause a division by zero would PREVENT a public
key from being directly translated into a private key. As such,
only 'trial and error' (otherwise known as a 'brute force' attack)
would remain as a valid 'key cracking' method, and it would
therefore require a significant amount of processing time to
create the private key from the public key.
In the case of the RSA encryption algorithm, it uses very large
prime numbers to generate the public key and the private key.
Although it would be possible to factor out the public key to
get the private key (a trivial matter once the 2 prime factors
are known), the numbers are so large as to make it very
impractical to do so. The encryption algorithm itself is ALSO
very slow, which makes it impractical to use RSA to encrypt
large data sets. So PGP (and other RSA-based encryption
schemes) encrypt a symmetrical key using the public key, then
encrypt the remainder of the data with a faster algorithm using
the symmetrical key. The symmetrical key itself is randomly
generated, so that the only (theoretical) way to get it would be
by using the private key to decrypt the RSA-encrypted
symmetrical key.
Example: Suppose you want to encrypt data (let's say this
web page) with a key of 12345. Using your public key, you
RSA-encrypt the 12345, and put that at the front of the data
stream (possibly followed by a marker or preceded by a data
length to distinguish it from the rest of the data). THEN, you
follow the 'encrypted key' data with the encrypted web page
text, encrypted using your favorite method and the key
'12345'. Upon receipt, the decrypt program looks for (and
finds) the encrypted key, uses the 'private key' to decrypt it,
and gets back the '12345'. It then locates the beginning of the
encrypted data stream, and applies the key '12345' to decrypt
the data. The result: a very well protected data stream that is
reliably and efficiently encrypted, transmitted, and decrypted.
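The hybrid scheme of the example can be sketched with textbook RSA on deliberately tiny primes (insecure, illustration only; real RSA uses very large primes) and a simple XOR keystream standing in for 'your favorite method'; the session key 1234 plays the role of the '12345' in the example, kept below the toy modulus:

```python
import hashlib

# Toy textbook RSA parameters (insecure, for illustration only).
p, q = 61, 53
n_mod = p * q       # 3233
e_pub = 17          # public exponent
d_priv = 2753       # private exponent: (e_pub * d_priv) % lcm(p-1, q-1) == 1

def xor_cipher(data, key):
    """Stand-in fast symmetric cipher: keystream derived from the session key."""
    ks = hashlib.sha256(str(key).encode()).digest()
    return bytes(b ^ ks[i % len(ks)] for i, b in enumerate(data))

session_key = 1234                               # randomly chosen symmetric key
wrapped = pow(session_key, e_pub, n_mod)         # RSA-encrypt the key (public key)

page = b"this web page"
ciphertext = xor_cipher(page, session_key)       # fast pass over the bulk data

# Receiver: unwrap the session key with the private key, then decrypt the data.
recovered_key = pow(wrapped, d_priv, n_mod)
assert recovered_key == session_key
assert xor_cipher(ciphertext, recovered_key) == page
```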
4. LABVIEW INTRODUCTION
Lab VIEW (Laboratory Virtual Instrumentation Engineering
Workbench) is a platform and development environment for a
visual programming language from National Instruments. The
graphical language is named "G". Lab VIEW is commonly
used for data acquisition, instrument control, and industrial
automation on a variety of platforms including Microsoft
Windows, various flavors of UNIX, Linux, and Mac OS. The
latest version of Lab VIEW is version 8.6.1, released in
February of 2009.
LabVIEW is a program development application, much like C
or FORTRAN. LabVIEW is, however, different from those
applications in one important respect: other programming
systems use text-based languages to create lines of code, while
LabVIEW uses a graphical programming language, G, to
create programs in block diagram form.
LabVIEW, like C or FORTRAN, is a general-purpose
programming system with extensive libraries of functions for
many programming tasks. LabVIEW includes libraries for
data acquisition, data analysis, data presentation, and data
storage. A LabVIEW program is called a virtual instrument
(VI) because its appearance and operation can imitate an
actual instrument.
It is specifically designed to take measurements, analyze data,
and present results to the user. Because it has a versatile
graphical user interface and is easy to program with, it is also
ideal for simulations, presentation of ideas, and general
programming. Academic campuses worldwide use it to deliver
project-based learning. LabVIEW offers unrivaled integration
with thousands of hardware devices and provides hundreds of
built-in libraries for advanced analysis and data visualization.
With the intuitive nature of the graphical programming
environment, we can:
- Visualize and explore theoretical concepts through interactive simulations and real-world signals.
- Design projects in applications such as measurement, control, embedded, signal processing, and communication.
- Compute, simulate, and devise solutions to homework problems.
5. WORKING
The basic operation of the project designed can be illustrated
as a process of encryption in encryptor module, transmitted
through a communication medium to the decryptor module
and the retrieval of the original text file at the output of
decryptor module. The type of encryption method
implemented in this application is a mixed type of symmetric
and asymmetric encryption. One of the keys in the encryption
procedure, the scrambling pattern, must be applied in exactly
reversed order to decrypt the data, which serves the purpose of
the symmetric method, while the ability of users to log in with
different ids at the encryptor and decryptor modules serves the
asymmetric method. Hence the encryption method
implemented is of both types. As described above, the
operation of the application is divided between two vital modules.
The following diagram gives the illustration. The encryption
and decryption process takes place in two modules of the
project designed. The two modules in the project designed are:
1. Encryptor module
2. Decryptor module
5.1 Encryptor Module
The Encryptor module is the vital part of the application,
where the input text file is received for encryption. The
Encryptor module designed for the application provides the
cryptographer with user-friendly and security options. The
user-friendly options provided by the programmer include
managing the appropriate visibility of the login id options.
The encryptor module provides the facility of creating a
username and password at the encryptor in order to proceed
with the process of encryption. The username and password
provide security by preventing any person other than the
assigned cryptographer from using the application.
The user-friendly options provided for the login id are mainly
concerned with its appropriate, timely visibility and with
blocking ineligible logins that use default null usernames and
passwords. A sequence of code has been written for proper
operation of the username and password. Initially, the inputs
for the username and the password are maintained as null
strings (empty spaces) where the user can enter his username
and password, and the input text box, which shows the text
present in the input text file that needs to be encrypted, is
kept empty and visible to the user. The usernames and
passwords of eligible registered users are arranged as arrays
of usernames and passwords. Hence a case always arises
where the zero index of the array of strings is an empty string,
so an unassigned user could log in with empty strings; such
a possibility is avoided with proper coding. After entering the
username and password, an OK button is provided whose
operation decides further progress into the application. When
the OK button is pressed, or its state is changed after entering
the login id details, the application determines whether the
user is valid or invalid. Until the OK button is used, there is
no further action even after the details are entered. Once the
OK button is used, the application checks whether the
username matches the password of the concerned user. If
ineligible details are entered, the user is reported as an invalid
user; once both details match the database, the application
leads the user to the procedure of encryption. Once a valid
user has entered the details, the username, password, and OK
button disappear from the display to avoid any confusion.
Once the user login is done and the procedure is completed, a
dialog showing "encryption successful" is shown, and the
display returns to the empty username and password blocks
with the OK button.
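The empty-string guard described above can be sketched as follows; the usernames and the dictionary lookup are hypothetical, since the original check is implemented in LabVIEW:

```python
# Hypothetical registered users (the real application stores arrays of
# usernames and passwords; these names are invented for illustration).
registered = {"alice": "fft2012", "bob": "labview"}

def validate_login(username, password):
    """Reject null/empty credentials first, so an unassigned user cannot
    log in through an empty zero-index entry, then check the match."""
    if not username or not password:
        return False
    return registered.get(username) == password

assert validate_login("alice", "fft2012")        # valid user
assert not validate_login("", "")                # empty strings rejected
assert not validate_login("alice", "wrong")      # mismatched password
```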
5.2 Flow Diagram of Encryption
The flow diagram of the encryption process in the encryptor
module is:
Figure 5.1: Flow diagram of the encryption process.
The above shows the flow diagram of the encryption process.
Initially, the user needs to enter valid login id details to begin
the application; in case of unknown details, the particular user
is regarded as an invalid user.
[Flow-diagram blocks: username & password; input of the path
of the encrypted files received from the encryptor; extraction of
characters from the input file; serial-to-parallel conversion;
FFT (Fast Fourier Transform); descrambling technique; IFFT
(Inverse Fast Fourier Transform); parallel-to-serial conversion;
decrypted file.]
Once the user has logged in, the application shows a browsing
window to select the particular text file that needs to be
encrypted and secured. The user can browse for the path of the
file and select it. Later, the application requests the user to
create two empty files and provide their paths in particular
browsing windows, to store the encrypted data after the
encryption of the input text file. The characters from the input
text file are extracted for encryption using the "Read from text
file" function. After the extraction of the characters, the
"string to array" function is used to assign every available
alphabet and character its respective ASCII code.
5.3. ASCII Codes
The American Standard Code for Information Interchange is a
character-encoding scheme originally based on the English
alphabet. ASCII codes represent text in computers,
communications equipment, and other devices that use text.
Most modern character-encoding schemes are based on
ASCII, though they support many additional characters.
ASCII developed from telegraphic codes. Its first commercial
use was as a seven-bit teleprinter code promoted by Bell data
services. Work on the ASCII standard began on October 6,
1960, with the first meeting of the American Standards
Association's (ASA) X3.2 subcommittee. The first edition of
the standard was published during 1963, a major revision
during 1967, and the most recent update during 1986.
Compared to earlier telegraph codes, the proposed Bell code
and ASCII were both ordered for more convenient sorting
(i.e., alphabetization) of lists and added features for devices
other than teleprinters.
After the conversion of characters into ASCII codes, the
resulting series of data needs to be converted into parallel
blocks of data, each block containing 8 values, because the
Fast Fourier Transform used for domain conversion is
implemented for an input of 8 points. For this purpose a
serial-to-parallel converter is designed as follows. All the
data is placed in a “For Loop” whose count N is fixed to the
total number of elements in the input file divided by eight, so
that the corresponding number of 8-value blocks is formed.
The number of elements in the input file is obtained using the
“String Length” function and a “Divide” function with the
constant eight, which together give the value of N. The
“Array Subset” function is then used to form the 8-value
arrays; its index input is given by the iteration value and a
“Multiply” function with the constant eight. An “Array to
Cluster” function then feeds each block to the FFT algorithm,
which converts the data from the time domain to the
frequency domain. The outputs of the FFT are scrambled
using the scrambling technique, and the scrambled values
undergo the Inverse Fast Fourier Transform to give encrypted
time-domain values. These outputs, however, are complex; if
they were passed directly to the decryptor section, the
application would round each complex value to its real part,
discarding the imaginary part. Hence the real and imaginary
parts are extracted from the complex outputs, and all the real
values are stored in one file and all the imaginary values in
another, so that no information is lost.
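The whole encryption chain described above (blocking, FFT, scrambling, IFFT, splitting into real and imaginary outputs) can be sketched in Python. The naive `dft8` routine stands in for the paper's 8-bit FFT/IFFT VIs, and `pattern` is an example key of our choosing, not the paper's:

```python
import cmath

def dft8(x, inverse=False):
    """Naive 8-point DFT / inverse DFT, standing in for the 8-bit FFT/IFFT VIs."""
    sign = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / 8) for n in range(8))
           for k in range(8)]
    return [v / 8 for v in out] if inverse else out

def encrypt(codes, pattern):
    """Blocks of 8 -> FFT -> scramble -> IFFT; returns the real and imaginary streams."""
    real_file, imag_file = [], []
    for i in range(len(codes) // 8):             # N = string length / 8
        block = codes[8 * i:8 * (i + 1)]         # "Array Subset", index = i * 8
        spectrum = dft8(block)                   # time -> frequency domain
        scrambled = [spectrum[p] for p in pattern]    # secret reordering (the key)
        encrypted = dft8(scrambled, inverse=True)     # encrypted time-domain values
        real_file += [v.real for v in encrypted]      # written to the first file
        imag_file += [v.imag for v in encrypted]      # written to the second file
    return real_file, imag_file

pattern = [3, 0, 6, 1, 7, 2, 5, 4]               # hypothetical scrambling key
re_vals, im_vals = encrypt([ord(c) for c in "SECRETS!"], pattern)
```

Keeping both streams matters: discarding `im_vals` would lose information, exactly the problem the two-file scheme avoids.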
Table 1.1: Chart for ASCII codes
5.4. LabVIEW Implementation of Encryptor
Front Panel of Encryptor
The front panel of the Encryptor module shows the various
security options of the login id and the input text space,
whose empty area displays the data of the text file that is
being encrypted for security purposes. During execution,
after the login details are entered, the code requests the input
text file and the empty files that will store the encrypted data.
The Encryptor module is coded in a user-friendly manner,
with appropriate visibility of the display options and proper
message prompts to the user.
Block Diagram of Encryptor
The block diagram of the Encryptor module shows the
sequence of operations occurring while the application runs.
The block diagram shown in the figure includes the extraction
of characters from the input text file, the conversion of the
obtained data from serial to parallel blocks, the scrambling
block, and the saving of the encrypted data in two files, one
holding the real parts of the output values and the other the
imaginary parts. A complex-to-polar conversion function is
used to obtain the particular data for the two files. In the
figure, encrypted part 1 and encrypted part 2 in the last two
sequences show the outputs of the Encryptor module as real
and imaginary parts.
Figure 5.2: Front Panel of the Encryptor module
Figure 5.3: Block Diagram of the Encryptor module
As mentioned earlier in the introduction to LabVIEW usage,
any code becomes clearer and better illustrated for other
users when it is broken into individual pieces based on their
function and each piece is designed as an individual VI. Each
such VI is then used as a subVI in the main VI, which makes
the code easier to understand and illustrate. In the Encryptor
module, certain functions are separated out and VIs are
created for them; these VIs are then added to the main VI.
The subVIs added to the Encryptor module VI are:
1. Design for creation of username and password.
2. Design for the scrambling pattern with FFT and IFFT subVIs.
3. Design for the 8-bit FFT algorithm.
4. Design for the 8-bit IFFT algorithm.
5.4.1. Block Diagram for Designing Username and
Password in Encryptor Module
The following figure gives the design of the username and
password created for the Encryptor module. The username
and password functions are designed with various
user-friendly options, mainly concerning their appropriate,
timely visibility and the rejection of ineligible logins that use
default null usernames and passwords. A sequence of
operations has been coded for the proper working of the
username and password.
Initially, the inputs for the username and the password are
maintained as null strings, the empty spaces where the user
can enter his username and password, and the input text box,
which shows the text of the input file that needs to be
encrypted, is kept empty and visible to the user. The
usernames and passwords of eligible registered users are
arranged as arrays of usernames and passwords. A case
therefore always arises where the zero index of the array of
strings is an empty string, so an unassigned user could log in
with empty strings; this possibility is avoided with proper
coding. After the username and password are entered, an OK
button decides the further proceedings of the application.
When the OK button is pressed after the login details are
entered, the application continues and decides whether the
user is valid or invalid. Until the OK button is used there is
no further action, even after the details are entered. Once the
OK button is used, the step executes in which it is checked
whether the username matches the corresponding password
in the database.
In programming an efficient and secure login operation,
various functions are implemented in a sequence of steps.
Initially, to display the empty username and password spaces,
a “local variable” for each of the username, the password and
the input/output text space is created and an empty string is
given as input so that they display nothing. To enable the
visibility of the username, password and OK button, their
“Visible” properties are created and set to “true”, and the
visibility of the text space is set to “false” until the login
details are provided. If a user tries to enter an empty string as
username and password, no action takes place; this is
implemented using a “While Loop”, and an “invalid user”
message pops up. For convenience, an “OK button” is
created which needs to be used after entering the login
details; without it there is no action as output. After the
username and password are entered, the details are verified
against the database using “Search 1D Array”, input string
constants and the “Index Array” function. When the details
match, a “Valid User” message is given and the login options
disappear, which is done by setting their “Visible” properties
to “false”. The text space then appears when a “true” is given
to its “Visible” property. After the encryption process, the
“Encryption Successful” message is given.
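The login logic described above can be sketched as follows; the example usernames, passwords and the function name `login` are illustrative assumptions of ours, while the index lookup mirrors the “Search 1D Array” / “Index Array” combination:

```python
# Example database entries (ours, not the paper's); in the VI these are
# string-constant arrays wired into "Search 1D Array" and "Index Array".
usernames = ["alice", "bob"]
passwords = ["fft8", "ifft8"]

def login(user, pwd):
    """Return True only for a registered, non-empty username/password pair."""
    if user == "" or pwd == "":        # reject the default empty strings
        return False
    try:
        idx = usernames.index(user)    # "Search 1D Array": find the username
    except ValueError:
        return False                   # username not in the database: invalid user
    return passwords[idx] == pwd       # "Index Array" + compare the password
```

Rejecting empty strings up front closes the hole where the zero index of an uninitialised string array would let an unassigned user log in.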
5.4.2. Block Diagram for Designing Scrambling
Pattern
The scrambling pattern is one of the important factors for the
security of data during transmission. The scrambling pattern
created by the cryptographer acts as the key that protects the
data. Scrambling is implemented by changing the order of
the variables in a desired fashion, different from the original
order; when the positions of the variables are changed, they
are said to be scrambled. In the block diagram
implementation the scrambling pattern is designed using the
“Unbundle” function.
Figure 5.4: Design for scrambling pattern with FFT and
IFFT subVIs
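The scrambling idea, reordering values by a secret permutation, can be sketched as follows (the example `pattern` is ours, not the paper's key):

```python
def scramble(values, pattern):
    """Reorder `values` according to `pattern` (the key)."""
    return [values[p] for p in pattern]

def descramble(values, pattern):
    """Invert the reordering done by `scramble`."""
    out = [None] * len(values)
    for j, p in enumerate(pattern):
        out[p] = values[j]
    return out

pattern = [3, 0, 6, 1, 7, 2, 5, 4]          # example key
data = [10, 11, 12, 13, 14, 15, 16, 17]
print(scramble(data, pattern))              # [13, 10, 16, 11, 17, 12, 15, 14]
```

Only a party holding the same `pattern` can run `descramble` and restore the original order, which is why the pattern acts as the key.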
5.4.3. Block Diagram for Designing an 8-Bit FFT
Algorithm
In the encryption process, the different types of values
obtained from the character-to-ASCII-code conversion and
the serial-to-parallel conversion are considered as a
“Cluster”. A cluster is a set of values of different types,
similar to an array. The clusters formed here hold 8 values
each. For domain conversion, an 8-bit FFT algorithm is
implemented to convert the data from the time domain to the
frequency domain. The 8-bit FFT butterfly diagram is
implemented using the LabVIEW functions “Bundle”,
“Unbundle”, “Multiply”, “Add” and “Subtract”. The
following figure gives the block diagram of the 8-bit FFT
algorithm.
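The butterfly structure that the Bundle/Unbundle/Multiply/Add/Subtract wiring realises corresponds to the standard radix-2 decimation-in-time FFT; a hedged Python sketch of that structure (not the LabVIEW wiring itself):

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT.

    Each recursion level performs the butterfly stage: a twiddle-factor
    "Multiply" followed by an "Add" and a "Subtract", as in the diagram.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                 # butterflies over the even samples
    odd = fft(x[1::2])                  # butterflies over the odd samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle "Multiply"
        out[k] = even[k] + tw           # "Add" output of the butterfly
        out[k + n // 2] = even[k] - tw  # "Subtract" output of the butterfly
    return out
```

For an 8-point input this unrolls into exactly three butterfly stages, matching the fixed 8-bit diagram built from LabVIEW primitives.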
Figure 5.5: Design for 8-bit FFT algorithm
5.4.4. Block Diagram for Designing 8-Bit IFFT
Algorithm
In encryption, after the scrambling the data needs to undergo
a domain conversion back to the time domain. Thus, an 8-bit
IFFT is used to convert the frequency-domain data to
time-domain data. The following figure gives the
implementation of the 8-bit IFFT.
Figure 5.6: Design for 8-bit IFFT algorithm
5.5.1. Decryptor Module
The Decryptor module is the other vital part of the
application, where the encrypted text files received from the
encryptor are processed. The Decryptor module provides the
cryptographer with user-friendly and security options. The
user-friendly options provided by the programmer include
managing the appropriate visibility of the login id options.
The Decryptor module provides the facility of creating a
username and password at the decryptor, similar to that of
the encryptor. The username and password prevent any
person other than the assigned cryptographer from using the
application. The various user-friendly options are provided
in the decryptor just as in the encryptor.
There is a possibility that a user different from the one at the
encryptor logs in with his or her details present in the
database to conduct the decryption. Thus different but
eligible users can use the application at the encryptor and
decryptor ends, which adds a feature of asymmetric
encryption to the application.
The following figure gives the flow chart of the decryption
process. Initially, once the login details are verified, the request
for the encrypted files from the encryptor is displayed, and
after the paths are specified, the real and imaginary data are
extracted from the respective files. The real and imaginary
data are combined into complex data which is used for the
further process. The complex data undergoes a domain
conversion from the time domain to the frequency domain
using the 8-bit FFT algorithm, which gives an array of blocks
of 8 values. The elements in each block undergo
descrambling, the inverse of the scrambling at the encryptor;
thus the knowledge of the scrambling pattern must be
possessed by both cryptographers, so the application is a type
of symmetric encryption alongside the asymmetric aspect
described above. After the descrambling, the data undergoes
a domain conversion from the frequency domain to the time
domain using the IFFT algorithm.
Thus an 8-bit IFFT is implemented, similar to that of the
encryptor. The IFFT gives parallel data of complex values
whose real parts are approximately equal to the ASCII values
of the respective text characters. This parallel data is
converted to serial data using a parallel-to-serial converter,
implemented by means of concatenation. In the code, the
values are converted to characters using the “Unsigned Array
to String” function, shift registers store the previous strings,
and “Concatenate Strings” joins the strings to form the whole
text of the file. At the end, a “Write to Text File” function is
used to write the characters into a file.
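The decryption chain described above can be sketched in Python; as before, `dft8` is a naive stand-in for the 8-bit FFT/IFFT VIs, and `pattern` is an example key, not the paper's:

```python
import cmath

def dft8(x, inverse=False):
    """Naive 8-point DFT / inverse DFT, standing in for the 8-bit FFT/IFFT VIs."""
    sign = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / 8) for n in range(8))
           for k in range(8)]
    return [v / 8 for v in out] if inverse else out

def decrypt(real_vals, imag_vals, pattern):
    """Rebuild complex data, FFT, descramble, IFFT, round back to characters."""
    values = [complex(r, i) for r, i in zip(real_vals, imag_vals)]  # join the two files
    text = []
    for i in range(len(values) // 8):
        block = values[8 * i:8 * (i + 1)]
        spectrum = dft8(block)                    # encrypted time -> frequency domain
        descrambled = [0j] * 8
        for j, p in enumerate(pattern):           # invert the scrambling key
            descrambled[p] = spectrum[j]
        decoded = dft8(descrambled, inverse=True)  # frequency -> time domain
        text += [chr(round(v.real)) for v in decoded]  # ~ASCII values -> characters
    return "".join(text)                          # "Concatenate Strings"
```

Rounding the real parts is what recovers exact ASCII codes from values that are only approximately integral after the floating-point transforms.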
5.5.2. LabVIEW Implementation of Decryptor Module
Front Panel of Decryptor Module
The front panel of the Decryptor module shows the various
security options of the login id and the output text space,
whose empty area displays the data of the text file recovered
from the encrypted input. During execution, after the login
details are entered, the code requests the encrypted text files
and an empty file to store the retrieved data. The Decryptor
module is coded in a user-friendly manner, with appropriate
visibility of the display options and proper message prompts
to the user.
Block Diagram of Decryptor Module
The block diagram of the Decryptor module shows the
sequence of operations occurring while the application runs.
The block diagram shown in the figure includes the extraction
of characters from the input encrypted text files, the
descrambling block and the concatenation of the parallel
strings obtained. After the concatenation, the strings are
written into the empty file which was created at the beginning
of the application.
Figure 5.7: Flow chart of the decryption process
(USERNAME & PASSWORD → input of the path of the
encrypted files received from the encryptor → extraction of
characters from the input file → serial to parallel → FFT
(Fast Fourier Transform) → descrambling technique → IFFT
(Inverse Fast Fourier Transform) → parallel to serial →
decrypted file)
Figure 5.8: Front Panel of the Decryptor Module
Figure 5.9: Block Diagram of the Decryptor Module
RESULTS:
a) ENCRYPTION OUTPUT:
Figure 6.1: Encryption output
b) DECRYPTION OUTPUT:
Figure 6.2: Decryption output
FUTURE ASPECTS:
The application possesses immense scope for further
development, which centres on the factors that provide
security for the transmission. The application can be further
designed for Word documents and other types of files (PDF,
Word document, etc.). The security provided by the
scrambling pattern can be enhanced by designing a Fast
Fourier Transform (FFT) of a higher order than 8 bits, which
in turn needs a higher-order scrambling pattern. At present
there are 8!, i.e. 40,320, possible scrambling patterns; if, for
example, a 16-bit scrambling pattern is developed, there are
16!, i.e. 20,922,789,888,000, possible patterns, which makes
the scheme practically impossible to crack. For a 16-bit
scrambling pattern, a 16-bit serial-to-parallel conversion has
to be developed. In the present application the encrypted data
is split into two files, for the real and the imaginary values;
the data could be split into more than two files, which adds
to the security: a hacker in possession of only one of the files
cannot recover the original file, so the security increases with
the number of split files. The proposed encryption technique
can also be implemented for audio, image and other inputs.
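The pattern counts quoted above are simply 8! and 16!, and can be verified in a couple of lines:

```python
import math

# Number of possible scrambling patterns (permutations) for block sizes 8 and 16.
print(math.factorial(8))    # 40320
print(math.factorial(16))   # 20922789888000
```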
CONCLUSIONS
The application designed provides very high security in the
transmission of a text file. It can be used as an embedded
encryptor in programming scenarios to secure vital code, and
for secure transmission in research departments. The high
level of security is provided by the scrambling patterns, user
logins, domain conversion, etc. The future scope of this
application relies on the development of higher-order FFT
algorithms, which proportionally increase the possible
number of scrambling patterns for higher security.
REFERENCES:
[1] http://www.honeypage.com/SPEECH_ENCRYPTION_AND_DECRYPTION_USING_DSP_PROCESSOR.html
[2] S. Sridharan, E. Dawson and B. Goldburg, “Fast Fourier
transform based speech encryption system”, IEEE
Proceedings-I, Vol. 138, No. 3, June 1991.
[3] Jeffrey Travis and Jim Kring, “LabVIEW for Everyone”.
[4] S. Sridharan, E. Dawson and J. O'Sullivan, “Encryption
using Fast Fourier Transform techniques”.
[5] Cory L. Clark, “LabVIEW for Digital Signal Processing
and Digital Communication”.
[6] John G. Proakis, “Digital Signal Processing”.
[7] B. Goldburg, S. Sridharan and E. Dawson,
“Cryptanalysis of frequency domain”, Vol. 140, Issue 4,
1993, pp. 235-239.
[8] W. Smith and J. Smith, “The handbook of real-time
Fourier transforms”.
[9] R. M. Gray and J. W. Goodman, “Fourier transforms:
An introduction for engineers”.