Although DNA computing has emerged as a new computing paradigm with massive parallel computing capabilities, the large number of DNA strands required for larger computational problems remains a stumbling block to its development as a practical computing method. In this paper, we propose a modification to implement a physical experiment for multiplying two Boolean matrices with DNA computing. The Truncated Matrices approach reduces the number and lengths of the DNA sequences used to compute the problem.
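As a point of reference for what the DNA experiment computes in parallel, here is the Boolean matrix product in conventional software terms (an illustrative sketch, not the paper's DNA protocol): entry (i, j) of the product is the OR over k of A[i][k] AND B[k][j].

```python
def bool_mat_mult(A, B):
    """Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(inner)))
             for j in range(cols)] for i in range(rows)]

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 1]]
print(bool_mat_mult(A, B))  # [[0, 1], [1, 1]]
```

Every AND term in the inner loop is independent, which is exactly the kind of work a DNA library can evaluate simultaneously.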
This document discusses using a particle swarm optimization algorithm to solve the K-node set reliability optimization problem for distributed computing systems. The K-node set reliability optimization problem aims to maximize the reliability of a subset of k nodes in a distributed computing system while satisfying a specified capacity constraint. It presents the problem formulation and describes particle swarm optimization, a metaheuristic optimization technique inspired by swarm intelligence. The proposed algorithm applies a discrete particle swarm optimization approach to solve the K-node set reliability optimization problem, which is demonstrated on an example distributed computing system topology.
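A minimal sketch of the discrete-PSO idea on a toy instance (the node reliabilities, capacities, budget, and the simple swap-based move are all illustrative assumptions, not the paper's exact formulation): maximize the product of the chosen nodes' reliabilities subject to a capacity budget.

```python
import random

random.seed(1)
rel = [0.90, 0.95, 0.85, 0.99, 0.80, 0.92]   # per-node reliability (toy values)
cap = [3, 5, 2, 6, 1, 4]                      # per-node capacity cost (toy values)
K, BUDGET = 3, 12

def fitness(x):
    # product of reliabilities of chosen nodes; 0 if infeasible
    chosen = [i for i, b in enumerate(x) if b]
    if len(chosen) != K or sum(cap[i] for i in chosen) > BUDGET:
        return 0.0
    prod = 1.0
    for i in chosen:
        prod *= rel[i]
    return prod

def swap_move(x):
    # discrete "velocity": swap two positions of the 0/1 selection vector
    y = list(x)
    i, j = random.sample(range(len(y)), 2)
    y[i], y[j] = y[j], y[i]
    return y

swarm = [random.sample([1] * K + [0] * (len(rel) - K), len(rel))
         for _ in range(20)]
best = max(swarm, key=fitness)
for _ in range(200):
    # each particle keeps the best of: itself, a local move, a move toward best
    swarm = [max((p, swap_move(p), swap_move(best)), key=fitness)
             for p in swarm]
    best = max(swarm + [best], key=fitness)
print(best, round(fitness(best), 4))
```

The swap move preserves the number of selected nodes, so every particle always encodes a candidate K-node set.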
ON THE DUALITY FEATURE OF P-CLASS PROBLEMS AND NP COMPLETE PROBLEMS – cscpconf
In terms of computational complexity, P-class (abbreviated as P) problems are polynomial-time solvable by a deterministic Turing machine, while NP-complete (abbreviated as NPC) problems are polynomial-time solvable by a nondeterministic Turing machine. P and NPC problems are usually treated as different classes. Determining whether it is possible to solve NPC problems quickly is one of the principal unsolved problems in computer science today. In this paper, a new perspective is provided: both P problems and NPC problems exhibit a duality feature in terms of the asymptotic efficiency of algorithms.
Network Coding for Distributed Storage Systems (Group Meeting Talk) – Jayant Apte, PhD
Reviews the work of Koetter et al. and Dimakis et al. The former provides an algebraic framework for linear network coding. The latter reduces the so-called repair problem to a single-source multicast network-coding problem and shows that there is a tradeoff between the amount of data stored in a distributed storage system and the amount of data transfer required to repair the system if a node (hard drive) fails.
Hamming net based Low Complexity Successive Cancellation Polar Decoder – RSIS International
This paper aims to implement a hybrid Polar encoder using knowledge of mutual information and channel capacity. Further, a Hamming-weight successive cancellation decoder is simulated with QPSK modulation in the presence of additive white Gaussian noise. Experiments on the effect of channel polarization show that, for a 256-bit data stream, 30% of the channels have zero capacity and 49% of the channels have a one-bit capacity. The decoding complexity is reduced to almost half that of the conventional successive cancellation decoding algorithm. However, the required SNR of 7 dB is still achieved at the targeted BER of 10^-4. The penalty paid is the training time required at the decoding end.
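The polarization effect behind those channel-capacity counts can be sketched with the standard Bhattacharyya-parameter recursion on a binary erasure channel (a common stand-in; the paper's AWGN/QPSK setting and its exact 30%/49% figures are not reproduced by this toy model):

```python
# Channel polarization sketch on a BEC: starting from erasure rate 0.5,
# the recursion z -> {2z - z^2, z^2} polarizes 2^8 = 256 synthetic
# channels toward capacity 0 or 1 (capacity of a BEC sub-channel is 1 - z).
z = [0.5]
for _ in range(8):
    z = [2 * v - v * v for v in z] + [v * v for v in z]

caps = [1.0 - v for v in z]
zero_cap = sum(c < 0.01 for c in caps)   # effectively useless channels
full_cap = sum(c > 0.99 for c in caps)   # effectively perfect channels
print(len(caps), zero_cap, full_cap)
```

A successive cancellation decoder then only has to work hard on the small fraction of channels whose capacity has not yet polarized to 0 or 1.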
Reliability Improvement in Logic Circuit Stochastic Computation – Waqas Tariq
Defects and faults arise from physical imperfections and the noise susceptibility of the analog circuit components used to create digital circuits, resulting in computational errors. A probabilistic computational model is needed to quantify and analyze the effect of noisy signals on computational accuracy in digital circuits. This model computes the reliability of digital circuits, meaning that the inputs, the outputs, and the implemented logic function need to be treated probabilistically. The purpose of this paper is to present a new architecture for designing noise-tolerant digital circuits. The approach we propose uses a class of single-input, single-output circuits called the Reliability Enhancement Network Chain (RENC). A RENC is a concatenation of n simple logic circuits, each called a Reliability Enhancement Network (REN). Each REN can raise the reliability of a digital circuit to a higher level, so the reliability of the circuit can approach any desired level when a RENC composed of a sufficient number of RENs is employed. Moreover, the proposed approach is applicable to the design of any logic circuit implemented with any logic technology.
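The chaining effect can be illustrated with a simple stand-in model (not the paper's REN circuit): treat each stage as a majority voter over three copies of a signal that is independently correct with probability p, so one stage maps p to p^3 + 3p^2(1 - p), and chaining stages pushes reliability toward 1.

```python
# Illustrative reliability-amplification model for a chain of n stages.
def ren_stage(p):
    # majority vote over three independent copies, each correct w.p. p
    return p**3 + 3 * p**2 * (1 - p)

p = 0.75
history = [p]
for _ in range(4):            # a 4-stage chain
    p = ren_stage(p)
    history.append(round(p, 6))
print(history)
```

For any starting reliability above 0.5, each stage strictly increases it, which is the qualitative behavior the RENC relies on.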
Automatic DNA Sequence Generation for Secured Effective Multi-Cloud Storage – IOSR Journals
Abstract: The main goal of this paper is to propose an algorithm that implements data hiding in DNA sequences to increase confidentiality and complexity, from a software point of view, in cloud computing environments. By utilizing some interesting features of DNA sequences, data hiding is applied in the cloud. The proposed algorithm is based on binary coding and complementary pair rules. A DNA reference sequence is chosen, and the secret data M is hidden in it. After applying several steps, the result M´´´ is uploaded to the cloud environment. The process of identifying and extracting the original data M, hidden in the DNA reference sequence, begins once clients decide to use the data. Furthermore, security analysis is presented to inspect the complexity of the algorithm. In addition, better privacy and ensured data availability can be achieved by dividing the user's data block into pieces and distributing them among the available SPs in such a way that no fewer than a threshold number of SPs must take part in successful retrieval of the whole data block. In this paper, we propose a secured cost-effective multi-cloud storage (SCMCS) model in cloud computing that provides an economical distribution of data among the available SPs in the market, offering customers data availability as well as secure storage.
Keywords: DNA sequence; DNA base pairing rules; complementary rules; DNA binary coding; cloud service
provider.
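A minimal sketch of binary-coding-based hiding in a DNA reference sequence. The base-to-bit mapping (A=00, C=01, G=10, T=11) and the simple interleaving scheme below are illustrative assumptions, not the paper's exact complementary-pair algorithm:

```python
B2BITS = {'A': '00', 'C': '01', 'G': '10', 'T': '11'}
BITS2B = {v: k for k, v in B2BITS.items()}

def hide(reference, secret_bits):
    # interleave: one reference base, then one base encoding 2 secret bits
    assert len(secret_bits) % 2 == 0 and len(reference) * 2 >= len(secret_bits)
    out = []
    pairs = iter([secret_bits[i:i + 2] for i in range(0, len(secret_bits), 2)])
    for base in reference:
        out.append(base)
        pair = next(pairs, None)
        if pair is not None:
            out.append(BITS2B[pair])
    return ''.join(out)

def extract(stego, n_bits):
    # secret bases sit at odd positions; reference bases at even positions
    secret = stego[1::2][:n_bits // 2]
    return ''.join(B2BITS[b] for b in secret)

ref = "ACGTACGT"
secret = "1101"                      # the secret data M as a bit string
stego = hide(ref, secret)
print(stego, extract(stego, len(secret)))  # ATCCGTACGT 1101
```

A real scheme would also apply the complementary-pair transformations to produce M´´´ before upload; this sketch only shows the binary-coding layer.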
A Modified Dna Computing Approach To Tackle The Exponential Solution Space Of... – ijfcstjournal
This document summarizes a research paper that proposes a modified DNA computing approach to solve the graph coloring problem. The approach colors the vertices of a graph one by one, expanding DNA strands encoding promising solutions at each step and discarding infeasible solutions. This avoids an exponential number of DNA strands by not expanding the entire solution space at once. The approach is simulated on a sample 12-vertex graph, requiring fewer DNA strands than typical approaches. The modified approach provides an improvement over previous DNA computing methods for graph coloring by reducing the solution space in a step-wise manner.
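The step-wise idea can be sketched in software terms: extend partial colorings one vertex at a time and drop infeasible ones, instead of generating all k^n colorings up front (the analogue of not synthesizing the full DNA strand library). The graph and k below are illustrative, not the paper's 12-vertex example.

```python
def color_stepwise(adj, k):
    partials = [[]]                      # list of partial color assignments
    for v in range(len(adj)):
        nxt = []
        for p in partials:
            for c in range(k):
                # keep only extensions consistent with already-colored neighbors
                if all(p[u] != c for u in adj[v] if u < v):
                    nxt.append(p + [c])
        partials = nxt                   # infeasible partials are discarded here
    return partials                      # all proper k-colorings

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
sols = color_stepwise(triangle, 3)
print(len(sols))  # 6 proper 3-colorings of a triangle
```

The number of live partial solutions at each step is bounded by the number of feasible colorings of the prefix, which is what keeps the strand count down.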
A Design and Solving LPP Method for Binary Linear Programming Problem Using D... – ijistjournal
This document describes a method for solving binary linear programming problems using DNA computing. It involves representing the variables and constraints of the problem as DNA strands. All possible combinations of 0s and 1s for the variables are generated as DNA sequences. Constraints are applied by adding complementary strands tagged with fluorescent markers. Strands that satisfy the constraints will hybridize, revealing feasible solutions. The value of the objective function is then calculated for each feasible solution to determine the optimal one. A hypothetical example problem is worked through step-by-step to demonstrate the method.
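A software analogue of that DNA procedure: enumerate all 0/1 assignments (the strand library), "hybridize" by filtering on the constraints, then score the survivors with the objective. The toy problem below (maximize 3x + 2y + z subject to x + y + z <= 2) is illustrative, not the paper's example.

```python
from itertools import product

def solve_binary_lp(n, objective, constraints):
    # enumerate every 0/1 assignment, keep those satisfying all constraints
    feasible = [x for x in product((0, 1), repeat=n)
                if all(c(x) for c in constraints)]
    # pick the feasible assignment with the best objective value
    return max(feasible, key=objective)

obj = lambda x: 3 * x[0] + 2 * x[1] + x[2]
best = solve_binary_lp(3, obj, [lambda x: sum(x) <= 2])
print(best, obj(best))  # (1, 1, 0) 5
```

The DNA version evaluates the whole `product((0, 1), repeat=n)` library in one hybridization step rather than in a loop, which is where the parallelism comes from.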
A Design and Solving LPP Method for Binary Linear Programming Problem Using D... – ijistjournal
Molecular computing is a discipline that aims at harnessing individual molecules for computational purposes. This paper presents applied mathematical sciences using DNA molecules. The major achievements are outlined, along with the potential advances and the challenges for practitioners in the foreseeable future. Binary optimization in linear programming is an intensive research area in the field of DNA computing. This paper presents research on a design and implementation method to solve a binary linear programming problem using DNA computing. DNA sequences of fixed length directly represent all possible combinations in different boxes. Hybridization is performed to form double-stranded molecules, whose lengths visualize the optimal solution based on a fluorescent material. Here, the maximization problem is converted into a DNA-computable form, complementary strands are found to solve the problem, and the optimal solution is suggested subject to the constraints stipulated by the problem.
K-mer frequency statistics over biological sequences is a very important problem in biological information processing. This paper addresses the problem of indexing k-mers in large-scale DNA sequence reads within limited memory space and time. A hash algorithm is used to build the index, the index model is set up on base pairing, and k-mer length statistics are obtained quickly, avoiding a search over all indexed sequences. The program uses a hash table to build the index and the search model, and uses the zipper (chaining) method to resolve address conflicts. Time-complexity analysis and experimental results show that, compared with traditional indexing methods, the algorithm's performance improvement is obvious, and it is well suited to k-mer lengths that vary over a wide range.
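The described scheme can be sketched as a fixed-size hash table over k-mers with separate chaining (the "zipper" method) to resolve address collisions. The table size and use of Python's built-in hash are illustrative choices:

```python
def build_index(seq, k, table_size=64):
    table = [[] for _ in range(table_size)]          # chained buckets
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        bucket = table[hash(kmer) % table_size]
        for entry in bucket:                          # walk the chain
            if entry[0] == kmer:
                entry[1].append(i)                    # known k-mer: add position
                break
        else:
            bucket.append((kmer, [i]))                # collision or empty: extend chain
    return table

def lookup(table, kmer):
    for key, positions in table[hash(kmer) % len(table)]:
        if key == kmer:
            return positions
    return []

idx = build_index("ACGTACGTAC", 3)
print(lookup(idx, "ACG"))  # positions 0 and 4
```

Lookups touch only one bucket's chain rather than the whole sequence set, which is the point of the index.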
A Bibliography on the Numerical Solution of Delay Differential Equations.pdf – Jackie Gold
This document is a bibliography containing references to papers and technical reports on numerical methods for solving delay differential equations. It includes 58 references divided into sections on dense-output methods for ordinary differential equations, numerical methods for delay differential equations, dynamics and stability of delay differential equations, applications of delay differential equations, and functional differential equations. The bibliography aims to provide an introduction to earlier works in the field as well as more recent publications that are available through online search tools.
A Study on DNA based Computation and Memory Devices – Editor IJCATR
The present study delineates Deoxyribonucleic Acid (DNA) based computing and storage devices, which have a promising future in the vast era of information technology. The traditional devices in use are mostly made of silicon; they are costly and have physical limitations that cause electron leakage and short circuits. So there is a need for materials capable of fast processing with vast memory storage. DNA, a biomolecule, has these characteristics and is capable of providing ample storage. In classical computing devices, electronic logic gates are the elements that store and transform information. Designing an appropriate sequence or network of “store” and “transform” operations (in the sense of building a device or writing a program) is equivalent to preparing a computation. In DNA-based computation, the situation is analogous; the main difference is the type of computing device, since in this new method of computing DNA molecules are deployed for data processing instead of electronic gates. Moreover, the inherent massive parallelism of DNA computing may lead to methods for solving some intractable computational problems. The aim of this research study is to analyze the logical features and memory formation of DNA biomolecules in order to achieve increased speed, accuracy, and vast storage.
DCE: A NOVEL DELAY CORRELATION MEASUREMENT FOR TOMOGRAPHY WITH PASSIVE REAL... – ijdpsjournal
Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation; furthermore, active tomography consumes explicit probing traffic, limiting scalability. To address the first issue we propose a novel Delay Correlation Estimation methodology, named DCE, with no need for synchronization or special cooperation. For the second issue we develop a passive realization mechanism that merely uses regular data flow, without explicit bandwidth consumption. Extensive simulations in OMNeT++ are made to evaluate its accuracy, where we show that the DCE measurement closely matches the true value. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic levels and packet sizes.
A simple and fast exact clustering algorithm defined for complex – Alexander Decker
This document proposes a new clustering method for complex networks based on prime numbers. The method defines clusters of nodes that have the same number of input/output connections and paths of equal length. It encodes the complete paths of each node with prime numbers to calculate a unique CPS-code for that node. Nodes with the same CPS-code are grouped into the same cluster. The algorithm was tested on networks with up to 500 nodes and showed fast performance, analyzing networks in seconds or minutes. The simple method allows classification of nodes in complex networks for applications across different scientific fields.
A simple and fast exact clustering algorithm defined for complex – Alexander Decker
This document proposes a new clustering method for complex networks based on prime numbers. It defines node clusters as groups of nodes with the same complete path sets (CPS), where a complete path is a path from a node that includes each other node only once. Each complete path is assigned a prime number as a code based on its length. A node's CPS-code is the product of the codes of all its complete paths. Nodes are clustered together if they have the same or similar CPS-codes. The method was tested on a network with 500 nodes and over 100,000 connections, which was clustered in 80 seconds.
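A hedged sketch of the CPS-code idea on a tiny graph (the coding of paths by length-indexed primes and the "maximal simple path" reading of complete paths are illustrative interpretations, not the paper's exact definitions):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def maximal_paths(adj, start):
    # all simple paths from `start` that cannot be extended further
    out = []
    def dfs(path):
        ext = [v for v in adj[path[-1]] if v not in path]
        if not ext:
            out.append(path)
        for v in ext:
            dfs(path + [v])
    dfs([start])
    return out

def cps_code(adj, v):
    # product of one prime per path, chosen by path length; equal products
    # mean identical multisets of path lengths (prime factorization is unique)
    code = 1
    for p in maximal_paths(adj, v):
        code *= PRIMES[len(p) - 1]
    return code

# a 4-cycle is vertex-symmetric, so every node should land in one cluster
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
codes = {v: cps_code(cycle, v) for v in cycle}
clusters = {}
for v, c in codes.items():
    clusters.setdefault(c, []).append(v)
print(list(clusters.values()))  # one cluster containing all four nodes
```

Using primes makes the code order-independent: the product encodes the multiset of path lengths regardless of enumeration order.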
An Optimized Parallel Algorithm for Longest Common Subsequence Using Openmp –... – IRJET Journal
This document summarizes research on developing parallel algorithms to optimize solving the longest common subsequence (LCS) problem. LCS is commonly used for sequence comparison in bioinformatics. Traditional sequential dynamic programming algorithms have complexity of O(mn) for sequences of lengths m and n. The document reviews parallel algorithms developed using tools like OpenMP and GPUs like CUDA to reduce computation time. It proposes the authors' own optimized parallel algorithm for multi-core CPUs using OpenMP.
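For reference, this is the sequential O(mn) dynamic program that the surveyed parallel algorithms speed up: L[i][j] holds the LCS length of the prefixes X[:i] and Y[:j].

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 ("GTAB")
```

Parallel versions typically process anti-diagonals of L concurrently, since cells on the same anti-diagonal have no mutual dependencies.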
Solving N-queen Problem Using Genetic Algorithm by Advance Mutation Operator – IJECEIAES
The N-queen problem represents a class of constraint problems and belongs to the set of NP-hard problems. It is applicable in many areas of science and engineering. In this paper the N-queen problem is solved using a genetic algorithm. A new genetic algorithm is proposed which uses a greedy mutation operator. This new mutation operator solves the N-queen problem very quickly. The proposed algorithm is applied to some instances of the N-queen problem, and the results outperform previous findings.
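A hedged sketch of the idea for N = 8 (permutation encoding; the "greedy mutation" here tries all pairwise swaps for an individual and keeps the best, which is one illustrative reading of a greedy operator, not the paper's exact definition):

```python
import random

random.seed(7)
N = 8

def conflicts(board):
    # board[i] = row of the queen in column i; permutations rule out
    # row/column clashes, so only diagonal attacks remain
    return sum(1 for i in range(N) for j in range(i + 1, N)
               if abs(board[i] - board[j]) == j - i)

def greedy_mutate(board):
    # greedy mutation: among all single swaps, keep the best improvement
    best = board
    for i in range(N):
        for j in range(i + 1, N):
            b = list(board)
            b[i], b[j] = b[j], b[i]
            if conflicts(b) < conflicts(best):
                best = b
    return best

pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(100):
    pop = sorted((greedy_mutate(p) for p in pop), key=conflicts)[:30]
    if conflicts(pop[0]) == 0:
        break
print(pop[0], conflicts(pop[0]))
```

Each individual effectively performs a steepest-descent walk; the population of random restarts guards against the local optima a single walk can get stuck in.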
One of the important steps in routing is to find a feasible path based on the state information. In order to support real-time multimedia applications, a feasible path that satisfies one or more constraints has to be computed within a very short time. Therefore, this paper presents a genetic algorithm to solve the paths-tree problem subject to cost constraints. The objective of the algorithm is to find the set of edges connecting all nodes such that the sum of the edge costs from the source (root) to each node is minimized; that is, the path from the root to each node must be a minimum-cost path connecting them. The algorithm has been applied to two sample networks, the first with eight nodes and the second with eleven nodes, to illustrate its efficiency.
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit... – IOSRJVSP
This document presents a neural network approach to channel equalization using a multilayer perceptron with a variable learning rate parameter. Specifically, it proposes modifying the backpropagation algorithm to allow the learning rate to adapt at each iteration in order to achieve faster convergence. The equalizer structure is a decision feedback equalizer modeled as a neural network with an input, hidden and output layer. Simulation results show the proposed variable learning rate approach improves bit error rate and convergence speed compared to a standard backpropagation algorithm.
The document presents a method for obtaining the exact probability of error for block codes using soft-decision decoding and the eigenstructure of the code correlation matrix. It shows that under a unitary transformation, the performance evaluation of a block code becomes a one-dimensional problem involving only the dominant eigenvalue and its corresponding eigenvector. Simulation results demonstrate good agreement with the analysis, validating the method for computing the bit error rate of block codes based on the properties of the code correlation matrix.
Application of thermal error in machine tools based on Dynamic Bayesian Network – IJRES Journal
In recent years, growing interest in complex manufacturing on machine tools and in machining accuracy has prompted new efforts in modeling and analyzing machine-tool machining errors. A mathematical model of the relationship between the temperature field and thermal error is therefore the core content: it can improve the precision of part processing and thermal stability, and predict and compensate the machining errors of CNC machine tools. It is critical to obtain the thermal errors of a precision machine tool in real time. In this paper, a pioneering modeling method for thermal error research based on the Dynamic Bayesian Network (DBN) is presented. The dependence of thermal error on the temperature field is clearly described by graph theory, and a fuzzy classification method is proposed to reduce the computational complexity, forming a new method for thermal error modeling of machine tools.
Diagnosis of Faulty Sensors in Antenna Array using Hybrid Differential Evolut... – IJECEIAES
In this work, a differential evolution based compressive sensing technique for the detection of faulty sensors in linear arrays is presented. The algorithm starts by taking linear measurements of the power pattern generated by the array under test. The difference between the collected compressive measurements and the measured healthy-array field pattern is minimized using a hybrid differential evolution (DE). In the proposed method, the slow convergence of the DE-based compressed sensing technique is accelerated with the help of the parallel coordinate descent (PCD) algorithm. The combination of DE with PCD makes the minimization faster and more precise. Simulation results validate the ability to detect faulty sensors from a small number of measurements.
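The DE core of such a hybrid can be sketched as a minimal DE/rand/1/bin loop on a toy objective (the sphere function stands in for the pattern-mismatch cost; the PCD acceleration and all parameter values are outside this sketch):

```python
import random

random.seed(0)
F, CR, NP, D = 0.8, 0.9, 20, 4        # mutation factor, crossover rate, pop, dims

def sphere(x):
    # toy cost standing in for the measurement-mismatch objective
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(NP)]
for _ in range(200):
    for i in range(NP):
        others = [p for j, p in enumerate(pop) if j != i]
        a, b, c = random.sample(others, 3)
        # DE/rand/1 mutation with binomial crossover
        trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                 for d in range(D)]
        if sphere(trial) <= sphere(pop[i]):
            pop[i] = trial             # greedy selection
best = min(pop, key=sphere)
print(round(sphere(best), 8))
```

In the hybrid of the paper, a coordinate-descent step would refine the trial vectors to speed up this basic loop.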
This document summarizes three parallel algorithms for solving geometric problems on coarse-grained multicomputers:
1. The implicit row maxima problem on a totally monotone n x n matrix can be solved in O((n/p)logp) time if n >= p^2.
2. The all-farthest-neighbors problem for a convex n-gon can be solved in O(n/pp) time if n >= 4p^2.
3. The maximum-perimeter triangle inscribed in a convex n-gon can be found in O(nlogn/pp) time if n >= p^2.
In this paper, low-complexity architectures for finding the first two maximum or minimum values, which are of paramount importance in several applications including iterative decoders, are presented. The min-sum processing step produces only two distinct output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architecture structures utilize the minimum number of comparators by exploiting the concept of survivors in the search, resulting in a reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed, and power consumption of digital designs; by using the proposed multiplier we can minimize parameters such as latency, complexity, and power consumption. The decoding algorithms we propose generalize and unify the decoding schemes originally presented for product codes and those of low-density parity-check codes.
Reduced Energy Min-Max Decoding Algorithm for Ldpc Code with Adder Correction... – ijceronline
In this paper, low-complexity architectures for finding the first two maximum or minimum values, which are of paramount importance in several applications including iterative decoders, are presented, together with the proposed adder and LDPC decoder. The min-sum processing step gives only two different output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architecture layouts employ the minimum number of comparators by exploiting the concept of survivors in the search, resulting in a reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed, and power consumption of digital designs; by using the proposed multiplier we can minimize parameters such as latency, complexity, and power consumption.
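The survivor idea can be sketched with the classic tournament scheme (an illustrative software model, not the paper's hardware design): a knockout tree finds the minimum in n-1 comparisons, and the second minimum needs only comparisons among the roughly log2(n) values that lost directly to the winner, giving about n + log2(n) - 2 comparisons instead of roughly 2n - 3.

```python
def first_two_min(vals):
    items = [(v, []) for v in vals]            # (value, direct-losers list)
    while len(items) > 1:
        nxt = []
        for a, b in zip(items[::2], items[1::2]):
            w, l = (a, b) if a[0] <= b[0] else (b, a)
            nxt.append((w[0], w[1] + [l[0]]))  # record who lost to the winner
        if len(items) % 2:                     # odd leftover advances unpaired
            nxt.append(items[-1])
        items = nxt
    winner, losers = items[0]
    # the second minimum must be among the winner's "survivors"
    return winner, min(losers)

print(first_two_min([7, 3, 9, 1, 4, 8, 2, 5]))  # (1, 2)
```

In hardware, the loser list corresponds to a small set of survivor registers, so the second search needs far fewer comparators than a second full pass.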
DCE: A NOVEL DELAY CORRELATION MEASUREMENT FOR TOMOGRAPHY WITH PASSIVE REAL...ijdpsjournal
Tomography is important for network design and routing optimization. Prior approaches require either
precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit
probing resulting in limited scalability. To address the first issue we propose a novel Delay Correlation
Estimation methodology named DCE with no need of synchronization and special cooperation. For the
second issue we develop a passive realization mechanism merely using regular data flow without explicit
bandwidth consumption. Extensive simulations in OMNeT++ are made to evaluate its accuracy where we
show that DCE measurement is highly identical with the true value. Also from test result we find that
mechanism of passive realization is able to achieve both regular data transmission and purpose of
tomography with excellent robustness versus different background traffic and package size.
A simple and fast exact clustering algorithm defined for complexAlexander Decker
This document proposes a new clustering method for complex networks based on prime numbers. The method defines clusters of nodes that have the same number of input/output connections and paths of equal length. It encodes the complete paths of each node with prime numbers to calculate a unique CPS-code for that node. Nodes with the same CPS-code are grouped into the same cluster. The algorithm was tested on networks with up to 500 nodes and showed fast performance, analyzing networks in seconds or minutes. The simple method allows classification of nodes in complex networks for applications across different scientific fields.
A simple and fast exact clustering algorithm defined for complexAlexander Decker
This document proposes a new clustering method for complex networks based on prime numbers. It defines nodes clusters as groups of nodes with the same complete path sets (CPS), where a complete path is a path from a node that includes each other node only once. Each complete path is assigned a prime number as a code based on its length. A node's CPS-code is the product of the codes for all its complete paths. Nodes are clustered together if they have the same or similar CPS-codes. The method was tested on a network with 500 nodes and over 100,000 connections, which was clustered in 80 seconds.
An Optimized Parallel Algorithm for Longest Common Subsequence Using Openmp –...IRJET Journal
This document summarizes research on developing parallel algorithms to optimize solving the longest common subsequence (LCS) problem. LCS is commonly used for sequence comparison in bioinformatics. Traditional sequential dynamic programming algorithms have complexity of O(mn) for sequences of lengths m and n. The document reviews parallel algorithms developed using tools like OpenMP and GPUs like CUDA to reduce computation time. It proposes the authors' own optimized parallel algorithm for multi-core CPUs using OpenMP.
Solving N-queen Problem Using Genetic Algorithm by Advance Mutation Operator IJECEIAES
N-queen problem represents a class of constraint problems. It belongs to set of NP-Hard problems. It is applicable in many areas of science and engineering. In this paper N-queen problem is solved using genetic algorithm. A new genetic algoerithm is proposed which uses greedy mutation operator. This new mutation operator solves the N-queen problem very quickly. The proposed algorithm is applied on some instances of N-queen problem and results outperforms the previous findings.
One of the important steps in routing is to find a feasible path based on the state information. In order to support real-time multimedia applications, the feasible path that satisfies one or more constraints has to be computed within a very short time. Therefore, the paper presents a genetic algorithm to solve the paths tree problem subject to cost constraints. The objective of the algorithm is to find the set of edges connecting all nodes such that the sum of the edge costs from the source (root) to each node is minimized. I.e. the path from the root to each node must be a minimum cost path connecting them. The algorithm has been applied on two sample networks, the first network with eight nodes, and the last one with eleven nodes to illustrate its efficiency.
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...IOSRJVSP
This document presents a neural network approach to channel equalization using a multilayer perceptron with a variable learning rate parameter. Specifically, it proposes modifying the backpropagation algorithm to allow the learning rate to adapt at each iteration in order to achieve faster convergence. The equalizer structure is a decision feedback equalizer modeled as a neural network with an input, hidden and output layer. Simulation results show the proposed variable learning rate approach improves bit error rate and convergence speed compared to a standard backpropagation algorithm.
This summary provides the key points about the document in 3 sentences:
The document presents a method for obtaining the exact probability of error for block codes using soft-decision decoding and the eigenstructure of the code correlation matrix. It shows that under a unitary transformation, the performance evaluation of a block code becomes a one-dimensional problem involving only the dominant eigenvalue and its corresponding eigenvector. Simulation results demonstrate good agreement with the analysis, validating the method for computing the bit error rate of block codes based on the properties of the code correlation matrix.
Application of thermal error in machine tools based on Dynamic Bayesian NetworkIJRES Journal
In recent years, the growing interest toward complex manufacturing on machine tools and the
machining accuracy have solicited new efforts in the area of modeling and analysis of machine tools machining
errors. Therefore, the mathematical model study on the relationship between temperature field and thermal error
is the core content, which can improve the precision of parts processing and the thermal stability, also predict
and compensate machining errors of CNC machine tools. It is critical to obtain the thermal errors of a precision
machine tools in real-time. In this paper, based on Dynamic Bayesian Network (DBN), a pioneering modeling
method applied in thermal error research is presented. The dependence of thermal error and temperature field is
clearly described by graph theory, and the fuzzy classification method is proposed to reduce the computational
complexity, then forming a new method for thermal error modeling of machine tools.
Diagnosis of Faulty Sensors in Antenna Array using Hybrid Differential Evolut...IJECEIAES
In this work, differential evolution based compressive sensing technique for detection of faulty sensors in linear arrays has been presented. This algorithm starts from taking the linear measurements of the power pattern generated by the array under test. The difference between the collected compressive measurements and measured healthy array field pattern is minimized using a hybrid differential evolution (DE). In the proposed method, the slow convergence of DE based compressed sensing technique is accelerated with the help of parallel coordinate decent algorithm (PCD). The combination of DE with PCD makes the minimization faster and precise. Simulation results validate the performance to detect faulty sensors from a small number of measurements.
This document summarizes three parallel algorithms for solving geometric problems on coarse-grained multicomputers:
1. The implicit row maxima problem on a totally monotone n x n matrix can be solved in O((n/p)logp) time if n >= p^2.
2. The all-farthest-neighbors problem for a convex n-gon can be solved in O(n/pp) time if n >= 4p^2.
3. The maximum-perimeter triangle inscribed in a convex n-gon can be found in O(nlogn/pp) time if n >= p^2.
In this paper, low linear architectures for analyzing the first two maximum or minimum values are of paramount importance in several uses, including iterative decoders. The min-sum giving out step is to that it produces only two diverse output magnitude values irrespective of the number of incoming bit-to check communication. These new micro-architecture structures would utilize the minimum number of comparators by exploiting the concept of survivors in the search. These would result in reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in finding the overall area, speed and power consumption of digital designs. By using the multiplier we can minimize the parameters like latency, complexity and power consumption. The decoding algorithms we propose generalize and unify the decoding schemes originally presented the product codes and those of low-density parity-check codes.
Reduced Energy Min-Max Decoding Algorithm for Ldpc Code with Adder Correction...ijceronline
In this paper, high linear architectures for analysing the first two maximum or minimum values are of paramount importance in several uses, including iterative decoders. We proposed the adder and LDPC. The min-sum processing step that it gives only two different output magnitude values irrespective of the number of incoming bit-to check messages. These new micro-architecture layouts would employ the minimum number of comparators by exploiting the concept of survivors in the search. These would result in reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in finding the overall area, speed and power consumption of digital designs. By using the multiplier we can minimize the parameters like latency, complexity and power consumption.
Similar to Truncated Boolean Matrices for DNA Computation (20)
A high-Speed Communication System is based on the Design of a Bi-NoC Router, ...DharmaBanothu
The Network on Chip (NoC) has emerged as an effective
solution for intercommunication infrastructure within System on
Chip (SoC) designs, overcoming the limitations of traditional
methods that face significant bottlenecks. However, the complexity
of NoC design presents numerous challenges related to
performance metrics such as scalability, latency, power
consumption, and signal integrity. This project addresses the
issues within the router's memory unit and proposes an enhanced
memory structure. To achieve efficient data transfer, FIFO buffers
are implemented in distributed RAM and virtual channels for
FPGA-based NoC. The project introduces advanced FIFO-based
memory units within the NoC router, assessing their performance
in a Bi-directional NoC (Bi-NoC) configuration. The primary
objective is to reduce the router's workload while enhancing the
FIFO internal structure. To further improve data transfer speed,
a Bi-NoC with a self-configurable intercommunication channel is
suggested. Simulation and synthesis results demonstrate
guaranteed throughput, predictable latency, and equitable
network access, showing significant improvement over previous
designs
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.4, No.5, October 2014
DOI : 10.5121/ijcsea.2014.4501
TRUNCATED BOOLEAN MATRICES FOR DNA COMPUTATION

Nordiana Rajaee1, Awang Ahmad Sallehin Awang Hussaini2 and Sharifah Masniah Wan Masra3

1 Faculty of Engineering, Universiti Malaysia Sarawak
2 Faculty of Resource Science and Technology, Universiti Malaysia Sarawak
3 Faculty of Engineering, Universiti Malaysia Sarawak
ABSTRACT
Although DNA computing has emerged as a new computing paradigm with massive parallel computing capabilities, the large number of DNA strands required for larger computational problems remains a stumbling block to its development as a practical computing technology. In this paper, we propose a modification to implement a physical experimentation of the multiplication of two Boolean matrices with DNA computing. The Truncated Matrices reduce the number and lengths of the DNA sequences utilized to compute the problem with DNA computing.
KEYWORDS
DNA computation, Boolean Matrices, Bio-molecular tools, Parallel Overlap Assembly
1. INTRODUCTION
When Leonard M. Adleman first proposed the use of DNA for computation in solving the Hamiltonian Path Problem (HPP) in 1994, the computation was implemented as an in-vitro experiment with designed DNA oligonucleotide sequences representing the vertices and the edges. The solution to the computation was then derived from the chemical reactions via bio-molecular tools such as the hybridization-ligation method, polymerase chain reaction and cutting by restriction enzymes. The output was then visualized in a gel electrophoresis process. The computation of the seven-node HPP took seven days to complete [1]. Since then, many proposals have been presented to compute problems with DNA computation, but most of them still rely on L. M. Adleman's architecture to carry out the computation. Although the massive parallel computing capabilities of DNA computing promise faster and denser computation, several drawbacks remain which prevent it from becoming a practical computing material. One reason is the exponential requirement of DNA in computing larger computational problems [2].

The current strategy in DNA computing is to embed the computation problem in the DNA oligonucleotide sequences and derive the solution by eliminating incorrect DNA via selective processes. For the seven-node HPP, the problem was encoded in sequences of 20 oligonucleotides. For a 23-node HPP, the computation would require 1 kg of DNA, and for a 70-node HPP, the computation would require 10^25 kg of DNA to represent all the nodes [3]. Other problems such as maximal clique problems, vertex-cover problems and set packing problems all show a similarly exponential requirement of DNA and increased computation time. LaBean et al. (2000) proposed a 1.89^n volume, O(n^2 + m^2) time molecular algorithm for the 3-coloring problem
and a 1.51^n volume, O(n^2 m^2) time molecular algorithm for the independent set problem, where n and m are, respectively, the number of vertices and the number of edges in the problems solved [4]. Fu (1997) presented a polynomial time algorithm with 1.497^n volume for the 3-SAT problem, a polynomial time algorithm with 1.345^n volume for the 3-coloring problem and a polynomial time algorithm with 1.229^n volume for the independent set problem [5]. Bunow goes on to estimate that an extended combinatorial database would require nearly 10^70 nucleotides (by comparison, the universe is estimated to contain roughly 10^80 subatomic particles) [7].
The original algorithm to solve a Boolean matrix multiplication with DNA requires the generation of initial vertices, intermediate vertices, terminal vertices and directed edges to link the initial-intermediate / intermediate-terminal vertices. In our previous works on solving Boolean matrices with DNA computing, the quantity of DNA oligonucleotides to encode the problem is proportional to the number of vertices and edges existing in the graph representing the matrix multiplication. The number of primers to represent the elements in the product matrix is derived from its total number of row and column indicators, whereas the total number of tubes to represent each element in the product matrix is derived from the total number of primer combinations [7-9]. For a 2 x 2 product matrix, the total number of primers required is 4 and the total number of tubes is also 4. In other words, for an (m x k) • (k x n) matrix multiplication problem, m + n primers and m x n tubes are required to represent the problem.
Given any matrix multiplication problem with an increasing number N of intermediate matrices, the number of intermediate vertices for the problem also increases (though not necessarily the number of test tubes representing the product matrix). For example, consider matrix problems with pth power, a (10 x 10)^2 and a (10 x 10)^10 matrix multiplication. The number of test tubes representing both problems is 100, but the number of intermediate vertices is 10 (for p = 2) and 90 (for p = 10). The number of primers and tubes also increases drastically for a larger N x N computation. For a 10 x 10 product matrix, the total number of primers required is 20 and the total number of tubes to represent all elements in the product matrix is 100, as shown in Figure 1. As the size of the problem increases, the volume of DNA increases exponentially and the experimental work becomes too tedious and impractical to be considered a viable technology.
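The counting rules above can be sketched in a few lines; the helper name resource_counts is hypothetical and only illustrates the primer, tube and intermediate-vertex counts stated in the text for an (n x n) matrix raised to the pth power:

```python
def resource_counts(n: int, p: int):
    """Primer, tube and intermediate-vertex counts for an (n x n)^p
    Boolean matrix multiplication: primers = row + column indicators of the
    product matrix, tubes = one per product element, and each of the p - 1
    inner matrix boundaries contributes n intermediate vertices."""
    primers = n + n               # m + n row/column indicators (m = n here)
    tubes = n * n                 # m x n elements in the product matrix
    intermediates = n * (p - 1)   # vertices shared between consecutive matrices
    return primers, tubes, intermediates

print(resource_counts(10, 2))   # (20, 100, 10)
print(resource_counts(10, 10))  # (20, 100, 90)
```

The tube count stays fixed at 100 regardless of p, which is exactly why the intermediate-vertex growth, not the tube count, dominates the experimental workload.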
[Figure: a chain of ten 10 x 10 Boolean matrices multiplied together (p = 10); the individual 0/1 entries are omitted]

Figure 1. (10 x 10)^10 matrix multiplication
2. TRUNCATED BOOLEAN MATRICES
In the in-vitro implementation of Boolean matrix multiplication problems, DNA oligonucleotide strands are synthesized to represent the initial vertices, intermediate vertices, terminal vertices and edges of the problem. All generated single-stranded sequences are quoted in 5'-3' order and length is denoted in mer, in which one mer represents one nucleotide. The output of the computation is analysed quantitatively based on the direct proportionality of the DNA sequence strands.
Let us assume a Boolean matrix multiplication problem as shown in Figure 2.
      c d          e f          g h          i j            i j
  a [ 0 1 ]    c [ 1 0 ]    e [ 1 0 ]    g [ 0 1 ]      a [ 0 1 ]
  b [ 1 0 ] x  d [ 1 0 ] x  f [ 0 1 ] x  h [ 0 1 ]  =   b [ 0 1 ]
     1st          2nd          3rd          4th

In the corresponding directed graph, a and b are the initial vertices, c to h are the intermediate vertices, i and j are the terminal vertices, and each element of value 1 contributes a directed edge.

Figure 2. Four Boolean Matrix Multiplication and its graph representation
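To check the product matrix in Figure 2 conventionally, a minimal Boolean matrix multiplication (AND for products, OR for sums) reproduces the result:

```python
def bool_matmul(A, B):
    """Boolean matrix product: entry (i, j) is the OR over k of A[i][k] AND B[k][j]."""
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

# The four matrices of Figure 2 (row/column labels a,b / c,d / e,f / g,h / i,j).
M1 = [[0, 1], [1, 0]]  # rows a,b; columns c,d
M2 = [[1, 0], [1, 0]]  # rows c,d; columns e,f
M3 = [[1, 0], [0, 1]]  # rows e,f; columns g,h
M4 = [[0, 1], [0, 1]]  # rows g,h; columns i,j

P = M1
for M in (M2, M3, M4):
    P = bool_matmul(P, M)
print(P)  # [[0, 1], [0, 1]]: both a and b reach j, neither reaches i
```

The result matches the product matrix in Figure 2, since the only complete paths in the graph are a-d-e-g-j and b-c-e-g-j.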
Using the original algorithm, the problem is represented by a directed graph G where the total number of DNA strands generated for the problem is 18, as shown in Table 1.

Table 1. Number of DNA sequence strands for vertices and edges

  DNA strand sequences for vertices and edges                  Number of DNA strands
  Initial vertices = {Va, Vb}                                  2
  Intermediate vertices = {Vc, Vd, Ve, Vf, Vg, Vh}             6
  Terminal vertices = {Vi, Vj}                                 2
  Directed edges = {Ead, Ebc, Ece, Ede, Eeg, Efh, Egj, Ehj}    8
  Total                                                        18
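The Table 1 totals can be recomputed directly from the four matrices: every distinct row/column label becomes a vertex strand, and every 1-entry becomes a directed-edge strand. A short sketch (variable names are illustrative, not from the paper):

```python
# Each matrix is (row labels, column labels, entries); 1-entries become edges.
matrices = [
    ("ab", "cd", [[0, 1], [1, 0]]),
    ("cd", "ef", [[1, 0], [1, 0]]),
    ("ef", "gh", [[1, 0], [0, 1]]),
    ("gh", "ij", [[0, 1], [0, 1]]),
]

# Vertices: every row/column label that appears anywhere in the chain.
vertices = {v for rows, cols, _ in matrices for v in rows + cols}

# Edges: one strand per element of value 1, named by its row and column labels.
edges = {r + c
         for rows, cols, m in matrices
         for i, r in enumerate(rows)
         for j, c in enumerate(cols)
         if m[i][j]}

print(sorted(edges))               # ['ad', 'bc', 'ce', 'de', 'eg', 'fh', 'gj', 'hj']
print(len(vertices) + len(edges))  # 10 + 8 = 18 strands, matching Table 1
```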
In the case of square matrices, whereby the number of rows and columns of all matrices in the multiplication are equal, we propose a modification to reduce the generation of vertices with Truncated Matrices. The main idea of truncating matrices is to eliminate or reduce the generation of vertices by replacing them directly with directed edges. Originally, the directed edge for an element of value 1 in the matrices is constructed from partial complementary strands from a previous vertex to a next vertex. With Truncated Matrices, edges are constructed directly from the combination of the sequences for rows and columns containing an element value of 1.
Thus, the problem in Figure 2 is modelled such that each row and column indicator for all the
matrices refers to a library containing pre-defined sequences for a vertex label. Table 2 shows the
library of generated DNA strand sequences for the vertex labels.
Table 2 DNA Sequence for Vertex Labels

Vertex   Sequence     Complementary Sequence
a        cgatggcgtg   gctaccgcac
b        cccgagcgtt   gggctcgcaa
c        ggacagccct   cctgtcggga
d        tgccgtagcg   acggcatcgc
e        caacggtggc   gttgccaccg
f        ctctcaggcg   gagagtccgc
g        gctcctgggg   cgaggacccc
h        gagggcgtcg   ctcccgcagc
i        cgatggcgtg   gctaccgcac
j        ggggcggaat   ccccgcctta
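The edge-construction rule can be sketched in code. The sequences below are taken from Table 2; the function names and the Python rendering are illustrative only, not part of the original experiment. Edges for odd matrices concatenate the row and column label sequences directly, while edges for even matrices concatenate their base-wise complements.

```python
# Sketch of Truncated Matrix edge construction from the Table 2 library.
# Complements in Table 2 are base-wise Watson-Crick complements (no reversal).
COMP = str.maketrans("acgt", "tgca")

def complement(seq):
    # Base-wise complement, matching the right-hand column of Table 2.
    return seq.translate(COMP)

LIBRARY = {
    "a": "cgatggcgtg", "b": "cccgagcgtt", "c": "ggacagccct",
    "d": "tgccgtagcg", "e": "caacggtggc", "f": "ctctcaggcg",
    "g": "gctcctgggg", "h": "gagggcgtcg", "i": "cgatggcgtg",
    "j": "ggggcggaat",
}

def edge(row, col, odd):
    """Build an edge strand for a matrix element of value 1.

    Odd matrices use the label sequences directly; even matrices use
    their complements, so odd and even strands can hybridize.
    """
    r, c = LIBRARY[row], LIBRARY[col]
    return r + c if odd else complement(r) + complement(c)

print(edge("a", "d", odd=True))   # cgatggcgtgtgccgtagcg (edge ad, Table 3)
print(edge("c", "e", odd=False))  # cctgtcgggagttgccaccg (edge ce, Table 4)
```

Each edge is a 20-mer built from two 10-mer vertex labels, which is what makes the overlap assembly in the next section possible.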
The edges constructed to represent the problem are shown in Table 3 and Table 4 for the odd
and even matrices in the problem, respectively. SeqLen represents the sequence length of the
strands, GC% represents the GC content of the sequence strands, which may influence their
stability, and Tm refers to the melting temperature.
For matrix = Odd:
Generate edge sequences for all elements of value 1 from corresponding row and column
sequences. From the 1st matrix, elements of value 1 exist for intersections of row a – column d
and row b – column c. From the 3rd matrix, elements of value 1 exist for intersections of row e –
column g and row f – column h. Therefore, the generated edges for the odd matrices are as shown
in Table 3:
Table 3 DNA Sequence for Edges (Matrix: Odd)
Edges Sequence SeqLen GC% Tm
ad cgatggcgtgtgccgtagcg 20 0.70 74.5
bc cccgagcgttggacagccct 20 0.70 73.3
eg caacggtggcgctcctgggg 20 0.75 76.5
fh ctctcaggcggagggcgtcg 20 0.75 74.2
For matrix = Even:
Generate complementary edge sequences for all elements of value 1 from corresponding row and
column sequences. From the 2nd matrix, elements of value 1 exist for intersections of row c –
column e and row d – column e. From the 4th matrix, elements of value 1 exist for intersections
of row g – column j and row h – column j. Therefore, the generated edges for the even matrices
are as shown in Table 4:
Table 4 DNA Sequence for Edges (Matrix: Even)
Edges Sequence SeqLen GC% Tm
ce cctgtcgggagttgccaccg 20 0.70 72.8
de acggcatcgcgttgccaccg 20 0.70 77.8
gj cgaggaccccccccgcctta 20 0.75 75.2
hj ctcccgcagcccccgcctta 20 0.75 76.4
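The SeqLen and GC% columns of Tables 3 and 4 can be reproduced directly from the sequences. The Tm values in the tables presumably come from a nearest-neighbour model in DNA design software; the sketch below only shows a rough Wallace-rule estimate as a point of comparison, which is an assumed substitute, not the paper's method.

```python
# Reproducing SeqLen and GC% for an edge strand from Table 3.
def gc_fraction(seq):
    """Fraction of G and C bases in the strand (the GC% column)."""
    return (seq.count("g") + seq.count("c")) / len(seq)

def wallace_tm(seq):
    """Wallace rule: Tm ~ 2(A+T) + 4(G+C), a coarse estimate for short oligos."""
    gc = seq.count("g") + seq.count("c")
    at = len(seq) - gc
    return 2 * at + 4 * gc

ad = "cgatggcgtgtgccgtagcg"   # edge ad from Table 3
print(len(ad), gc_fraction(ad), wallace_tm(ad))   # 20 0.7 68
```

The Wallace-rule value (68) differs from the tabulated 74.5 degrees, as expected for a rule-of-thumb formula versus a nearest-neighbour calculation.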
Figure 3 shows the constructed edges for the Truncated Matrix: ad and bc (1st matrix), ce
and de (2nd), eg and fh (3rd), and gj and hj (4th), which overlap to form the candidate
paths a-d-e-g-j and b-c-e-g-j.

Figure 3 Constructed Edges for Truncated Matrix
3. EXPERIMENTAL RESULTS
Generated single-stranded DNA sequences representing the constructed edges are poured into a
single test tube T0 to generate an initial pool containing all possible solutions. The
Parallel Overlap Assembly (POA) method is utilized to allow the individual DNA strands to
assemble and hybridize to their complementary strands. The strands are extended with each
denaturing, annealing and elongation cycle. At the end of the cycles, a formed path
representing a solution of the problem is constructed and can be extracted as the output of
the computation. The formed path consists of sequences containing the initial vertex and
terminal vertex sequences. We then allocate test tubes T1, T2, T3 and T4 to represent the
intersections of row a – column i, row a – column j, row b – column i and row b – column j
in the product matrix, respectively. The contents of test tube T0 after the POA process are
distributed into each of the test tubes T1 – T4. Gel electrophoresis, a technique which
separates DNA by size or sequence length, is used to visualize the output of the
computation; the lengths of the formed paths in test tubes T1 – T4 can thus be determined.
In the captured image of the gel electrophoresis process in Figure 4, the leftmost lane is
the marker for sequence lengths, in base pair (b.p) units, of the constructed double-stranded
sequences of formed paths. From the gel electrophoresis, test tubes T2 and T4 yield 50 b.p
bands, which are consistent with constructed paths of elements with value 1 in the product
matrix, representing the formed paths from vertices a – j and b – j. In test tubes T1 and
T3, the highlighted bands show 40 b.p and 30 b.p strands, indicating incomplete paths.
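The assembly outcome can be illustrated with a small graph simulation. This is a hypothetical abstraction of POA, not the wet-lab process: each edge strand is treated as a directed arc, and a maximal path corresponds to a fully extended duplex at 10 b.p per vertex label spanned.

```python
# Hypothetical simulation of which paths can assemble from the edge pool.
EDGES = {("a", "d"), ("b", "c"),   # odd matrices, Table 3
         ("c", "e"), ("d", "e"),   # even matrices, Table 4
         ("e", "g"), ("f", "h"),
         ("g", "j"), ("h", "j")}

def assemblies(start):
    """Return every maximal path that can assemble from `start`."""
    results, stack = [], [[start]]
    while stack:
        path = stack.pop()
        nxt = [v for (u, v) in EDGES if u == path[-1] and v not in path]
        if nxt:
            stack.extend(path + [v] for v in nxt)
        else:
            results.append(path)
    return results

for start in ("a", "b"):
    for p in assemblies(start):
        print("-".join(p), ":", 10 * len(p), "b.p")
# Prints a-d-e-g-j : 50 b.p and b-c-e-g-j : 50 b.p; no assembly reaches
# vertex i, consistent with the absence of 50 b.p bands in T1 and T3.
```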
Figure 4 Results from gel electrophoresis process (lanes, left to right: Marker, T4, T3,
T2, T1; marker bands at 50, 40 and 30 b.p)
4. CONCLUSIONS
In this work, we propose Truncated Matrices to reduce material consumption in DNA
computing. Using the original algorithm, assuming all 18 DNA sequence strands in Table 1 are
set as 20 mer length sequences, the output of the computation will be 100 b.p long, as the
output length is directly proportional to the sequence length from the initial to the
terminal vertex. However, using 20 mer length strands in Truncated Matrices generates only a
50 b.p formed-path solution. The number of DNA strands for the computation is also reduced
to 8, as shown in Tables 3 and 4. The major constraint to successful implementation of the
experiments is the instability of the generated DNA sequence strands: while simulated DNA
sequences theoretically comply with the expected temperatures and other parameters, problems
arise when actual DNA sequence strands differ in their compositions.
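Under the 20-mer assumption above, the savings can be checked arithmetically; the sketch below simply restates the counts from Tables 1, 3 and 4 (vertex labels in the Truncated Matrix scheme are 10-mers, half the original 20-mer vertices).

```python
# Back-of-envelope check of the material savings claimed above.
original_strands = 2 + 6 + 2 + 8   # Table 1: initial + intermediate + terminal vertices + edges
truncated_strands = 4 + 4          # Tables 3 and 4: odd + even edge strands
original_output_bp = 5 * 20        # formed path spans 5 vertices, 20 mer each
truncated_output_bp = 5 * 10       # same 5 vertices, 10-mer labels each
print(original_strands, truncated_strands)       # 18 8
print(original_output_bp, truncated_output_bp)   # 100 50
```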
ACKNOWLEDGEMENTS
The authors would like to thank Universiti Malaysia Sarawak for the support.
Authors
Nordiana Rajaee received her BEng (Hons) in Electronic and Information Engineering
from Kyushu Institute of Technology, Fukuoka, Japan in 1999, her MSc in Microelectronics
from the University of Newcastle upon Tyne, United Kingdom in 2003 and her PhD in DNA
Computing from Meiji University, Japan in 2011.
Awang Ahmad Sallehin Awang Husaini received his BSc (Hons.) in Biochemistry and
Microbiology from Universiti Putra Malaysia (UPM), Selangor, Malaysia in 1996 and his PhD
in Molecular Biology (Enzymology) from the University of Manchester, Manchester, United
Kingdom in 2001.
Sharifah Masniah Wan Masra received her BEng (Hons) in Telecommunication from
Universiti Malaya, Kuala Lumpur, Malaysia in 2000 and her MSc in Electronics from Cardiff
University, Wales, UK in 2003.