Motif discovery in DNA sequences is one of the most important problems in bioinformatics, so algorithms that solve it accurately and quickly have long been a research goal. This study modifies the random projection algorithm to run on R high-performance computing (the R package pbdMPI). Several steps are needed to achieve this objective, i.e., preprocessing the data, splitting the data into batches, modifying and implementing random projection with the pbdMPI package, and aggregating the results. To validate the proposed approach, experiments were conducted on several benchmark datasets with a sensitivity analysis over the number of cores and batches. The results show that the computational cost can be reduced: with 6 cores, computation is roughly 34 times faster than in standalone mode. Thus, the proposed approach can be used for motif discovery effectively and efficiently.
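The core random projection step described above can be sketched serially (the paper distributes the work with pbdMPI in R; this Python sketch is only illustrative, and the function name, parameters, and toy sequences are hypothetical):

```python
import random
from collections import defaultdict

def random_projection(sequences, l=8, k=4, seed=0):
    """One iteration of random projection: hash every l-mer by the
    letters at k randomly chosen positions; crowded buckets are
    candidate motif instances."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(l), k))
    buckets = defaultdict(list)
    for s_idx, seq in enumerate(sequences):
        for i in range(len(seq) - l + 1):
            lmer = seq[i:i + l]
            key = "".join(lmer[p] for p in positions)
            buckets[key].append((s_idx, i))  # remember where each l-mer came from
    return positions, buckets

seqs = ["ACGTACGTGG", "TTACGTACGA"]
_, buckets = random_projection(seqs, l=6, k=3)
best = max(buckets.values(), key=len)  # most crowded bucket = motif candidate
```

In the distributed version, each batch of sequences would be hashed independently on its own core and the buckets aggregated afterwards, which is why the approach parallelizes well.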
Comparison of search algorithms in Javanese-Indonesian dictionary application - TELKOMNIKA JOURNAL
This study compares the performance of the Boyer-Moore, Knuth-Morris-Pratt, and Horspool algorithms for looking up word meanings in a Javanese-Indonesian dictionary application, in terms of accuracy and processing time. Performance testing is used to evaluate the algorithm implementations in the application. The results show that the Boyer-Moore and Knuth-Morris-Pratt algorithms have an accuracy rate of 100%, while the Horspool algorithm reaches 85.3%. For processing time, the Knuth-Morris-Pratt algorithm has the best average speed at 25 ms, Horspool 39.9 ms, and Boyer-Moore 44.2 ms. In the complexity test, the Boyer-Moore algorithm has an overall count of 26n², while Knuth-Morris-Pratt and Horspool each have 20n².
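Of the three algorithms compared above, Horspool is the simplest to state; a minimal sketch (function name and example strings are illustrative, not from the study):

```python
def horspool(text, pattern):
    """Boyer-Moore-Horspool search: on a mismatch, shift by the
    distance of the window's last text character from the pattern
    end (or by the full pattern length if it never occurs)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # bad-character table built from all but the last pattern character
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1

horspool("aksara jawa", "jawa")  # → 7
```

Boyer-Moore adds a good-suffix rule on top of this, and Knuth-Morris-Pratt instead precomputes a failure function over the pattern; the shift-table idea above is what makes the Horspool preprocessing so cheap.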
Tail-biting convolutional codes (TBCC) have been extensively applied in communication systems. The method replaces the fixed tail with tail-biting data, which is needed for effective decoding computation. Unfortunately, it makes the decoding computation more complex. Hence, several algorithms have been developed to overcome this issue, most of them iterative with an uncertain number of iterations. In this paper, we propose a VLSI architecture that implements our reversed-trellis TBCC (RT-TBCC) algorithm. The algorithm modifies the direct-terminating maximum-likelihood (ML) decoding process to achieve a better correction rate, offering an alternative tail-biting convolutional code decoder with fewer computations than the existing solution. The proposed architecture has been evaluated for the LTE standard and significantly reduces computational time and resources compared to the existing direct-terminating ML decoder. For evaluation of functionality and bit error rate (BER), several simulations, a System-on-Chip (SoC) implementation, and FPGA synthesis are performed.
This document compares the performance of two neural network architectures, multi-layer perceptron (MLP) and radial basis function (RBF) networks, on a face recognition system. It trains MLP networks using different variants of the backpropagation algorithm and compares the results to RBF networks. The document finds that RBF networks provide better generalization performance compared to backpropagation algorithms and have faster training times, making them more suitable for face recognition.
The document discusses network layer protocols and IPv4 specifically. It provides three key points:
1) IPv4 is the main network layer protocol in the Internet that provides "best effort" delivery of packets called datagrams from source to destination through various networks in a connectionless manner.
2) IPv4 packets, or datagrams, contain a header with fields that provide routing information and a payload section for data. The header fields include source and destination addresses, identification information, flags for fragmentation, and more.
3) IPv4 supports fragmentation of large datagrams into smaller pieces to accommodate the size constraints of different networks. The fragmentation process and header fields related to fragmentation are described.
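The fragmentation rule in point 3 can be illustrated with a small calculation. This is a hedged sketch with simplified field names, not a packet implementation; the key facts are that fragment offsets are expressed in 8-byte units (so every non-final fragment carries a multiple of 8 payload bytes) and that the MF (more fragments) flag is set on all but the last fragment:

```python
def fragment(payload_len, mtu, header_len=20):
    """Split an IPv4 payload so each fragment (header + data) fits the MTU.
    Non-final fragments carry a multiple of 8 payload bytes because the
    offset field counts 8-byte units; MF marks all but the last fragment."""
    max_data = (mtu - header_len) // 8 * 8   # round down to 8-byte units
    frags, offset = [], 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        mf = offset + data < payload_len      # more fragments to come?
        frags.append({"offset_units": offset // 8, "length": data, "MF": mf})
        offset += data
    return frags

frags = fragment(4000, 1500)  # → three fragments of 1480, 1480, 1040 bytes
```

With a 1500-byte Ethernet MTU and a 20-byte header, each fragment carries at most 1480 data bytes, so the second fragment starts at offset 1480 bytes = 185 eight-byte units.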
A Novel Framework for Short Tandem Repeats (STRs) Using Parallel String Matching - IJERA Editor
This document presents a novel framework for identifying Short Tandem Repeats (STRs) using parallel string matching. It begins with background on STRs and challenges with existing sequential algorithms. It then describes a two-phase methodology - first applying a basic improved right prefix algorithm sequentially, then applying it in parallel using multi-threading on multicore processors. Results show the basic algorithm outperforms Boyer-Moore, Knuth-Morris-Pratt and brute force algorithms sequentially. When applied in parallel, processing time is reduced from 80ms sequentially to 40ms in parallel on multicore systems. The parallel STR identification framework allows efficient searching of repeats in large genomes.
1) The document presents a dependency-to-string translation model for a Chinese-Japanese statistical machine translation system.
2) The system achieves a BLEU score of 34.87 and a RIBES score of 79.25 on the Chinese-Japanese translation task, outperforming a baseline PBSMT system.
3) The dependency-to-string model uses two types of translation rules - HDR rules with generalized dependency fragments on the source side and strings on the target side, and H rules with single words on the source side.
Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression St... - IJCSEA Journal
Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data are much faster and more efficient than for the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested the Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than Huffman coding, while the performance of Huffman coding is higher than arithmetic coding. In addition, the implementation of Huffman coding is much easier than arithmetic coding.
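As an illustration of the entropy-coding phase discussed above, a minimal Huffman code-table builder (a sketch, not the paper's implementation; the simplicity of this construction, versus the interval arithmetic that arithmetic coding needs, is one reason Huffman is considered easier to implement):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table: repeatedly merge the two least
    frequent subtrees; rarer symbols end up with longer codes."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap items: [weight, tiebreak, {symbol: partial code}]
    heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for sym in lo[2]:
            lo[2][sym] = "0" + lo[2][sym]   # left branch
        for sym in hi[2]:
            hi[2][sym] = "1" + hi[2][sym]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0], tick, {**lo[2], **hi[2]}])
        tick += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")  # 'a' is most frequent, gets the shortest code
```

The resulting table is prefix-free, so the coded bitstream can be decoded unambiguously without separators.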
- The PR approach consistently outperforms other approaches like RMA and MT
- Using PR, a speedup of 2.17x was achieved on 1008 processors for NWChem over other MPI designs
- Evaluation was performed on communication benchmarks, kernels, and full NWChem application to compare design approaches
Limited Data Speaker Verification: Fusion of Features - IJECEIAES
The present work demonstrates an experimental evaluation of speaker verification for different speech feature extraction techniques under the constraint of limited data (less than 15 seconds). State-of-the-art speaker verification techniques provide good performance given sufficient data (greater than 1 minute); it is challenging to develop techniques that perform well for speaker verification under limited data conditions. In this work, different features such as Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Cepstral Coefficients (LPCC), Delta (Δ), Delta-Delta (ΔΔ), Linear Prediction Residual (LPR), and Linear Prediction Residual Phase (LPRP) are considered. The performance of individual features is studied, and for better verification performance a combination of these features is attempted. A comparative study is made between the Gaussian mixture model (GMM) and the GMM-universal background model (GMM-UBM) through experimental evaluation. The experiments are conducted using the NIST-2003 database. The experimental results show that the combination of features provides better performance than the individual features. Further, GMM-UBM modeling gives a reduced equal error rate (EER) compared to GMM.
Performance Efficient DNA Sequence Detection algo - Rahul Shirude
The document proposes a parallel string matching algorithm using CUDA to perform DNA sequence detection on a GPU. It aims to optimize GPU core utilization and achieve high performance gains over sequential approaches. The proposed algorithm finds correct pattern matches in DNA database sequences. Experimental results show the parallel GPU approach has much higher performance than sequential CPU methods.
The document proposes reducing the computational complexity of message passing algorithms like belief propagation (BP) and concave-convex procedure (CCCP) for multiuser detection in CDMA systems. It does this by changing the factor graph structure used to represent the detection problem from a fully connected graph (Factor Graph I) to a sparsely connected graph (Factor Graph II). Simulation results show the proposed CCCP detector for the new factor graph achieves near optimal performance with lower complexity than existing approaches.
Coding and Indexing Shape Feature using Golomb-Rice Coding for CBIR Applications - IJERDJOURNAL
ABSTRACT: Content-based image retrieval (CBIR) applications use a large number of low-level image features for retrieving relevant images. Among these, the shape-based feature is important and can be represented as a histogram. Low-level features require considerable storage space, so the content has to be coded and indexed to reduce the storage requirement, feature processing time, and access time. In this work, an indexing and encoding mechanism is proposed to access the feature efficiently and represent it in less space. The bin values are used as a reference for reducing the dimension of the histogram. The non-zero values are coded using Golomb-Rice coding and represented as a compact histogram. Because the compact histogram varies in size, a similarity measure is proposed to handle this issue. Various well-known benchmark datasets are used in experiments to evaluate the performance. It is found that the truncated and encoded histogram performs well and achieves high retrieval precision.
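The Golomb-Rice coding step mentioned above can be sketched in a few lines (the choice of parameter k and the bit-string representation are illustrative, not the paper's encoder):

```python
def rice_encode(n, k):
    """Golomb-Rice code with divisor 2**k: the quotient in unary
    (q ones plus a terminating zero) followed by k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, "0{}b".format(k)) if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                              # unary quotient
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0    # k remainder bits
    return (q << k) + r

rice_encode(9, 2)  # → "11001": q=2 ones, stop bit, remainder 01
```

Small bin values get short codewords, which is why the coding suits the sparse, non-zero histogram entries described above.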
NEURAL SYMBOLIC ARABIC PARAPHRASING WITH AUTOMATIC EVALUATION - cscpconf
We present symbolic and neural approaches for Arabic paraphrasing that yield high paraphrasing accuracy. This is the first work on sentence-level paraphrase generation for Arabic and the first to use neural models to generate paraphrased sentences for Arabic. We present and compare several methods for paraphrasing and for obtaining monolingual parallel data. We share a large-coverage phrase dictionary for Arabic and contribute a large parallel monolingual corpus that can be used in developing new seq-to-seq models for paraphrasing; this is the first large monolingual corpus of its kind for Arabic. We also present first results in Arabic paraphrasing using seq-to-seq neural methods. Additionally, we propose a novel automatic evaluation metric for paraphrasing that correlates highly with human judgement.
An Offline Hybrid IGP/MPLS Traffic Engineering Approach under LSP Constraints - EM Legacy
This document proposes a novel hybrid IGP/MPLS traffic engineering method based on genetic algorithms to handle long- or medium-term traffic variations. The method treats as constraints the maximum number of hops an LSP may take and the number of LSPs applied solely to improve routing. Results comparing this hybrid approach to pure IGP routing and to full-mesh MPLS, with and without flow splitting, on the German scientific network and other networks are presented.
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio... - RSIS International
In this paper, we have designed VLSI hardware for a novel RS decoding algorithm suitable for multi-Gb/s communication systems. We show that the performance benefit of the algorithm is truly realized when implemented in hardware, avoiding the extra processing time of the fetch-decode-execute cycle of traditional microprocessor-based computing systems. The algorithm's lower time complexity, combined with an application-specific hardware implementation, makes it suitable for high-speed real-time systems with hard timing constraints. The design is implemented as digital hardware using VHDL.
This document presents a new model called EQUIRS (Explicitly Query Understanding Information Retrieval System), based on Hidden Markov Models (HMM), to improve natural language processing for text-query information retrieval. The proposed EQUIRS system is compared to previous fuzzy clustering methods. Experimental results on a dataset of 900 files across 5 categories show that EQUIRS has higher accuracy than fuzzy clustering, as measured by precision, recall, and F-measure, though it has longer training and searching times. The document concludes that EQUIRS is an effective HMM-based approach for information retrieval.
FPGA Implementation of Viterbi Decoder using Hybrid Trace Back and Register E... - ijsrd.com
Error correction is an integral part of any communication system, and for this purpose convolutional codes are widely used as forward error correction codes. For decoding convolutional codes, a Viterbi decoder is employed at the receiver end. The parameters of the Viterbi algorithm can be changed to suit a specific application. High speed and small area are two important design parameters in today's wireless technology. In this paper, a high-speed feed-forward Viterbi decoder has been designed using a hybrid trace-back and register-exchange architecture and the embedded BRAM of the target FPGA. The proposed Viterbi decoder has been designed with Matlab, simulated with the Xilinx ISE 8.1i tool, synthesized with the Xilinx Synthesis Tool (XST), and implemented on a Xilinx Virtex-4 based xc4vlx15 FPGA device. The results show that the proposed design can operate at an estimated frequency of 107.7 MHz while consuming considerably fewer resources on the target device, providing a cost-effective solution for wireless applications.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Research of 64-bits RISC Dual-core Microprocessor with High Performance and L... - TELKOMNIKA JOURNAL
A 64-bit RISC dual-core microprocessor with high performance and low power consumption is presented in this paper. The processor has a symmetric architecture with two cores, each with a three-stage pipeline, a 64-bit data path, and a 64-bit address port. A novel shared register module, a redundant Booth-3 algorithm, and a leapfrog Wallace tree architecture are introduced to the microprocessor, and both its performance and power consumption are improved considerably. FPGA simulation results indicate that the power consumption is decreased by 14% and the longest data path is shortened by 25%.
A minimization approach for two level logic synthesis using constrained depth... - IAEME Publication
This document summarizes a research paper that proposes a new approach for two-level logic synthesis called constrained depth-first search. It generates all prime implicants of a Boolean function efficiently using cube absorption and a position cube notation representation. It then formulates the problem of finding an optimal sum-of-products cover as a combinatorial optimization problem. The key contribution is a constrained depth-first search of a virtual prime implicant decision tree that finds the minimum cover. Numerical results on benchmark functions show it produces results as good or better than state-of-the-art logic synthesis tools, often finding the greedy solution is already optimal or near-optimal.
This document summarizes a research paper that proposes a new framework called FinnMun for emulating spreadsheets. The paper introduces FinnMun and describes its implementation. It then discusses the experimental setup and results from evaluating FinnMun on various hardware configurations. The evaluation analyzes trends in metrics like throughput, response time, and hit ratio. The paper finds that FinnMun can successfully emulate spreadsheets and improve system performance. It concludes that FinnMun helps advance research on producer-consumer problems and complex systems.
The performance of speaker identification systems decreases significantly under noisy conditions, especially when there is a mismatch between the recognition and learning sessions. To improve robustness, in a previous study we proposed auditory features and a robust speaker recognition system using a front-end based on the combination of MFCC and RASTA-PLP methods. In this paper, we further study auditory features by exploring the combination of GFCC and RASTA-PLP, and find that this method performs substantially better than all previously studied methods. Furthermore, our current identification system achieves a significant performance improvement of 5.92% over a wide range of signal-to-noise conditions compared with the previous front-end based on MFCC combined with RASTA-PLP. Experimental results show an average accuracy improvement of 10.11% for GFCC combined with RASTA-PLP over the baseline MFCC technique across various SNRs. This fusion approach allows an appreciable enhancement under the noisier conditions.
DUAL FIELD DUAL CORE SECURE CRYPTOPROCESSOR ON FPGA PLATFORM - VLSICS Design
This paper is devoted to the design of a dual-core crypto processor that executes both prime field and binary field instructions. The proposed design is specifically optimized for the field-programmable gate array (FPGA) platform. The combined execution of instructions from the two different fields (prime field GF(p) and binary field GF(2^m)) is analysed. The design is implemented on Spartan-3E and Virtex-5, and the performance results of both are compared. The implementation results show parallel execution using dual-field instructions.
An optimal general type-2 fuzzy controller for Urban Traffic Network - ISA Interchange
This document presents an optimal general type-2 fuzzy controller (OGT2FC) for controlling traffic signal scheduling and phase succession to minimize wait times and average queue length. The OGT2FC uses a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) to optimize the membership function parameters. Simulation results show the OGT2FC performs better than conventional type-1 fuzzy controllers in regulating urban traffic flow.
An Application of Pattern matching for Motif Identification - CSCJournals
Pattern matching is one of the central and most widely studied problems in theoretical computer science. Solutions to the problem play an important role in many areas of science and information processing, and their performance has great impact on many applications, including database querying, text processing, and DNA sequence analysis. In general, pattern matching algorithms are based on the shift value, the direction of the sliding window, and the order in which comparisons are made. Their performance can be greatly enhanced by a larger shift value and fewer comparisons to obtain that shift. In this paper we propose an algorithm for finding motifs in DNA sequences. The algorithm preprocesses the pattern string (motif) by considering the four consecutive nucleotides of the DNA that immediately follow the aligned pattern window in the event of a mismatch between the pattern (motif) and the DNA sequence. Theoretically, we find the proposed algorithm works efficiently for motif identification.
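The shift idea described in this abstract is in the Quick Search (Sunday) family: on a mismatch, the characters immediately following the window determine the shift. A simplified single-character sketch is below (the paper inspects four following nucleotides; the function name and example are illustrative):

```python
def sunday_search(text, pattern):
    """Quick Search (Sunday): on a mismatch, look at the character just
    past the window and shift so the pattern's rightmost occurrence of
    that character aligns with it, or jump past it entirely."""
    m, n = len(pattern), len(text)
    # rightmost occurrence wins because later entries overwrite earlier ones
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return hits

sunday_search("ACGTACGTACG", "ACGT")  # → [0, 4]
```

Using four following nucleotides instead of one, as the paper proposes, makes larger shifts possible on the four-letter DNA alphabet, where single-character shift tables are quickly saturated.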
PERFORMANCE EVALUATIONS OF GRIORYAN FFT AND COOLEY-TUKEY FFT ONTO XILINX VIRT... - cscpconf
A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation. Fourier frequency analysis is widely used in equalization of audio recordings, X-ray crystallography, artefact removal in neurological signal and image processing, voice activity detection in brainstem speech evoked potentials, and speech processing, where spectrograms are used to identify phonetic sounds. The Discrete Fourier Transform (DFT) is a principal mathematical method for frequency analysis, and different ways of splitting the DFT give rise to various fast algorithms. In this paper, we present implementations of two fast DFT algorithms in order to evaluate their performance: the popular radix-2 Cooley-Tukey fast Fourier transform (FFT) algorithm [1] and the Grigoryan FFT based on splitting by the paired transform [2]. We evaluate the performance of these algorithms by implementing them on the Xilinx Virtex-II Pro [3] and Virtex-5 [4] FPGAs, developing our own FFT processor architectures. Finally, we show that the Grigoryan FFT works faster than the Cooley-Tukey FFT and is consequently useful for higher sampling rates, which remain a challenge in DSP applications.
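The radix-2 Cooley-Tukey splitting evaluated above can be sketched recursively (a textbook software sketch, not the authors' FPGA architecture):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey: split into even/odd-indexed halves,
    recurse, then combine with twiddle factors.
    len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

fft([1, 1, 1, 1])  # → [4, 0, 0, 0] as complex values (all energy at DC)
```

The recursion reduces the N² multiply-adds of a direct DFT to N log N; the paired-transform (Grigoryan) splitting reorganizes the same computation with a different, flatter decomposition.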
Genomic repeats detection using Boyer-Moore algorithm on Apache Spark Streaming - TELKOMNIKA JOURNAL
Genomic repeats, i.e., pattern searching in string processing to find repeated base pairs in a deoxyribonucleic acid (DNA) sequence, requires a long processing time. This research builds a big-data computational model to look for patterns in strings by modifying and implementing the Boyer-Moore algorithm on Apache Spark Streaming for human DNA sequences from the Ensembl site. Moreover, we perform experiments on cloud computing by varying the specifications of the computer clusters over datasets of human DNA sequences. The results show that the proposed computational model on Apache Spark Streaming is faster than standalone computing and parallel computing with multicore. Therefore, the main contribution of this research, a computational model that reduces computational costs, has been achieved.
Instruction level parallelism using PPM branch prediction - IAEME Publication
This document summarizes an approach to instruction level parallelism using prediction by partial matching (PPM) branch prediction. It proposes a hybrid PPM-based branch predictor that uses both local and global branch histories. The two predictors are combined using a neural network. Key aspects of the implementation include:
1. Using local and global history PPM predictors and combining their predictions with a neural network.
2. Enhancements to the basic PPM approach like program counter tagging, efficient history encoding using run-length encoding, tracking pattern bias, and dynamic pattern length selection.
3. Details of the global history PPM predictor, including the use of tables and linked lists to store patterns of different lengths and handle collisions.
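A minimal sketch of the PPM idea applied to branch prediction: predict with the longest history context seen before, backing off to shorter contexts and finally to a static default. The class name, order, and the 'T'/'N' outcome encoding are illustrative assumptions; the paper's neural combination of local and global predictors is not modeled.

```python
from collections import defaultdict

class PPMBranchPredictor:
    """Order-N PPM over a global branch-history string of 'T'/'N'
    outcomes: predict with the longest matching suffix context,
    back off to shorter ones, then update every context."""
    def __init__(self, max_order=4):
        self.max_order = max_order
        self.counts = defaultdict(lambda: {'T': 0, 'N': 0})
        self.history = ''

    def predict(self):
        # Longest-match-first backoff, as in PPM compression.
        for k in range(min(self.max_order, len(self.history)), 0, -1):
            c = self.counts.get(self.history[-k:])
            if c and (c['T'] or c['N']):
                return 'T' if c['T'] >= c['N'] else 'N'
        return 'T'  # static fallback: predict taken

    def update(self, outcome):
        for k in range(1, min(self.max_order, len(self.history)) + 1):
            self.counts[self.history[-k:]][outcome] += 1
        self.history = (self.history + outcome)[-self.max_order:]
```

After a short warm-up on a periodic branch (e.g. alternating taken/not-taken), the longest context becomes a perfect predictor, which is the property the hybrid predictor exploits.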
Limited Data Speaker Verification: Fusion of FeaturesIJECEIAES
The present work demonstrates an experimental evaluation of speaker verification for different speech feature extraction techniques under the constraint of limited data (less than 15 seconds). State-of-the-art speaker verification techniques provide good performance given sufficient data (more than 1 minute); it is a challenging task to develop techniques that perform well for speaker verification under the limited-data condition. In this work, different features such as Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Cepstral Coefficients (LPCC), Delta (Δ), Delta-Delta (ΔΔ), Linear Prediction Residual (LPR), and Linear Prediction Residual Phase (LPRP) are considered. The performance of the individual features is studied, and for better verification performance, combinations of these features are attempted. A comparative study is made between the Gaussian mixture model (GMM) and the GMM-universal background model (GMM-UBM) through experimental evaluation. The experiments are conducted on the NIST-2003 database. The experimental results show that the combination of features provides better performance than the individual features. Further, GMM-UBM modeling gives a reduced equal error rate (EER) compared to GMM.
Performance Efficient DNA Sequence DetectionalgoRahul Shirude
The document proposes a parallel string matching algorithm using CUDA to perform DNA sequence detection on a GPU. It aims to optimize GPU core utilization and achieve high performance gains over sequential approaches. The proposed algorithm finds correct pattern matches in DNA database sequences. Experimental results show the parallel GPU approach has much higher performance than sequential CPU methods.
The document proposes reducing the computational complexity of message passing algorithms like belief propagation (BP) and concave-convex procedure (CCCP) for multiuser detection in CDMA systems. It does this by changing the factor graph structure used to represent the detection problem from a fully connected graph (Factor Graph I) to a sparsely connected graph (Factor Graph II). Simulation results show the proposed CCCP detector for the new factor graph achieves near optimal performance with lower complexity than existing approaches.
Coding and Indexing Shape Feature using Golomb-Rice Coding for CBIR ApplicationsIJERDJOURNAL
ABSTRACT:- The Content-Based Image Retrieval application uses large amount of low-level features of images for retrieving relevant images. Numerous low-level features are used for retrieval and among them the shape based feature is important and can be represented in the form of histogram. It is well known that the low-level feature requires more storage space and thus entire content has to be coded, indexed etc. to reduce the storage requirement, feature processing time and accessing time. In this work, indexing and encoding mechanism is proposed to effectively access the feature and represent the feature in lesser space. The bin values are used as reference for reducing the dimension of the histogram. The non-zero values are coded using Golomb-Rice coding and represented as compact histogram. The size of the compact histogram is variable in size and thus a similarity measure is proposed to handle the issue. Various well-known benchmark datasets are used for experiments to evaluate the performance. It is found that the truncated and encoded histogram performs well and achieves high precision of retrieval.
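The Golomb-Rice coding of the non-zero bin values can be sketched as follows, assuming the Rice special case (divisor M = 2**k, k >= 1); function names are illustrative, not the paper's:

```python
def rice_encode(values, k):
    """Rice code (Golomb code with M = 2**k, k >= 1): each non-negative
    value is split into a quotient written in unary (q ones, then a
    zero) and a k-bit binary remainder."""
    out = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        out.append('1' * q + '0' + format(r, 'b').zfill(k))
    return ''.join(out)

def rice_decode(bits, k, count):
    """Invert rice_encode: read the unary quotient, then k remainder bits."""
    vals, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == '1':
            q += 1
            i += 1
        i += 1                      # consume the terminating '0'
        vals.append((q << k) | int(bits[i:i + k], 2))
        i += k
    return vals
```

Small histogram bin values cost only k+1 bits each, which is why the encoded histogram is compact when most bins are small.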
NEURAL SYMBOLIC ARABIC PARAPHRASING WITH AUTOMATIC EVALUATIONcscpconf
We present symbolic and neural approaches for Arabic paraphrasing that yield high paraphrasing accuracy. This is the first work on sentence-level paraphrase generation for Arabic and the first to use neural models to generate paraphrased sentences for Arabic. We present and compare several methods for paraphrasing and for obtaining monolingual parallel data. We share a large-coverage phrase dictionary for Arabic and contribute a large parallel monolingual corpus that can be used to develop new seq-to-seq models for paraphrasing; this is the first large monolingual corpus of its kind for Arabic. We also present the first results in Arabic paraphrasing using seq-to-seq neural methods. Additionally, we propose a novel automatic evaluation metric for paraphrasing that correlates highly with human judgement.
An Offline Hybrid IGP/MPLS Traffic Engineering Approach under LSP ConstraintsEM Legacy
This document proposes a novel hybrid IGP/MPLS traffic engineering method based on genetic algorithms to handle long or medium-term traffic variations. The method treats the maximum number of hops an LSP may take and the number of LSPs applied solely to improve routing as constraints. Results comparing this hybrid approach to pure IGP routing and full mesh MPLS with and without flow splitting on the German scientific network and other networks are presented.
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio...RSIS International
In this paper, we have designed the VLSI hardware for a novel RS decoding algorithm suitable for multi-Gb/s communication systems. We show that the performance benefit of the algorithm is truly witnessed when it is implemented in hardware, avoiding the extra processing time of the fetch-decode-execute cycle of traditional microprocessor-based computing systems. The new algorithm, with lower time complexity combined with its application-specific hardware implementation, is suitable for high-speed real-time systems with hard timing constraints. The design is implemented as digital hardware using VHDL.
This document presents a new model called EQUIRS (Explicitly Query Understanding Information Retrieval System), based on Hidden Markov Models (HMM), to improve natural language processing for text-query information retrieval. The proposed EQUIRS system is compared to previous fuzzy clustering methods. Experimental results on a dataset of 900 files across 5 categories show that EQUIRS has higher accuracy than fuzzy clustering, as measured by precision, recall, and F-measure, though it has longer training and searching times. The document concludes that EQUIRS is an effective HMM-based approach for information retrieval.
FPGA Implementation of Viterbi Decoder using Hybrid Trace Back and Register E...ijsrd.com
Error correction is an integral part of any communication system and for this purpose, the convolution codes are widely used as forward error correction codes. For decoding of convolution codes, at the receiver end Viterbi Decoder is being employed. The parameters of Viterbi algorithm can be changed to suit a specific application. The high speed and small area are two important design parameters in today’s wireless technology. In this paper, a high speed feed forward viterbi decoder has been designed using hybrid track back and register exchange architecture and embedded BRAM of target FPGA. The proposed viterbi decoder has been designed with Matlab, simulated with Xilinx ISE 8.1i Tool, synthesized with Xilinx Synthesis Tool (XST), and implemented on Xilinx Virtex4 based xc4vlx15 FPGA device. The results show that the proposed design can operate at an estimated frequency of 107.7 MHz by consuming considerably less resources on target device to provide cost effective solution for wireless applications.
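For background on what the decoder computes, a hard-decision Viterbi decoder for a toy rate-1/2, K=3 convolutional code (generators 7 and 5 octal) can be sketched in Python; the paper's hybrid trace-back/register-exchange hardware and BRAM mapping are not modeled here.

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2, constraint-length-3 convolutional encoder."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state            # newest bit in the high position
        out.extend([bin(reg & g).count('1') & 1 for g in gens])
        state = reg >> 1
    return out

def viterbi_decode(received, nbits, gens=(0b111, 0b101)):
    """Track the minimum-Hamming-distance path into each of the 4 states,
    then return the input sequence of the best survivor path."""
    INF = float('inf')
    metrics = [0] + [INF] * 3             # encoder starts in state 0
    paths = [[] for _ in range(4)]
    for t in range(nbits):
        r = received[2 * t:2 * t + 2]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g).count('1') & 1 for g in gens]
                ns = reg >> 1
                m = metrics[s] + sum(x != y for x, y in zip(exp, r))
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best]
```

The survivor lists here play the role that the trace-back memory or register-exchange network plays in hardware; the hybrid architecture in the paper is a hardware trade-off between those two storage schemes.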
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Research of 64-bits RISC Dual-core Microprocessor with High Performance and L...TELKOMNIKA JOURNAL
A 64-bit RISC dual-core microprocessor with high performance and low power
consumption is presented in this paper. The processor has a symmetric
architecture with two cores, each with a three-stage pipeline, a 64-bit
data path, and a 64-bit address port. A novel shared register module, a
redundant Booth-3 algorithm, and a leapfrog Wallace-tree architecture are
introduced into the microprocessor, improving both its performance and its
power consumption considerably. As the FPGA simulation results indicate,
power consumption is decreased by 14% and the longest data path is
shortened by 25%.
A minimization approach for two level logic synthesis using constrained depth...IAEME Publication
This document summarizes a research paper that proposes a new approach for two-level logic synthesis called constrained depth-first search. It generates all prime implicants of a Boolean function efficiently using cube absorption and a positional cube notation representation. It then formulates the problem of finding an optimal sum-of-products cover as a combinatorial optimization problem. The key contribution is a constrained depth-first search of a virtual prime implicant decision tree that finds the minimum cover. Numerical results on benchmark functions show it produces results as good as or better than state-of-the-art logic synthesis tools, often finding that the greedy solution is already optimal or near-optimal.
This document summarizes a research paper that proposes a new framework called FinnMun for emulating spreadsheets. The paper introduces FinnMun and describes its implementation. It then discusses the experimental setup and results from evaluating FinnMun on various hardware configurations. The evaluation analyzes trends in metrics like throughput, response time, and hit ratio. The paper finds that FinnMun can successfully emulate spreadsheets and improve system performance. It concludes that FinnMun helps advance research on producer-consumer problems and complex systems.
The performance of speaker identification systems
decreases significantly under noisy conditions, especially
when there is a mismatch between the recognition and the
learning sessions. To improve robustness, we proposed in a
previous study auditory features and a robust speaker
recognition system using a front-end based on the combination
of the MFCC and RASTA-PLP methods. In this paper, we further
study auditory features by exploring the combination of
GFCC and RASTA-PLP. We find that this method performs
substantially better than all previously studied methods.
Furthermore, our current identification system achieves a
significant performance improvement of 5.92% over a wide range
of signal-to-noise conditions compared with the previously
studied front-end based on MFCC combined with RASTA-PLP.
Experimental results show an average accuracy improvement of
10.11% for GFCC combined with RASTA-PLP over the baseline
MFCC technique across various SNRs. This fusion approach
yields an appreciable enhancement under the noisier conditions.
DUAL FIELD DUAL CORE SECURE CRYPTOPROCESSOR ON FPGA PLATFORMVLSICS Design
This paper is devoted to the design of a dual-core crypto processor executing both prime-field and binary-field instructions. The proposed design is specifically optimized for the field-programmable gate array (FPGA) platform. The combined execution of instructions from two different fields (the prime field GF(p) and the binary field GF(2^m)) is analysed. The design is implemented on Spartan-3E and Virtex-5, and the performance results of both are compared. The implementation results show parallel execution using dual-field instructions.
An optimal general type-2 fuzzy controller for Urban Traffic NetworkISA Interchange
This document presents an optimal general type-2 fuzzy controller (OGT2FC) for controlling traffic signal scheduling and phase succession to minimize wait times and average queue length. The OGT2FC uses a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) to optimize the membership function parameters. Simulation results show the OGT2FC performs better than conventional type-1 fuzzy controllers in regulating urban traffic flow.
An Application of Pattern matching for Motif IdentificationCSCJournals
Pattern matching is one of the central and most widely studied problems in theoretical computer science. Solutions to the problem play an important role in many areas of science and information processing, and their performance has a great impact on many applications, including database querying, text processing, and DNA sequence analysis. In general, pattern matching algorithms are characterized by the shift value, the direction of the sliding window, and the order in which comparisons are made. Performance can be enhanced to a great extent by a larger shift value and fewer comparisons to obtain that shift. In this paper we propose an algorithm for finding motifs in DNA sequences. The algorithm is based on preprocessing the pattern string (motif) and, in the event of a mismatch between the pattern (motif) and the DNA sequence, considering the four consecutive nucleotides of the DNA that immediately follow the aligned pattern window. Theoretically, we find that the proposed algorithm works efficiently for motif identification.
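The shift idea, looking past the aligned window on a mismatch, is in the spirit of Sunday's quick-search algorithm. A one-character sketch follows (the paper uses the next four nucleotides; the function name and single-character simplification are this sketch's assumptions):

```python
def motif_search(dna, motif):
    """Sunday-style search: on each attempt, look at the character just
    past the current window and shift so that its rightmost occurrence
    in the motif lines up (a full window plus one if it never occurs)."""
    m, n = len(motif), len(dna)
    last = {c: i for i, c in enumerate(motif)}   # rightmost occurrence
    hits, pos = [], 0
    while pos <= n - m:
        if dna[pos:pos + m] == motif:
            hits.append(pos)
        nxt = dna[pos + m] if pos + m < n else None
        pos += m - last[nxt] if nxt in last else m + 1
    return hits
```

Because the shift depends only on characters outside the window, the maximum shift is m + 1, one more than Boyer-Moore's bad-character rule can give.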
PERFORMANCE EVALUATIONS OF GRIORYAN FFT AND COOLEY-TUKEY FFT ONTO XILINX VIRT...cscpconf
A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation. Fourier frequency analysis is widely used in equalization of audio recordings, X-ray crystallography, artefact removal in neurological signal and image processing, and voice activity detection in brainstem speech-evoked potentials; in speech processing, spectrograms are used to identify phonetic sounds. The Discrete Fourier Transform (DFT) is a principal mathematical method for frequency analysis, and different ways of splitting the DFT yield various fast algorithms. In this paper, we present the implementation of two fast DFT algorithms in order to evaluate their performance: the popular radix-2 Cooley-Tukey fast Fourier transform (FFT) [1] and the Grigoryan FFT based on splitting by the paired transform [2]. We evaluate the performance of these algorithms by implementing them on the Xilinx Virtex-II Pro [3] and Virtex-5 [4] FPGAs, developing our own FFT processor architectures. Finally, we show that the Grigoryan FFT runs faster than the Cooley-Tukey FFT and is consequently useful at higher sampling rates, which remain a challenge in DSP applications.
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONSijscmcj
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) computation for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n^3), makes reducing a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments: the first associated with the CPU and MATLAB R2011a, the second with graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to achieve a resulting matrix of large size. For both environments, computations were performed on double- and single-precision data.
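The LSI reduction step itself can be sketched with NumPy (the function name is illustrative, and the CULA/GPU path is not shown): keep the k largest singular triplets and compare documents in the resulting concept space.

```python
import numpy as np

def lsi_reduce(term_doc, k):
    """Rank-k LSI reduction of a term-by-document matrix via SVD:
    A is approximated by U_k S_k V_k^T, and documents are compared in
    the k-dimensional concept space S_k V_k^T."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    doc_vecs = s[:k, None] * Vt[:k]               # k x n_docs concept space
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # rank-k reconstruction
    return doc_vecs, approx
```

The O(n^3) cost the article cites is in the `svd` call; on the GPU that decomposition is what CULA accelerates, while the truncation afterwards is cheap.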
A Novel Design For Generating Dynamic Length Message Digest To Ensure Integri...IRJET Journal
This document proposes a novel algorithm for generating dynamic length message digests to ensure data integrity. It summarizes existing hash algorithms like MD5 and SHA-1 that generate fixed-length digests but have been proven breakable or inefficient. The proposed algorithm generates variable-length digests using a block cipher approach on message chunks and pseudo-random numbers, making it more robust against attacks. Implementation results show the proposed algorithm has better timing performance and avalanche effect than MD5, SHA-1 and a modified MD5 algorithm, demonstrating its efficiency and security for ensuring integrity.
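The avalanche effect mentioned above can be measured for any hashlib digest by flipping one input bit at a time and counting the digest bits that change; this sketch is a measurement harness only, not the proposed variable-length algorithm.

```python
import hashlib

def avalanche(data, algo='md5'):
    """Average fraction of digest bits that flip when a single input bit
    is flipped; a strong hash stays near 0.5 for every input bit."""
    base = int(hashlib.new(algo, data).hexdigest(), 16)
    nbits = hashlib.new(algo).digest_size * 8
    flips = []
    for i in range(len(data) * 8):
        mutated = bytearray(data)
        mutated[i // 8] ^= 1 << (i % 8)           # flip one input bit
        h = int(hashlib.new(algo, bytes(mutated)).hexdigest(), 16)
        flips.append(bin(base ^ h).count('1') / nbits)
    return sum(flips) / len(flips)
```

A ratio well below 0.5 would indicate the structural weakness the paper's comparison against MD5 and SHA-1 is probing.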
Comprehensive Performance Evaluation on Multiplication of Matrices using MPIijtsrd
Matrix multiplication is a concept used in technology applications such as digital image processing, digital signal processing, and graph problem solving. Multiplication of huge matrices requires a lot of computing time, as its complexity is O(n^3). Because most engineering and science applications require higher computational throughput with minimum time, many sequential and parallel algorithms have been developed. In this paper, methods of matrix multiplication are selected, implemented, and analyzed. A performance analysis is evaluated, and some recommendations are given for using the OpenMP and MPI methods of parallel computing. Adamu Abubakar I | Oyku A | Mehmet K | Amina M. Tako "Comprehensive Performance Evaluation on Multiplication of Matrices using MPI"
Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-2 , February 2020,
URL: https://www.ijtsrd.com/papers/ijtsrd30015.pdf
Paper Url : https://www.ijtsrd.com/engineering/electrical-engineering/30015/comprehensive-performance-evaluation-on-multiplication-of-matrices-using-mpi/adamu-abubakar-i
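The O(n^3) kernel and the row partition an MPI run would use can be sketched in pure Python (no mpi4py here; the partition helper only illustrates the decomposition, the scatter/broadcast/gather steps are described in comments):

```python
def matmul(A, B):
    """Naive O(n^3) triple-loop product of two n x n lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            for j in range(n):
                C[i][j] += aik * B[k][j]
    return C

def row_blocks(n, nprocs):
    """Row partition for a rank-per-block MPI scheme: rank r computes the
    rows in its block. A real run would scatter A's rows, broadcast B,
    and gather the C blocks; only the decomposition is shown here."""
    base, extra = divmod(n, nprocs)
    blocks, start = [], 0
    for r in range(nprocs):
        size = base + (1 if r < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks
```

Each rank's work is proportional to its row count, so the near-equal blocks keep the load balanced, which is the main lever behind the speedups the paper measures.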
Joint Timing and Frequency Synchronization in OFDMidescitation
1) The document discusses synchronization challenges in OFDM systems, including packet detection, timing synchronization, and frequency offset calculation. It analyzes various synchronization techniques like auto-correlation difference, auto-correlation sum, and cross-correlation methods for packet detection and timing synchronization.
2) For frequency offset calculation, it describes data-aided and non-data-aided algorithms, including the van de Beek algorithm, which relies on cyclic prefix redundancy rather than pilot symbols.
3) The key synchronization challenges are maintaining orthogonality between subcarriers in the presence of frequency offsets introduced by the wireless channel, which can significantly degrade performance if not corrected. Accurate synchronization algorithms are important for OFDM receivers to function properly.
ANALOG MODELING OF RECURSIVE ESTIMATOR DESIGN WITH FILTER DESIGN MODELVLSICS Design
This document summarizes a research paper on implementing a low power design methodology for recursive encoders and decoders. It discusses how recursive coding can achieve better error correction performance at low signal-to-noise ratios compared to other codes. It then describes the design of a recursive decoder that uses the log-MAP algorithm to minimize power consumption. The decoder uses five main computational steps - branch metric calculation, forward metric computation, backward metric computation, log-likelihood ratio calculation, and extrinsic information calculation. It also compares the implementation of four-state and eight-state recursive encoders. The goal of the design is to optimize the power and area of recursive encoders and decoders.
In this paper, we apply grammar-based pre-processing prior to using the Prediction by Partial Matching
(PPM) compression algorithm. This achieves significantly better compression for different natural
language texts compared to other well-known compression methods. Our method first generates a grammar
based on the most common two-character sequences (bigraphs) or three-character sequences (trigraphs) in
the text being compressed and then substitutes these sequences using the respective non-terminal symbols
defined by the grammar in a pre-processing phase prior to the compression. This leads to significantly
improved results in compression for various natural languages (a 5% improvement for American English,
10% for British English, 29% for Welsh, 10% for Arabic, 3% for Persian and 35% for Chinese). We
describe further improvements using a two pass scheme where the grammar-based pre-processing is
applied again in a second pass through the text. We then apply the algorithms to the files in the Calgary
Corpus and also achieve significantly improved results in compression, between 11% and 20%, when
compared with other compression algorithms, including a grammar-based approach, the Sequitur
algorithm.
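The pre-processing phase described above can be sketched as follows; the choice of private-use Unicode code points as non-terminal symbols and the rule count are this sketch's assumptions, and only the bigraph case is shown.

```python
from collections import Counter

def bigraph_preprocess(text, n_rules=10, first_symbol=0xE000):
    """Repeatedly find the most frequent two-character sequence and
    replace it with a fresh non-terminal symbol, recording the grammar
    so the substitution is reversible before PPM compression."""
    grammar = {}
    for i in range(n_rules):
        pairs = Counter(text[j:j + 2] for j in range(len(text) - 1))
        if not pairs:
            break
        bigraph, count = pairs.most_common(1)[0]
        if count < 2:
            break                      # no repeated pair left to exploit
        sym = chr(first_symbol + i)    # private-use code point as non-terminal
        grammar[sym] = bigraph
        text = text.replace(bigraph, sym)
    return text, grammar

def bigraph_restore(text, grammar):
    """Expand non-terminals in reverse rule order to recover the text."""
    for sym, bigraph in reversed(list(grammar.items())):
        text = text.replace(sym, bigraph)
    return text

```

Shortening the text this way also shortens the contexts PPM must learn, which is one intuition for why the substitution helps the downstream compressor.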
The evaluation performance of letter-based technique on text steganography sy...journalBEEI
Steganography is a branch of information hiding that conceals a hidden message in a medium such as text, image, audio, or video. This paper concerns the implementation of steganography in the text domain, called text steganography, and concentrates on the letter-based technique as one of the representative techniques in text steganography. It describes several letter-based techniques integrated into one system, presented in a logical and physical design. The integrated system is evaluated using parameters chosen to measure performance in terms of capacity after the embedding process and the time consumed in the development process. This paper is intended to contribute a description of the implementation of these techniques in one system and a report of their performance on the evaluated parameters.
Problems in Task Scheduling in Multiprocessor Systemijtsrd
Contemporary computer systems are multiprocessor or multicomputer machines, and their efficiency depends on good methods of administering the executed work. Fast processing of a parallel application is possible only when its parts are appropriately ordered in time and space, which calls for efficient scheduling policies in parallel computer systems. In this work, deterministic scheduling problems are considered. The classical scheduling theory assumed that an application is executed by only one processor at any moment in time; this assumption has been weakened recently, especially in the context of parallel and distributed computer systems. This monograph is devoted to problems of deterministically scheduling applications (or tasks, in scheduling terminology) that require more than one processor simultaneously; we name such applications multiprocessor tasks. In this work, the complexity of open multiprocessor task scheduling problems has been established, and algorithms for scheduling multiprocessor tasks on parallel and dedicated processors are proposed. For the special case of applications with a regular structure, which allow dividing the work into parts of arbitrary size processed independently in parallel, a method of finding the optimal scattering of work in a distributed computer system is proposed. Applications with such regular characteristics are called divisible tasks. The concept of a divisible task enables the creation of tractable computation models in a wide class of computer architectures such as chains, stars, meshes, hypercubes, and multistage networks. The divisible task method also gives rise to the evaluation of computer system performance, and examples of such performance evaluation are presented. This work summarizes earlier works of the author and contains new original results.
Mukul Varshney | Jyotsna | Abhakiran Rajpoot | Shivani Garg "Problems in Task Scheduling in Multiprocessor System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1 | Issue-4, June 2017,
URL: http://www.ijtsrd.com/papers/ijtsrd2198.pdf
Paper URL: http://www.ijtsrd.com/computer-science/computer-architecture/2198/problems-in-task-scheduling-in-multiprocessor-system/mukul-varshney
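For the divisible-task case, the optimal split over processors that should all finish simultaneously is proportional to processor speed when communication costs are ignored; a minimal sketch under that assumption (the function name is illustrative):

```python
def divisible_split(total_work, speeds):
    """Split a perfectly divisible workload over processors with the
    given speeds so that all finish at the same time: each share is
    proportional to its processor's speed (communication ignored)."""
    s = sum(speeds)
    shares = [total_work * v / s for v in speeds]
    finish = [w / v for w, v in zip(shares, speeds)]   # all equal
    return shares, finish
```

The equal finish times are the optimality condition: if any processor finished earlier than another, moving work to it would reduce the makespan.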
1. Data structures organize data in memory for efficient access and processing. They represent relationships between data values through placement and linking of the values.
2. Algorithms are finite sets of instructions that take inputs, produce outputs, and terminate after a finite number of unambiguous steps. Common data structures and algorithms are analyzed based on their time and space complexity.
3. Data structures can be linear, with sequential elements, or non-linear, with branching elements. Abstract data types define operations on values independently of implementation through inheritance and polymorphism.
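A compact illustration of the linear versus non-linear distinction from points 1 and 3 (all names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:                        # non-linear: elements branch via links
    value: int
    left: 'Optional[Node]' = None
    right: 'Optional[Node]' = None

def inorder(node):
    """Inorder traversal: left subtree, node, right subtree.
    O(n) time, O(h) call-stack space for a tree of height h."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

linear = [3, 1, 4]                 # linear: sequential, O(1) indexed access
tree = Node(2, Node(1), Node(3))   # non-linear: branching placement
```

The placement and linking of values is exactly what point 1 describes: the list encodes order by position, the tree encodes order by its left/right links.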
AN ADAPTIVE PSEUDORANDOM STEGO-CRYPTO TECHNIQUE FOR DATA COMMUNICATIONIJCNCJournal
The document describes a proposed adaptive pseudorandom stego-crypto technique for data communication. The technique combines stream cipher cryptography with a modified pseudorandom LSB substitution technique. This provides an evenly distributed cipher text while also enhancing security through increased brute force search times and reduced time complexity by avoiding collisions during random pixel selection. The proposed method uses three parameters that are optimized through experimental analysis to minimize distortions, increase cipher text scattering, and reduce collisions and time complexity. Results demonstrate the technique maintains good perceptual quality while improving upon previous methods.
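The LSB substitution step can be sketched as below; `positions` stands in for the paper's optimized pseudorandom, collision-free pixel selection (here any duplicate-free index list), and the stream-cipher stage is assumed to have already produced `cipher_bits`.

```python
def lsb_embed(cover, cipher_bits, positions):
    """Substitute the least-significant bit of selected cover bytes with
    cipher bits; each byte changes by at most 1, preserving quality."""
    stego = bytearray(cover)
    for bit, p in zip(cipher_bits, positions):
        stego[p] = (stego[p] & 0xFE) | bit
    return bytes(stego)

def lsb_extract(stego, positions, count):
    """Read the embedded bits back from the same positions, in order."""
    return [stego[p] & 1 for p in positions[:count]]
```

Avoiding repeated positions is what the paper's collision handling buys: a collision would overwrite an earlier cipher bit and corrupt the extracted message.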
Combining text and pattern preprocessing in an adaptive DNA pattern matcher | IAEME Publication
This paper presents an adaptive DNA pattern matching algorithm that combines both text and pattern preprocessing to efficiently detect patterns in DNA sequences. It initializes a pattern preprocessor similar to KMP and builds a suffix tree for the text. It then marks regions in the text and finds the maximum overlap between each region and the pattern. The algorithm searches the region with highest overlap first before moving to other regions. Experiments show it performs faster than KMP, with running time of O(m) where m is the pattern length.
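The KMP-style pattern preprocessing the matcher builds on can be sketched like this (standard textbook KMP, not the paper's full region-based algorithm with suffix trees):

```python
def kmp_failure(pattern):
    """Prefix (failure) table: fail[i] is the length of the longest
    proper prefix of pattern[:i+1] that is also a suffix of it."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return all start indices of pattern in text in O(n + m) time."""
    fail, k, hits = kmp_failure(pattern), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]          # fall back instead of re-scanning text
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("ACGTACGATACGT", "ACGA"))  # [4]
```

The paper's contribution is to combine such pattern preprocessing with preprocessing of the DNA text itself, searching the highest-overlap region first.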
Compressive Sensing in Speech from LPC using Gradient Projection for Sparse R... | IJERA Editor
This paper presents a compressive sensing technique for speech reconstruction using linear predictive coding (LPC), since speech is sparser in the LPC domain. The DCT of a speech signal is taken and DCT points of the sparse speech are discarded arbitrarily, by zeroing selected points in the DCT domain through multiplication with mask functions. From the incomplete points in the DCT domain, the original speech is reconstructed using compressive sensing, with Gradient Projection for Sparse Reconstruction as the tool. The result is compared subjectively with direct IDCT, and the experiments show that the compressive sensing reconstruction performs better than the direct IDCT.
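The DCT-masking step described above can be sketched in a few lines. This toy version implements a plain orthogonal DCT-II/IDCT pair and the naive direct IDCT of the incomplete coefficients; the GPSR reconstruction itself is not reproduced here.

```python
import math

def dct(x):
    """Unnormalized DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N)) for k in range(N)]

def idct(X):
    """Exact inverse of the DCT-II above (scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(
                X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                for k in range(1, N))
            for n in range(N)]

# Toy "speech" frame; real frames are far longer and sparser after LPC.
x = [math.sin(2 * math.pi * k / 8) for k in range(8)]
X = dct(x)
# Mask: zero out (discard) half of the DCT points, mimicking random loss.
mask = [1, 1, 0, 1, 0, 1, 0, 1]
X_masked = [c * m for c, m in zip(X, mask)]
x_naive = idct(X_masked)       # direct IDCT of the incomplete coefficients
err = sum((a - b) ** 2 for a, b in zip(x, x_naive))
print(round(err, 4))           # nonzero: direct IDCT cannot recover the loss
```

A sparse-recovery solver such as GPSR would instead search for the sparsest signal consistent with the surviving coefficients, which is why it outperforms the direct IDCT in the paper's experiments.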
This document summarizes a paper that presents two parallel algorithms for batched range searching on a mesh-connected SIMD computer. The first algorithm is average-case efficient and based on cell division. The second algorithm is worst-case efficient and uses a divide-and-conquer approach. Both algorithms were implemented and experimentally evaluated on a MasPar MP-1 computer. The paper also provides background on mesh-connected SIMD computers and describes common operations like sorting that are used in the algorithms.
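A sequential sketch of batched range searching (the mesh-connected SIMD parallelization evaluated in the paper is not reproduced): pre-sort the points once, then answer each query by binary search.

```python
import bisect

def batched_range_search(points, queries):
    """Answer a batch of 1-D [lo, hi] range queries: sort once in
    O(n log n), then locate each query's boundaries by binary search
    in O(log n) per query."""
    pts = sorted(points)
    return [pts[bisect.bisect_left(pts, lo):bisect.bisect_right(pts, hi)]
            for lo, hi in queries]

hits = batched_range_search([7, 2, 9, 4, 4, 11], [(3, 9), (10, 20)])
print(hits)  # [[4, 4, 7, 9], [11]]
```

On a mesh of processors the same work is distributed, e.g. by cell division (average-case) or divide-and-conquer (worst-case), as the two algorithms in the paper do.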
FPGA based efficient multiplier for image processing applications using recur... | VLSICS Design
Digital image processing applications such as medical imaging, satellite imaging, and biometric trait images rely on multipliers to improve image quality. However, existing multiplication techniques introduce errors in the output and consume more time, so error-free high-speed multipliers have to be designed. In this paper we propose an FPGA-based Recursive Error-Free Mitchell Log Multiplier (REFMLM) for image filters. A 2x2 error-free Mitchell log multiplier is designed with zero error by introducing an error-correction term, and this block is used in higher-order Karatsuba-Ofman Multiplier (KOM) architectures. The higher-order KOM multipliers are decomposed radix-2 into lower-order multipliers down to the basic 2x2 block, which is implemented by the error-free Mitchell log multiplier. The 8x8 REFMLM is tested in a Gaussian filter that removes noise from a fingerprint image. The multiplier is synthesized for the Spartan 3 FPGA family device XC3S1500-5fg320. It is observed that performance parameters such as area utilization, speed, error, and PSNR are better for the proposed architecture than for existing architectures.
Emulation of 3GPP SCME channel models using a reverberation chamber measureme... | IJERA Editor
1) A test bed was developed to validate 3GPP SCME channel models using a reverberation chamber. Power delay profiles were measured for urban micro and macro channel models and matched well with theoretical profiles.
2) The reverberation chamber was able to control delay spread by adding absorbing materials, allowing different channel models to be emulated. Measurements showed Rayleigh fading was maintained with losses.
3) Convolution of signals with 3GPP channel model taps allowed emulation of multi-cluster channels. Measurements found emulated profiles matched theoretical profiles specified in 3GPP standards.
- The document discusses compilation analysis and performance analysis of Feel++ scientific applications using Scalasca.
- It presents compilation analysis of Feel++ using examples of mesh manipulation and discusses performance analysis using Feel++'s TIME class or Scalasca instrumentation.
- The document analyzes the Laplacian case study in Feel++ using different compilation options and polynomial dimensions and presents results from performance analysis with Scalasca.
Similar to Parallel random projection using R high performance computing for planted motif search (20)
Amazon products reviews classification based on machine learning, deep learni... | TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for store owners and product manufacturers for the betterment of their work processes as well as product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram and word-embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW), global vectors (GloVe), and Word2vec, respectively. Four machine learning (ML) models, support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB), two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model on D1, with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
Design, simulation, and analysis of microstrip patch antenna for wireless app... | TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna operating at 3.6 GHz was built and tested to evaluate its performance. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, has been used as the substrate material and serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goals of this study were a wider bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Moreover, the simulation revealed a side-lobe level of -28.8 dB, much better than those of earlier works. Thus, the antenna is a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida... | TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi... | TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
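Lagrange interpolation, one building block the MIBC relies on, can be sketched over a prime field as follows. This generic finite-field version omits the paper's non-singular elliptic-curve construction; `lagrange_at_zero` is an illustrative helper name, and the toy polynomials are made up for the example.

```python
def lagrange_at_zero(shares, p):
    """Evaluate the unique degree-(k-1) polynomial through the given
    (x, y) shares at x = 0, over GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % p          # product of (0 - xj)
                den = (den * (xi - xj)) % p    # product of (xi - xj)
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# Polynomial f(x) = 123 + 45x over GF(257); any two shares recover f(0).
p = 257
f = lambda x: (123 + 45 * x) % p
print(lagrange_at_zero([(1, f(1)), (2, f(2))], p))  # 123
```

In membership-authentication schemes of this kind, only parties holding enough valid shares can reconstruct the group secret, which is what keeps unregistered vehicles out of the network.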
Conceptual model of internet banking adoption with perceived risk and trust f... | TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) toward IB. Prior research emphasizes that behavioral intention to use IB is the most essential component in explaining IB adoption behavior, and TAM is helpful for figuring out how the elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, a theoretical foundation had to be chosen that can justify the acceptance of IB from the customer’s perspective; the conceptual model was therefore constructed using the TAM as a foundation, with perceived risk and trust added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM into a conceptual model for IB adoption and to gather sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about the factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antenna | TELKOMNIKA JOURNAL
Smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure for controlling the smart antenna pattern to accommodate specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has drawbacks such as slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust the step size. Computer simulation results illustrate that the given model has a fast convergence rate as well as a low steady-state mean square error.
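The LMS update the paper adapts can be sketched with a single weight. The fuzzy step-size controller is replaced here by a crude hypothetical rule that shrinks the step as the error falls; the antenna-array geometry is omitted entirely.

```python
def lms(desired, x, mu):
    """Plain LMS with one adaptive weight: w <- w + step * e * x.
    A fixed step mu trades convergence speed against steady-state
    fluctuation; adapting the step (as FC-LMS does with fuzzy logic)
    aims to get both."""
    w, errors = 0.0, []
    for d, xi in zip(desired, x):
        e = d - w * xi                  # instantaneous error
        errors.append(e)
        step = mu / (1.0 + abs(e))      # hypothetical stand-in for the
        w += step * e * xi              # fuzzy step-size controller
    return w, errors

# Identify an unknown gain of 0.8 from noise-free samples.
x = [1.0, -0.5, 0.7, 1.2, -0.9, 0.4, 1.1, -0.6]
d = [0.8 * xi for xi in x]
w, errs = lms(d, x, mu=0.5)
print(round(w, 3))
```

After the eight updates the weight has moved most of the way toward 0.8 and the error magnitude has fallen, illustrating the convergence behavior the paper measures.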
Design and implementation of a LoRa-based system for warning of forest fire | TELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low-power, long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node are transmitted to the gateway via LoRa wireless transmission, where they are collected, processed, and uploaded to a cloud database. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system sounds a siren and sends a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio network | TELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameters based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to big data, as many Internet of Things (IoT) devices are in the network. Previous research shows that most of the frequency spectrum is used, but some bands remain unused; these are called spectrum holes. Energy detection is one of the most frequently used spectrum sensing methods, since it is easy to apply and does not require any prior knowledge of the licensed user's signal. However, this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that wavelet-based sensing has higher detection precision, and interference with the primary user can be decreased.
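The energy-detection baseline mentioned above can be sketched as follows (the wavelet-based detector itself is not reproduced; the threshold and signal are made up for the example):

```python
import math, random

def energy_detect(samples, threshold):
    """Classic energy detection: declare the band occupied when the
    average sample energy exceeds a threshold. Needs no knowledge of
    the licensed user's signal, but fails at low SNR."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold, energy

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(1000)]                    # empty band
signal = [n + math.sin(0.3 * k) for k, n in enumerate(noise)]     # occupied band

busy_n, e_n = energy_detect(noise, threshold=1.25)
busy_s, e_s = energy_detect(signal, threshold=1.25)
print(busy_n, busy_s)
```

As the sinusoid's amplitude shrinks relative to the noise (low SNR), the two energies become indistinguishable, which is exactly the regime where the paper's wavelet-based detector is claimed to do better.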
A novel compact dual-band bandstop filter with enhanced rejection bands | TELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application... | TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer by flooding the server with internet traffic. As a result, the server cannot be accessed by legitimate users. Such an attack causes enormous losses for a company, because it reduces user trust and damages the company’s reputation, losing customers due to downtime. One of the services at the application layer that can be accessed by users is the web-based lightweight directory access protocol (LDAP) service, which provides safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer, to get fast and accurate results when dealing with imbalanced data. Based on the results obtained, DDoS attack detection using a deep learning approach on imbalanced data performs better for binary classes when implemented with the synthetic minority oversampling technique (SMOTE). On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass settings when implemented with the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often affects the characteristics of both linear and nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems suffering from mismatched uncertainties and exogenous disturbances. The perturbation of the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and stated in Proposition 1. Two types of mathematical expressions are presented to distinguish a system with matched uncertainty from a system with mismatched uncertainty. Lastly, two-dimensional numerical examples are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to transform the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli... | TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system | TELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations: because the Doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both bit error rate (BER) and throughput performance. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The improved system shows a massive performance gain over the conventional system and significant improvements with massive MIMO, including the best throughput and BER: compared to conventional OFDM, it achieves around 22% higher throughput and around 25% lower error.
Reflector antenna design in different frequencies using frequency selective s... | TELKOMNIKA JOURNAL
In this study, the aim is to obtain, from a single antenna, two different asymmetric radiation patterns at two different frequencies: that of an antenna shaped like the cross-section of a parabolic reflector (a fan-blade type antenna) and that of an antenna with cosecant-squared radiation characteristics. For this purpose, a fan-blade type antenna is designed first, and then its reflective surface is completed to the shape of the reflective surface of the cosecant-squared antenna using a frequency selective surface designed to provide the required characteristics. The designed frequency selective surface provides transmission as close to perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter, and hence a reflective surface, for electromagnetic waves at the 5 GHz operating frequency. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan-blade type radiation characteristic is obtained at 4 GHz, while a cosecant-squared radiation characteristic is obtained at 5 GHz.
Reagentless iron detection in water based on unclad fiber optical sensor | TELKOMNIKA JOURNAL
A simple and low-cost fiber-based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. The results obtained show a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity achieved at the wavelength of 690 nm are 0.0328/ppm and 0.9824, respectively. At the same wavelength, other performance parameters are also studied: the resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm, respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making it simpler and more cost-effective to implement for iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit... | TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea... | TELKOMNIKA JOURNAL
This article discusses progressive learning for the structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM maintains robust accuracy while updating on new data and new-class data in an online training situation. The robust accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) within PSTOS-ELM. This method is suitable for applications with streaming data that often encounter new-class data. Our experiment compares the accuracy and accuracy robustness of PSTOS-ELM during data updates with the batch extreme learning machine (ELM) and the structural tolerance online sequential extreme learning machine (STOS-ELM), both of which must retrain on the data when new-class data appear. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also being able to update on new-class data immediately.
Electroencephalography-based brain-computer interface using neural networks | TELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imaging | TELKOMNIKA JOURNAL
Level set models are frequently employed for image segmentation; they offer the best solution for overcoming the main limitations of deformable parametric models. However, the challenge when applying those models to medical images still lies in removing blur at image edges, which directly affects the edge indicator function, prevents images from being segmented adaptively, and causes a wrong analysis of pathologies, preventing a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving a system of two two-dimensional partial differential equations (PDEs). The first PDE performs restoration using Euler's equation, similar to anisotropic smoothing based on a regularized Perona-Malik filter that eliminates noise while preserving edge information in accordance with the contours detected by the second equation, which segments the image based on the solutions of the first. This approach allows the development of a new algorithm which overcomes the drawbacks of the studied model. The proposed method gives clear segments that can be applied in any application. Experiments on many medical images, in particular blurry images with high information loss, demonstrate that the developed approach produces superior segmentation results in quantity and quality compared to models presented in previous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas... | TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Null Bangalore | Pentester's Approach to AWS IAM | Divyanshu
#Abstract:
- Learn more about real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester, starting with a brief discussion of IAM, then typical misconfigurations and their potential exploits, to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using a hands-on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For the hands-on lab, create an account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
 - Allows a user to pass a specific IAM role to an AWS service (e.g., EC2), typically used for service-access delegation. The PassRole misconfiguration is then exploited, granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that will guide you through the details of the Einstein 1 Platform, the importance of data for building artificial intelligence applications, and the different tools and technologies Salesforce offers to bring you all the benefits of AI.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... | IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...PIMR BHOPAL
Variable frequency drive .A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
TELKOMNIKA, Vol. 17, No. 3, June 2019: 1352-1359. ISSN: 1693-6930
Parallel random projection using R high performance... (Lala Septem Riza)
Nowadays, many algorithms, collected in software libraries/packages, have been
implemented and published in the Comprehensive R Archive Network (CRAN) at
https://cran.r-project.org/. In this repository, one of the R packages used for high
performance computing and big data analysis is pbdMPI [11], which is used in this research.
In the literature, we found some relevant articles discussing implementations of motif
discovery in parallel computing. For example, in Clemente & Adorna's study [12], the random
projection algorithm was developed for GPUs (Graphics Processing Units): each processor is
mapped to threads that work within the device (GPU), while the sequential process is
executed on the host (CPU). TEIRESIAS has been introduced to improve the speed of finding
maximal patterns [13]. An enhancement of the PMSPRUNE algorithm has been proposed with two
additional features: neighbor generation on demand and omission of duplicate neighbor
checking [14]. Furthermore, there are different approaches for dealing with pattern
matching in various fields. For instance, multi-pattern matching methods were introduced
for large-scale multi-pattern matching [15], and the scanning mode of the Square
Non-symmetry and Antipacking Model (SNAM) for binary images was improved by proposing a
new neighbor-finding algorithm [16].
The rest of the paper is organized as follows. First, the global procedure of this research
is presented in section 2. In section 3, the main contribution, which is a modification and
implementation of parallel random projection using the pbdMPI package, is discussed.
To validate and analyze the proposed computational model, we conduct some experiments in
section 4 and some analysis in section 5. Finally, we conclude the research in section 6.
2. Research Method
Figure 1 shows the research design of this study. First, we perform some preparation,
such as identifying the problem, the research objectives, and a literature study; these
activities have been presented in the previous section. Then, we present the main
contribution of this research, which is designing and implementing parallel random projection
with R high performance computing (i.e., the pbdMPI package); this part is explained in the
next section. After that, we conduct some experiments and analyze their results. Drawing
some conclusions is presented at the end.
Figure 1. Research design to conduct parallel random projection
3. Parallel Random Projection with the pbdMPI package
Basically, the computational model proposed in this research can be seen in Figure 2.
First, after reading and converting the input data from the .fasta file, we perform a
modification of random projection by utilizing R high performance computing (i.e., the pbdMPI package),
called parallel random projection with pbdMPI. A detailed explanation of the proposed
approach can be seen in Figure 3. The results of this model are all motifs, their starting
indices, and the computational costs.
Figure 2. The computational model of parallel random projection with pbdMPI
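The random projection step at the heart of this model [8] hashes every l-mer by the characters at k randomly chosen positions, so that l-mers agreeing at those positions fall into the same bucket; crowded buckets become candidate motif groups. A minimal sketch of this bucketing idea (illustrative Python only; the paper's implementation is in R with pbdMPI, and the function name and parameters here are ours):

```python
import random

def project_buckets(sequence, l, k, seed=1):
    """Hash every l-mer of `sequence` by the letters at k random positions;
    l-mers that agree at those positions land in the same bucket."""
    rng = random.Random(seed)               # fixed seed for reproducibility
    positions = sorted(rng.sample(range(l), k))
    buckets = {}
    for start in range(len(sequence) - l + 1):
        lmer = sequence[start:start + l]
        key = "".join(lmer[p] for p in positions)  # the projected key
        buckets.setdefault(key, []).append(start)  # remember the starting index
    return buckets

buckets = project_buckets("CAGTGACGTAATCA", l=3, k=2)
```

In the full algorithm this projection is repeated with fresh random positions, and buckets that collect many l-mers are refined into motif candidates.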
According to Figure 3, besides supplying the parameters of the RP algorithm, we need to
input the numbers of cores and batches. Since the R programming language loads data into
random access memory (RAM), we define the number of batches so that each batch takes less
than 20% of the total memory capacity. Furthermore, Steps 1 to 3 and Steps 6 to 8
illustrated in Figure 3 are the same as in the RP algorithm in standalone mode, whereas in
Steps 4 and 5 the tasks are conducted in parallel by using pbdMPI commands. An important
part of these steps is the rule for dividing the sequence into batches. This rule must
preserve every possible motif in the sequence even though the sequence has been split into
several batches. In this case we implement (1) and (2):
index_s^i = (i - 1) * floor(L/b) - l + 2    (1)

index_e^i = i * floor(L/b)    (2)
where index_s^i and index_e^i are the starting and ending indices for cutting batch i, and
L, b, and l are the length of the sequence, the number of batches, and the length of the
pattern, respectively. It should be noted that (1) applies from i = 2 onward (the first
batch starts at index 1), and the last batch ends at index L. For example, given the
sequence S=CAGTGACGTAATCA and a pattern length of 3, according to (1) and (2) we obtain the
following batches: S1=CAGT, S2=GTGACG, and S3=CGTAATCA. By following how the random
projection algorithm generates k-mers, we obtain the same k-mers from the batches as from
the unsplit sequence: CAG, AGT, GTG, TGA, GAC, ACG, CGT, GTA, TAA, AAT, ATC, and TCA. This
means that even though the sequence has been split and processed by different cores, the
results of RP and parallel random projection are identical.
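The splitting rule can be checked directly: consecutive batches overlap by l - 1 characters, so concatenating the k-mers of all batches reproduces exactly the k-mers of the unsplit sequence. A small sketch of (1) and (2) on the example above (illustrative Python; the helper names are ours):

```python
def split_batches(seq, b, l):
    """Split `seq` into b overlapping batches per (1) and (2): consecutive
    batches overlap by l-1 characters so no l-mer is lost at a boundary."""
    L = len(seq)
    size = L // b  # floor(L/b)
    batches = []
    for i in range(1, b + 1):
        start = 1 if i == 1 else (i - 1) * size - l + 2  # eq. (1), 1-based
        end = L if i == b else i * size                  # eq. (2)
        batches.append(seq[start - 1:end])
    return batches

def kmers(s, l):
    """All overlapping substrings of length l."""
    return [s[j:j + l] for j in range(len(s) - l + 1)]

S = "CAGTGACGTAATCA"
parts = split_batches(S, b=3, l=3)
# parts == ['CAGT', 'GTGACG', 'CGTAATCA']
assert [k for p in parts for k in kmers(p, 3)] == kmers(S, 3)
```

This is exactly the property the paper relies on: each batch can be projected on a different core without losing or duplicating any k-mer.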
Figure 3. The pseudo code of parallel random projection with pbdMPI
4. Experimental Study
4.1. Data Gathering
The data used in this study were obtained from research in [17]. The data can be downloaded
from the University of Washington Computer Science and Engineering site at
http://bio.cs.washington.edu/research/download. In total, there are 52 datasets of DNA
sequences derived from four species: 6 from the Drosophila melanogaster sequence, 26 from
human sequences, 12 from mouse sequences, and 8 from the Saccharomyces cerevisiae sequence.
Each data file contains between 1 and 35 sequences, and each sequence has a length ranging
from 500 to 3000 base pairs.
In this case, we use only four datasets as input data: the dm01r.fasta and dm05r.fasta
files, which are DNA sequences of Drosophila melanogaster; hm01r.fasta, derived from a
human sequence; and mus04r.fasta, a mouse DNA sequence. The dm01r.fasta file contains 4 DNA
sequences with a total length of 6000, while the dm05r.fasta file consists of 3 DNA
sequences with a total length of 7500. The hm01r.fasta and mus04r.fasta files have total
DNA sequence lengths of 36000 and 7000, respectively.
4.2. Experimental Design
In this study we conduct simulations in two modes: standalone and parallel computing
(i.e., multicore). Each mode uses all the datasets mentioned previously: dm01r.fasta,
dm05r.fasta, hm01r.fasta, and mus04r.fasta. Furthermore, in accordance with the algorithm,
some parameters should be assigned, as follows: the length of motif and number of mismatches (l, d),
threshold values (θ), and the number of repetitions in each simulation (m). Specific
variables for parallel computing with pbdMPI (i.e., the numbers of batches (b) and cores (c))
are also assigned. In total, 1560 experiments were conducted across both scenarios
(120 for standalone and 1440 for parallel computing) by assigning all possible combinations
of the following parameter values: (l, d)={(6,2), (7,2), (8,3)}, θ={3,4}, m={1,2,3,4,5},
b={10,50}, and c={1,2,…,6}.
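The experiment count follows from the Cartesian product of these parameter values over the four datasets; a quick sanity check (illustrative Python):

```python
from itertools import product

# Parameter values from the experimental design
datasets = ["dm01r", "dm05r", "hm01r", "mus04r"]
ld = [(6, 2), (7, 2), (8, 3)]   # motif length and mismatches
theta = [3, 4]                  # threshold values
m = [1, 2, 3, 4, 5]             # repetitions
b = [10, 50]                    # numbers of batches (parallel only)
c = [1, 2, 3, 4, 5, 6]          # numbers of cores (parallel only)

standalone = len(list(product(datasets, ld, theta, m)))      # 120 runs
parallel = len(list(product(datasets, ld, theta, m, b, c)))  # 1440 runs
total = standalone + parallel                                # 1560 runs
```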
5. Results and Analysis
Owing to limited space, in this section we illustrate the results and their analysis for
particular datasets only. For example, in standalone mode, a comparison of the number of
motifs found according to m, θ, and (l, d) on the dm01r dataset is shown in Figure 4. It can
be seen that a higher number of mismatches yields a higher number of motifs found.
Figure 4. The comparison of the numbers of motifs found on the dm01r dataset
Furthermore, in standalone mode, we can compare the computational cost against the length
of the DNA sequence for different (l, d) and θ, as shown in Figure 5. It is obvious that
longer DNA sequences incur higher computational costs. It should be noted that these
lengths also identify the datasets used in the experiments; for example, dm01r has a length
of 6000.
Figure 5. The comparison between the computational cost and datasets/length of datasets
In parallel computing mode, Figure 6 shows the comparison between the computational costs
and the numbers of cores for the dm01r dataset with (l, d)=(6,2), θ=3, and b=10. It can be
seen that the proposed model is successful since, generally speaking, the computational
cost can be reduced by adding cores.
Figure 6. The comparison between computational cost and number of cores on the
dm01r dataset
To reinforce the analysis, Figure 7 shows a comparison between computational cost and
number of cores for different (l, d) and m, with the same θ (i.e., 3) and b (i.e., 10). It
can be seen that the computation in standalone mode (i.e., c=1) at (l, d)=(8,3) with m=5
took 26.98 seconds, while with 2 cores it took only 6.3 seconds; that is, standalone mode
needs about four times longer than 2 cores. Moreover, standalone mode took more than ten
times longer than parallel computing with 3 cores (i.e., 2.52 seconds). With 6 cores, the
computation is around 34 times faster than in standalone mode. It is thus clear that the
proposed model is much faster than the standalone mode.
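The speedups quoted above are plain ratios of the measured wall-clock times on dm01r with (l, d)=(8,3) and m=5:

```python
# Measured wall-clock times from Figure 7 (dm01r, (l,d)=(8,3), m=5)
t_standalone = 26.98  # seconds with c=1
t_2cores = 6.30       # seconds with c=2
t_3cores = 2.52       # seconds with c=3

speedup_2 = t_standalone / t_2cores  # about 4.3x with 2 cores
speedup_3 = t_standalone / t_3cores  # about 10.7x with 3 cores
```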
Figure 7. The comparison on dm01r with θ=3 and b=10
We also compared the computational time with experimental results from the previous
research [1], even though the data in the files dm01r and mus04r differ between the two
studies. The file dm01r contains 4 DNA sequences with a length of 1500 each, while in the
research [1] the dataset contains 5 DNA sequences. In the file mus04r the number of
DNA sequences used in this experiment is 7, with a length of 1000 each, while only 6
sequences were used in the previous research. The comparison can be seen in Table 1. It can
be seen that all experiments conducted in this research are faster than those in [1]. It
should be noted that the research in [1] was performed in standalone mode.
Table 1. Comparison with the Other Research [1]

File     Number of    Length of each    (l,d)    Time in [1] (s)    Time using 6 cores in
         sequences    sequence                                      this research (s)
dm01r    4            1500              (6,2)    1.81               0.32
dm01r    4            1500              (7,2)    5.804              0.45
dm01r    4            1500              (8,3)    39.55              0.42
dm05r    3            2500              (6,2)    2.574              0.48
dm05r    3            2500              (7,2)    7.268              0.51
dm05r    3            2500              (8,3)    52.537             0.52
hm01r    18           2000              (6,2)    2.73               1.75
hm01r    18           2000              (7,2)    7.708              1.65
hm01r    18           2000              (8,3)    50.135             1.71
mus04r   7            1000              (6,2)    1.4                0.52
mus04r   7            1000              (7,2)    5.08               0.5
mus04r   7            1000              (8,3)    35.45              0.52
6. Conclusion
The main contributions of this research are as follows: (i) proposing a computational model
that modifies the random projection algorithm, called parallel random projection, for
dealing with the planted motif search by utilizing R high performance computing (i.e., the
pbdMPI package); and (ii) implementing the proposed model and then validating it for
finding motifs in DNA sequences. According to the experiments, we can state that the
proposed model is able to reduce the computational cost significantly. Moreover, a
comparison with the previous study has been made, and it showed that the proposed approach
produced better results in terms of computational cost.
In the future, we plan to improve the model by using Big Data platforms, such as the
MapReduce programming model on Apache Hadoop [18] and Resilient Distributed Datasets on
Apache Spark [19]. Moreover, different tools for parallel computing, e.g., the foreach
package [20], can be used, as in the study in [21]. Different bioinformatics tasks can be
applied to test the proposed model as well, such as prediction of cancer [22], kidney
disease [23], and sleep disorders [24]. Additionally, another method that could be applied
to this problem is Knuth-Morris-Pratt [25, 26].
References
[1] Ashraf F Bin, Abir AI, Salekin MS, et al. RPPMD (Randomly projected possible motif discovery): An
efficient bucketing method for finding DNA planted Motif. In: ECCE 2017 - International Conference
on Electrical, Computer and Communication Engineering. 2017: 509–513.
[2] Reece JB, Urry LA, Cain ML, et al. Campbell Biology, 10th. Pearson: Boston, MA, 2014.
[3] Aluru S. Handbook of computational molecular biology. Chapman and Hall/CRC, 2005.
[4] Rajasekaran S, Balla S, Huang C-H, et al. High-performance exact algorithms for motif search.
Journal of clinical monitoring and computing. 2005; 19(4–5): 319–328.
[5] Pal S, Rajasekaran S. Improved algorithms for finding edit distance based motifs. In: Bioinformatics
and Biomedicine (BIBM), 2015 IEEE International Conference on. 2015: 537–542.
[6] Martinez HM. An efficient method for finding repeats in molecular sequences. Nucleic acids research.
1983; 11(13): 4629–4634.
[7] Nicolae M, Rajasekaran S. Efficient sequential and parallel algorithms for planted motif search. BMC
bioinformatics. 2014; 15(1): 34.
[8] Buhler J, Tompa M. Finding motifs using random projections. Journal of computational biology. 2002;
9(2): 225–242.
[9] Jones NC, Pevzner PA, Pevzner PA. An introduction to bioinformatics algorithms. MIT press, 2004.
[10] Ihaka R, Gentleman R. R: a language for data analysis and graphics. Journal of computational and
graphical statistics. 1996; 5(3): 299–314.
[11] Ostrouchov G, Chen W-C, Schmidt D, et al. Programming with big data in R. URL http://r-pbd.org.
[12] Clemente JB, Adorna HN. Some Improvements of Parallel Random Projection for Finding Planted
(l, d)-Motifs. In: Theory and Practice of Computation. Springer, 2013: 64–81.
[13] Hussein AM, Rashid NA, Abdulah R. Parallelisation of maximal patterns finding algorithm in biological
sequences. In: Computer and Information Sciences (ICCOINS), 2016 3rd International Conference
on. 2016: 227–232.
[14] Mohanty S, Sahoo B, Balouki A, et al. Enhanced Planted (ℓ, D) Motif Search Prune Algorithm for
Parallel Environment. Journal of Theoretical and Applied Information Technology. 2016; 89(2):
296–306.
[15] Jun L, Zhuo Z, Juan M, et al. Multi-pattern Matching Methods Based on Numerical Computation.
Indonesian Journal of Electrical Engineering and Computer Science. 2013; 11(3): 1497–1505.
[16] He J, Guo H, Hu D. A Neighbor-finding Algorithm Involving the Application of SNAM in Binary-Image
Representation. TELKOMNIKA (Telecommunication Computing Electronics and Control). 2015;
13(4): 1319–1329.
[17] Tompa M, Li N, Bailey TL, et al. Assessing computational tools for the discovery of transcription
factor binding sites. Nature biotechnology. 2005; 23(1): 137.
[18] White T. Hadoop: The Definitive Guide. O'Reilly Media, Inc., 2012.
[19] Zaharia M, Xin RS, Wendell P, et al. Apache spark: a unified engine for big data processing.
Communications of the ACM. 2016; 59(11): 56–65.
[20] Analytics R, Weston S. foreach: Foreach looping construct for R. R package version. 2013;
1(1): 2013.
[21] Riza LS, Utama JA, Putra SM, et al. Parallel Exponential Smoothing Using the Bootstrap Method in R
for Forecasting Asteroid’s Orbital Elements. Pertanika Journal of Science & Technology. 2018; 26(1):
441–462.
[22] Michiels S, Koscielny S, Hill C. Prediction of cancer outcome with microarrays: a multiple random
validation strategy. The Lancet. 2005; 365(9458): 488–492.
[23] Alasker H, Alharkan S, Alharkan W, et al. Detection of kidney disease using various intelligent
classifiers. In: Science in Information Technology (ICSITech), 2017 3rd International Conference on.
Bandung, 2017: 681–684.
[24] Riza LS, Pradini M, Rahman EF, et al. An Expert System for Diagnosis of Sleep Disorder Using
Fuzzy Rule-Based Classification Systems. In: IOP Conference Series: Materials Science and
Engineering. Bandung, Indonesia, 2017: 12011.
[25] Knuth DE, Morris Jr JH, Pratt VR. Fast pattern matching in strings. SIAM journal on computing. 1977;
6(2): 323–350.
[26] Riza LS, Firmansyah MI, Siregar H, et al. Determining Strategies on Playing Badminton using the
Knuth-Morris-Pratt Algorithm. TELKOMNIKA Telecommunication Computing Electronics and Control.
2018; 16(6):2763-2770.