Abstract
Digital signal processing, digital control systems, telecommunications, and audio/video processing are important applications of VLSI. With advances in VLSI, the design and implementation of DSP systems demand low power, energy efficiency, portability, reliability, and miniaturization. In digital signal processing, linear time-invariant systems are an important sub-class of systems and are the heart and soul of DSP.
Linear and circular convolution are fundamental computations in many application areas, and convolution with very long sequences is often required. The discrete linear convolution of a finite-length sequence with a long or infinite-length sequence can be computed using circular convolution via the Overlap-Add and Overlap-Save methods. In real-time signal processing, circular convolution is much more efficient than linear convolution: it is simpler to compute and produces fewer output samples. Moreover, linear convolution can be obtained from circular convolution. In this paper, both linear and circular convolution are implemented using a Vedic multiplier architecture based on the vertical-and-crosswise Urdhva-Tiryagbhyam algorithm. The implementation uses a hierarchical design approach, which improves computational speed while reducing power and minimizing hardware resources and area. Coding is done in Verilog HDL; simulation and synthesis are performed using a Xilinx FPGA.
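The abstract's claim that linear convolution can be obtained from circular convolution can be illustrated with a short software sketch (illustrative only; the paper itself implements these operations in Verilog on an FPGA). Zero-padding both sequences to length N1 + N2 - 1 makes the circular result equal the linear one:

```python
def circular_convolve(x, h):
    """Circular convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

def linear_convolve(x, h):
    """Direct linear convolution; output length is len(x) + len(h) - 1."""
    L = len(x) + len(h) - 1
    y = [0] * L
    for n in range(L):
        for m in range(len(x)):
            if 0 <= n - m < len(h):
                y[n] += x[m] * h[n - m]
    return y

def linear_via_circular(x, h):
    """Zero-pad both sequences to length len(x) + len(h) - 1, then
    circularly convolve: the result equals the linear convolution."""
    L = len(x) + len(h) - 1
    xp = x + [0] * (L - len(x))
    hp = h + [0] * (L - len(h))
    return circular_convolve(xp, hp)
```

This zero-padding identity is also what makes the Overlap-Add and Overlap-Save methods work for long input sequences.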
Keywords: Linear and circular convolution, Urdhva-Tiryagbhyam, carry-save multiplier, Overlap-Add/Save, Verilog HDL.
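The vertical-and-crosswise Urdhva-Tiryagbhyam pattern named above can be modeled at the digit level in software. This is only an illustration of the cross-product-and-carry structure; the paper's actual design is a Verilog hardware architecture:

```python
# Illustrative digit-level model of Urdhva-Tiryagbhyam multiplication
# (not the paper's Verilog implementation).

def urdhva_tiryagbhyam(a_digits, b_digits):
    """Vertical-and-crosswise multiplication of two equal-length digit
    sequences, most-significant digit first. Column k collects every
    cross product a[i] * b[j] with i + j == k; carries are propagated
    once through the columns, mirroring the parallel hardware structure."""
    n = len(a_digits)
    a = a_digits[::-1]  # least-significant digit first
    b = b_digits[::-1]
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a[i] * b[j]
    digits, carry = [], 0
    for c in cols:
        total = c + carry
        digits.append(total % 10)
        carry = total // 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return digits[::-1]  # most-significant digit first

# 23 x 41 = 943; all cross products for one output column are formed
# in parallel in hardware, which is the source of the speed advantage.
```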
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This paper proposes modeling and identification of dynamical systems in the delta domain using a neural network. Properties of the delta operator are exploited, such as greater numerical robustness in computation, superior coefficient representation under finite-word-length implementation, and well-ensured numerical conditioning at high sampling frequencies. To formulate the identification scheme, the delta operator model is recast into a realizable neural network structure using the properties of the inverse delta operator.
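The conditioning advantage of the delta operator at high sampling rates can be illustrated with a minimal sketch (an assumption-level illustration using the standard definition delta x_k = (x_{k+1} - x_k) / T, not the paper's identification scheme). As the sampling period T shrinks, the delta operator approaches the derivative, unlike the forward-shift operator whose poles crowd toward z = 1:

```python
import math

def delta_op(x, T):
    """Apply the delta operator, (x[k+1] - x[k]) / T, to a sampled
    sequence x with sampling period T."""
    return [(x[k + 1] - x[k]) / T for k in range(len(x) - 1)]

# Sample a 5 Hz sinusoid at 1 kHz and apply the delta operator.
T = 1e-3
x = [math.sin(2 * math.pi * 5 * k * T) for k in range(1000)]
dx = delta_op(x, T)
# dx[0] approximates the true derivative 2*pi*5*cos(0) = 10*pi at t = 0,
# showing the operator's convergence to the continuous-time derivative.
```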
Digital Signal Processing [ECEG-3171], Ch1_L03 (Rediet Moges)
This Digital Signal Processing lecture material is the property of the author (Rediet M.). It is not for publication, nor is it to be sold or reproduced.
Digital Signal Processing [ECEG-3171], Ch1_L02 (Rediet Moges)
This Digital Signal Processing lecture material is the property of the author (Rediet M.). It is not for publication, nor is it to be sold or reproduced.
This document provides an introduction and overview of algorithms and analysis of algorithms. It begins with definitions of algorithms and what constitutes an algorithm. It then discusses analyzing algorithms to determine their efficiency, including analyzing worst-case, average-case, and best-case scenarios. It introduces asymptotic notation used to describe algorithm running times at different scales, including big-O notation. It also discusses computational models and analyzing algorithms in terms of counting the basic operations they perform. The document is presented as a lecture on algorithms with definitions, examples, and outlines of key topics to cover.
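The operation-counting model of analysis described in this summary can be sketched with a tiny example (illustrative, not taken from the lecture): counting the basic comparisons a linear search performs exposes its best-case and worst-case behaviour, which asymptotic notation then summarizes as O(n).

```python
def linear_search_comparisons(arr, target):
    """Return (index_or_None, number_of_comparisons_performed)."""
    comparisons = 0
    for i, item in enumerate(arr):
        comparisons += 1  # one basic operation per element examined
        if item == target:
            return i, comparisons
    return None, comparisons

data = [7, 3, 9, 1, 5]
# Best case: the target is the first element, so 1 comparison.
# Worst case: the target is absent, so len(data) comparisons, i.e. O(n).
```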
Digital Signal Processing [ECEG-3171], Ch1_L07 (Rediet Moges)
This document is a lecture on discrete-time convolutions. It discusses various methods for processing discrete-time samples, including block processing and sample processing. Block processing deals with finite blocks of data and applications like FIR filtering and DFT computations. Sample processing is used in real-time applications like filtering and control systems. The document presents different equivalent forms of the convolution operation, including direct form, LTI form, matrix form, and overlap-add. It provides examples of computing convolution using these different forms. The document concludes with exercises involving computing convolutions using different methods.
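The overlap-add form of block convolution mentioned in this summary can be sketched as follows (illustrative Python, not the lecture's own material): each fixed-length block of the input is convolved with the filter, and the tails of adjacent block outputs are added where they overlap.

```python
def convolve_direct(x, h):
    """Reference linear convolution; output length len(x) + len(h) - 1."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for m in range(len(x)):
            if 0 <= n - m < len(h):
                y[n] += x[m] * h[n - m]
    return y

def overlap_add(x, h, block_len):
    """Overlap-Add: split x into fixed-length blocks, convolve each block
    with h, and add the overlapping block-output tails into the result."""
    y = [0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        yb = convolve_direct(block, h)  # in practice done via the FFT
        for n, v in enumerate(yb):
            y[start + n] += v
    return y
```

Block processing of this kind is what makes real-time FIR filtering of arbitrarily long inputs feasible with bounded memory.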
This document discusses fast Fourier transform (FFT) algorithms. It provides an overview of FFTs and how they are more efficient than direct computation of the discrete Fourier transform (DFT). It describes decimation-in-time and decimation-in-frequency FFT algorithms and how they exploit properties of the DFT. The document also gives an example of calculating an 8-point DFT using the radix-2 decimation-in-frequency algorithm.
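The FFT's advantage over the direct DFT described in this summary can be sketched with a recursive radix-2 decimation-in-time variant (the summary's worked example uses decimation-in-frequency; this decimation-in-time sketch is illustrative): O(N log N) butterfly operations replace the O(N^2) direct sums.

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT, used here as a reference."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; N must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])  # DFT of even-indexed samples
    odd = fft(x[1::2])   # DFT of odd-indexed samples
    out = [0] * N
    for k in range(N // 2):
        tw = cmath.exp(-2j * cmath.pi * k / N) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + N // 2] = even[k] - tw
    return out
```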
Towards a stable definition of Algorithmic Randomness (Hector Zenil)
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov complexity of a string s. We present a summary of the approach we've developed to overcome the problem by calculating its algorithmic probability and evaluating the algorithmic complexity via the coding theorem, thereby providing a stable framework for Kolmogorov complexity even for short strings. We also show that reasonable formalisms produce reasonable complexity classifications.
Information Content of Complex Networks (Hector Zenil)
This short talk given in Stockholm, Sweden, explains how algorithmic complexity measures, notably Kolmogorov complexity approximated both by lossless compression algorithms and the Block Decomposition Method (BDM) are capable of characterizing graphs and networks by some of their group-theoretic and topological properties, notably graph automorphism group size and clustering coefficients of complex networks. The method distinguished between models of networks such as regular, random, small-world and scale-free.
COMPUTATIONAL PERFORMANCE OF QUANTUM PHASE ESTIMATION ALGORITHM (csitconf)
This paper discusses a quantum computation problem. Many of the features that make quantum computation superior to classical computation can be attributed to the quantum coherence effect, which depends on the phase of the quantum coherent state. The quantum Fourier transform algorithm, one of the most commonly used quantum algorithms, is introduced, and one of its most important applications, phase estimation of a quantum state based on the quantum Fourier transform, is presented in detail. The flow of the phase estimation algorithm and the quantum circuit model are shown, the error of the output phase value and the measurement probability are analysed, and the probability distribution of the measured phase value is presented together with a discussion of computational efficiency.
Fractal dimension versus Computational Complexity (Hector Zenil)
We investigate connections and tradeoffs between two important complexity measures: fractal dimension and computational (time) complexity. We report exciting results applied to space-time diagrams of small Turing machines with precise mathematical relations and formal conjectures connecting these measures. The preprint of the paper is available at: http://arxiv.org/abs/1309.1779
Fractal Dimension of Space-time Diagrams and the Runtime Complexity of Small ... (Hector Zenil)
Complexity measures are designed to capture complex behaviour and to quantify how complex that particular behaviour is. If a phenomenon is genuinely complex, it does not suddenly become simple merely by translating it to a different setting or framework with a different complexity value. It is in this sense that we expect different complexity measures, from possibly entirely different fields, to be related to each other. This talk presents our work on a beautiful connection between the fractal dimension of the space-time diagrams of Turing machines and their time complexity. Presented at Machines, Computations and Universality (MCU) 2013, Zurich, Switzerland.
Review of two methods for identifying time-varying systems, with simulation and a report (پروژه مارکت)
A review of two methods for identifying time-varying systems, with detailed yet simple and clear explanations, a complete description of the relations and notation, together with examples, MATLAB simulation, reference files and reference papers.
Method 1: identification of a system with time-varying parameters using Least Square
Method 2: identification of a dynamical system using artificial neural networks
Suitable for presentation as a term paper or project for a system identification course.
Includes MATLAB simulation and a 20-page report.
https://www.prjmarket.com/product/%d8%a8%d8%b1%d8%b1%d8%b3%db%8c-%d8%af%d9%88-%d8%b1%d9%88%d8%b4-%d8%b4%d9%86%d8%a7%d8%b3%d8%a7%db%8c%db%8c-%d8%b3%db%8c%d8%b3%d8%aa%d9%85-%d9%87%d8%a7%db%8c-%d9%85%d8%aa%d8%ba%db%8c%d8%b1-%d8%a8%d8%a7/
This document proposes a dynamic clustering algorithm using fuzzy c-means clustering. It begins with an introduction to fuzzy c-means clustering and its limitations when the chosen number of clusters is incorrect. It then proposes a dynamic clustering algorithm that starts with a fixed number of clusters but can automatically increase the number of clusters during iterations based on the data, improving purity. The algorithm is described and examples are provided to illustrate its effectiveness at forming clear clusters after iterations and determining when clustering has terminated.
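The fixed-cluster fuzzy c-means baseline that the proposed dynamic algorithm builds on can be sketched as follows (illustrative Python using the standard membership and centroid updates; the paper's dynamic cluster-splitting step is not reproduced here):

```python
def fuzzy_c_means(points, iters=50, m=2.0):
    """Plain two-cluster fuzzy c-means on 1-D data. Alternates the
    standard soft-membership update with the membership-weighted
    centroid update; m is the fuzziness exponent."""
    centers = [min(points), max(points)]  # deterministic initialisation
    c = len(centers)
    for _ in range(iters):
        # membership of each point in each cluster (soft assignment)
        u = []
        for x in points:
            d = [abs(x - ci) + 1e-12 for ci in centers]  # guard zero dist
            u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for i in range(c)])
        # centers move to the membership-weighted mean of the data
        centers = [
            sum(u[j][i] ** m * points[j] for j in range(len(points)))
            / sum(u[j][i] ** m for j in range(len(points)))
            for i in range(c)
        ]
    return sorted(centers)

centers = fuzzy_c_means([0.0, 0.1, 0.2, 9.9, 10.0, 10.1])
# the two centers settle near the two data clumps (about 0.1 and 10.0)
```

With a mis-chosen number of clusters this baseline simply forces the data into the given partition, which is the limitation the dynamic variant addresses by growing the cluster count during the iterations.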
The document discusses Bayesian neural networks and related topics. It covers Bayesian neural networks, stochastic neural networks, variational autoencoders, and modeling prediction uncertainty in neural networks. Key points include using Bayesian techniques like MCMC and variational inference to place distributions over the weights of neural networks, modeling both model parameters and predictions as distributions, and how this allows capturing uncertainty in the network's predictions.
Universal Approximation Property via Quantum Feature Maps
----
The quantum Hilbert space can be used as a quantum-enhanced feature space in machine learning (ML) via the quantum feature map to encode classical data into quantum states. We prove the ability to approximate any continuous function with optimal approximation rate via quantum ML models in typical quantum feature maps.
---
Contributed talk at Quantum Techniques in Machine Learning 2021, Tokyo, November 8-12 2021.
By Quoc Hoan Tran, Takahiro Goto and Kohei Nakajima
Lecture 3: Image Sampling and Quantization (VARUN KUMAR)
This document discusses image sampling and quantization. It begins by covering 2D sampling of images, including the spectrum of sampled images and the Nyquist criteria for proper reconstruction. It then covers quantization, describing how continuous variables are mapped to discrete levels. The document focuses on Lloyd-Max quantization, which minimizes mean square error for a given number of quantization levels. It provides equations for calculating optimal decision levels and reconstruction levels to design an optimum quantizer based on the probability density function of the signal. Common probability densities used for image data, such as Gaussian, Laplacian, and uniform, are also covered.
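The Lloyd-Max design loop described in this summary can be sketched on empirical samples (illustrative; the lecture derives the continuous pdf-based equations): reconstruction levels are set to the centroids of their decision cells, and decision levels to the midpoints between adjacent reconstruction levels, iterated to convergence.

```python
def lloyd_max(samples, levels, iters=100):
    """Lloyd iteration on empirical data: decision boundaries are the
    midpoints of adjacent reconstruction levels, and each reconstruction
    level is the centroid (mean) of the samples in its cell."""
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    # start from a uniform quantizer over the data range
    recon = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        bounds = [(recon[i] + recon[i + 1]) / 2 for i in range(levels - 1)]
        cells = [[] for _ in range(levels)]
        for s in samples:
            cells[sum(s > b for b in bounds)].append(s)
        recon = [sum(c) / len(c) if c else recon[i]
                 for i, c in enumerate(cells)]
    return recon

samples = [k / 100 for k in range(100)]  # uniformly spread samples on [0, 1)
recon_levels = lloyd_max(samples, 2)
# for a uniform density the optimal quantizer is itself uniform, so the
# two reconstruction levels land near 0.245 and 0.745 for these samples
```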
This document compares different methods for performing least squares calculations, which are commonly used in statistical analysis. It shows that using the crossprod and Matrix functions from the Matrix package provides faster and more numerically stable solutions than the naive matrix multiplication and inverse approach, especially for large, sparse model matrices. The Matrix package classes also allow reusing factorizations to solve multiple least squares problems more efficiently.
This presentation describes how different wireless sensors are connected so that data can be shared between them; the information provided by this sensing environment is accessed by the user through the Internet. The various topologies and protocols involved are covered in this ppt.
TCP & UDP Streaming Comparison and a Study on DCCP & SCTP Protocols (Peter SHIN)
As graduate student work, I compared the performance of TCP and UDP media streaming with empirical results. I also researched different attempts to make UDP more reliable, and why their progress has been slower than one might expect.
This document is a project report submitted by four students - Apeksha A. Jain, Rohit M. Kulkarni, Soham C. Wadekar, and Kedar D. Wagholikar - for their Bachelor of Engineering degree. The report details a project on dynamic routing of packets in wireless sensor networks conducted under the guidance of Prof. G.R. Pathak. The project aims to implement clustering in a wireless sensor network and analyze the effects of increasing cluster size on cluster head energy. It further aims to implement an energy efficient dynamic algorithm to re-elect cluster heads periodically in order to save energy. The report presents the background, problem statement, project planning, analysis, design,
Wireless sensor networks (WSN) can be used in precision agriculture to monitor various parameters like temperature, humidity, and soil conditions. The network is made up of small sensor nodes called motes that self-organize to communicate sensory data to a gateway. This allows farmers to selectively harvest crops, monitor crop health over time, and view sensor measurements on a web application for historical analysis and geostatistical modeling. However, the technology faces limitations due to the small size and limited resources of the sensor nodes.
This document presents a fault management mechanism for wireless sensor networks. It discusses fault detection and diagnosis through self-detection by sensor nodes and active detection by cell managers. It also discusses fault recovery through waking sleeping nodes, moving mobile nodes, or selecting a secondary cell manager. The document then describes the network and fault models, and presents an algorithm for faulty sensor detection based on sensor measurements and designating sensor statuses as good, low quality, faulty, or good detected.
India has an immense diversity of cultures, religions, languages, and traditions spread across its varied geography. Some key aspects that represent India's culture include:
- Hinduism, Islam, Christianity, Sikhism, Buddhism and Jainism coexist alongside numerous regional traditions and tribal religions.
- Hindi is the national language but India has over 1600 dialects and 22 official languages spoken.
- Traditional Indian cuisine varies regionally but often involves eating with the right hand and using flatbread to scoop curries. Meals usually end with yogurt and rice.
- India has numerous festivals celebrated differently in various parts of the country, from Holi to Diwali to regional harvest festivals.
- Clothing, music
5. Convolution and correlation of discrete time signals (MdFazleRabbi18)
This document discusses convolution and correlation of discrete time signals. It defines convolution as a mathematical way of combining two signals to form a third signal, which is equivalent to finite impulse response filtering. Convolution relates the input, output, and impulse response of a linear time-invariant system. The document also provides examples of discrete linear convolution and periodic convolution. It then defines correlation as a measure of similarity between signals, discussing cross-correlation and auto-correlation, and providing examples of calculating each.
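The cross-correlation and auto-correlation defined in this summary can be sketched directly from the lag-sum definition (illustrative Python, not the document's own examples): r_xy[l] = sum over n of x[n] * y[n - l], and autocorrelation is a signal's cross-correlation with itself, peaking at zero lag.

```python
def cross_correlation(x, y):
    """r_xy[l] = sum_n x[n] * y[n - l], evaluated for every lag l
    from -(len(y) - 1) to len(x) - 1."""
    r = []
    for l in range(-(len(y) - 1), len(x)):
        s = 0
        for n in range(len(x)):
            if 0 <= n - l < len(y):
                s += x[n] * y[n - l]
        r.append(s)
    return r

def auto_correlation(x):
    """Autocorrelation: the cross-correlation of a signal with itself.
    It is symmetric and attains its maximum at zero lag."""
    return cross_correlation(x, x)
```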
This document provides an introduction to digital signal processing. It discusses how signals can be represented digitally by sampling analog signals and converting them to sequences of numbers. This allows signals to be processed using digital processors. Some key benefits of digital signal processing include accuracy, repeatability, flexibility, and easy implementation of nonlinear and time-varying operations in software. The document covers topics such as sampling, analog-to-digital conversion, reconstruction, discrete-time signals and systems, linearity, time-invariance, and examples of basic sequences like sinusoidal, exponential, and geometric sequences.
Super-resolution reconstruction is a method for reconstructing higher-resolution images from a set of low-resolution observations. The sub-pixel differences among different observations of the same scene make it possible to create higher-resolution images of better quality. In the last thirty years, many methods for creating high-resolution images have been proposed; however, hardware implementations of such methods are limited. Wiener filter design is one of the techniques we will use initially for this process, and it involves matrix inversion. A novel method for matrix inversion has been proposed in the report. QR decomposition, computed using Givens rotations, will be the computational algorithm used.
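The QR-decomposition-via-Givens-rotations step mentioned in the report can be sketched as follows (an illustrative software model, not the report's proposed hardware implementation): each Givens rotation is a plane rotation that zeroes one sub-diagonal entry of the matrix while the rotations are accumulated into Q.

```python
import math

def givens_qr(A):
    """QR decomposition of an m x n matrix (m >= n) by Givens rotations.
    Returns Q (m x m, orthogonal) and R (m x n, upper triangular) with
    A = Q R. Each rotation zeroes one sub-diagonal entry of R."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(m)] for i in range(m)]
    for j in range(n):
        for i in range(m - 1, j, -1):  # zero column j from the bottom up
            a, b = R[i - 1][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            for k in range(n):        # apply the rotation to rows of R
                t1, t2 = R[i - 1][k], R[i][k]
                R[i - 1][k] = c * t1 + s * t2
                R[i][k] = -s * t1 + c * t2
            for k in range(m):        # accumulate Q = G1^T G2^T ...
                t1, t2 = Q[k][i - 1], Q[k][i]
                Q[k][i - 1] = c * t1 + s * t2
                Q[k][i] = -s * t1 + c * t2
    return Q, R

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
Q, R = givens_qr(A)
```

Givens rotations are attractive for hardware because each rotation touches only two rows and maps naturally onto CORDIC-style units.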
Towards a stable definition of Algorithmic RandomnessHector Zenil
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov complexity of a string s. We present a summary of the approach we've developed to overcome the problem by calculating its algorithmic probability and evaluating the algorithmic complexity via the coding theorem, thereby providing a stable framework for Kolmogorov complexity even for short strings. We also show that reasonable formalisms produce reasonable complexity classifications.
Information Content of Complex NetworksHector Zenil
This short talk given in Stockholm, Sweden, explains how algorithmic complexity measures, notably Kolmogorov complexity approximated both by lossless compression algorithms and the Block Decomposition Method (BDM) are capable of characterizing graphs and networks by some of their group-theoretic and topological properties, notably graph automorphism group size and clustering coefficients of complex networks. The method distinguished between models of networks such as regular, random, small-world and scale-free.
COMPUTATIONAL PERFORMANCE OF QUANTUM PHASE ESTIMATION ALGORITHMcsitconf
A quantum computation problem is discussed in this paper. Many new features that make
quantum computation superior to classical computation can be attributed to quantum coherence
effect, which depends on the phase of quantum coherent state. Quantum Fourier transform
algorithm, the most commonly used algorithm, is introduced. And one of its most important
applications, phase estimation of quantum state based on quantum Fourier transform, is
presented in details. The flow of phase estimation algorithm and the quantum circuit model are
shown. And the error of the output phase value, as well as the probability of measurement, is
analysed. The probability distribution of the measuring result of phase value is presented and
the computational efficiency is discussed.
COMPUTATIONAL PERFORMANCE OF QUANTUM PHASE ESTIMATION ALGORITHMcscpconf
A quantum computation problem is discussed in this paper. Many new features that make quantum computation superior to classical computation can be attributed to quantum coherence
effect, which depends on the phase of quantum coherent state. Quantum Fourier transform algorithm, the most commonly used algorithm, is introduced. And one of its most important
applications, phase estimation of quantum state based on quantum Fourier transform, is presented in details. The flow of phase estimation algorithm and the quantum circuit model are
shown. And the error of the output phase value, as well as the probability of measurement, is analysed. The probability distribution of the measuring result of phase value is presented and the computational efficiency is discussed.
Fractal dimension versus Computational ComplexityHector Zenil
We investigate connections and tradeoffs between two important complexity measures: fractal dimension and computational (time) complexity. We report exciting results applied to space-time diagrams of small Turing machines with precise mathematical relations and formal conjectures connecting these measures. The preprint of the paper is available at: http://arxiv.org/abs/1309.1779
Fractal Dimension of Space-time Diagrams and the Runtime Complexity of Small ...Hector Zenil
Complexity measures are designed to capture complex behaviour and to quantify how complex that particular behaviour is. If a certain phenomenon is genuinely complex this means that it does not all of a sudden becomes simple by just translating the phenomenon to a different setting or framework with a different complexity value. It is in this sense that we expect different complexity measures from possibly entirely different fields to be related to each other. This work presents our work on a beautiful connection between the fractal dimension of space-time diagrams of Turing machines and their time complexity. Presented at Machines, Computations and Universality (MCU) 2013, Zurich, Switzerland.
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارشپروژه مارکت
بررسی دو روش شناسایی سیستم متغیر بازمان با جزییات و توضیحات ساده و گویا و توضیح روابط و علایم به طور کامل به همراه مثال و شبیه سازی در متلب و فایل های مرجع و مقالات مرجع
روش اول: شناسایی سیستم با پارامترهای متغیر با زمان با استفاده از Least Square
روش دوم: شناسایی سیستم دینامیکی با استفاده از شبکه های عصبی مصنوعی
مناسب برای ارائه به عنوان تحقیق یا پروژه درس شناسایی سیستم ها
به همراه شبیه سازی در متلب و گزارش 20 صفحه ای
https://www.prjmarket.com/product/%d8%a8%d8%b1%d8%b1%d8%b3%db%8c-%d8%af%d9%88-%d8%b1%d9%88%d8%b4-%d8%b4%d9%86%d8%a7%d8%b3%d8%a7%db%8c%db%8c-%d8%b3%db%8c%d8%b3%d8%aa%d9%85-%d9%87%d8%a7%db%8c-%d9%85%d8%aa%d8%ba%db%8c%d8%b1-%d8%a8%d8%a7/
This document proposes a dynamic clustering algorithm using fuzzy c-means clustering. It begins with an introduction to fuzzy c-means clustering and its limitations when the chosen number of clusters is incorrect. It then proposes a dynamic clustering algorithm that starts with a fixed number of clusters but can automatically increase the number of clusters during iterations based on the data, improving purity. The algorithm is described and examples are provided to illustrate its effectiveness at forming clear clusters after iterations and determining when clustering has terminated.
The document discusses Bayesian neural networks and related topics. It covers Bayesian neural networks, stochastic neural networks, variational autoencoders, and modeling prediction uncertainty in neural networks. Key points include using Bayesian techniques like MCMC and variational inference to place distributions over the weights of neural networks, modeling both model parameters and predictions as distributions, and how this allows capturing uncertainty in the network's predictions.
Universal Approximation Property via Quantum Feature Maps
----
The quantum Hilbert space can be used as a quantum-enhanced feature space in machine learning (ML) via the quantum feature map to encode classical data into quantum states. We prove the ability to approximate any continuous function with optimal approximation rate via quantum ML models in typical quantum feature maps.
---
Contributed talk at Quantum Techniques in Machine Learning 2021, Tokyo, November 8-12 2021.
By Quoc Hoan Tran, Takahiro Goto and Kohei Nakajima
Lecture 3 image sampling and quantizationVARUN KUMAR
This document discusses image sampling and quantization. It begins by covering 2D sampling of images, including the spectrum of sampled images and the Nyquist criteria for proper reconstruction. It then covers quantization, describing how continuous variables are mapped to discrete levels. The document focuses on Lloyd-Max quantization, which minimizes mean square error for a given number of quantization levels. It provides equations for calculating optimal decision levels and reconstruction levels to design an optimum quantizer based on the probability density function of the signal. Common probability densities used for image data, such as Gaussian, Laplacian, and uniform, are also covered.
This document compares different methods for performing least squares calculations, which are commonly used in statistical analysis. It shows that using the crossprod and Matrix functions from the Matrix package provides faster and more numerically stable solutions than the naive matrix multiplication and inverse approach, especially for large, sparse model matrices. The Matrix package classes also allow reusing factorizations to solve multiple least squares problems more efficiently.
It tells about the connection of different wireless sensors so that data can be shared between them.the information provided by this environment is accessed by the user through internet.It has various topologies and protocols which you can see in this ppt.
TCP & UDP Streaming Comparison and a Study on DCCP & SCTP ProtocolsPeter SHIN
As a graduate student work, I have compared the performance between TCP and UDP media streaming with empirical results. Also, I have researched on different attempts on UDP to be more reliable, but why its progress has not been as fast as possible
This document is a project report submitted by four students - Apeksha A. Jain, Rohit M. Kulkarni, Soham C. Wadekar, and Kedar D. Wagholikar - for their Bachelor of Engineering degree. The report details a project on dynamic routing of packets in wireless sensor networks conducted under the guidance of Prof. G.R. Pathak. The project aims to implement clustering in a wireless sensor network and analyze the effects of increasing cluster size on cluster head energy. It further aims to implement an energy efficient dynamic algorithm to re-elect cluster heads periodically in order to save energy. The report presents the background, problem statement, project planning, analysis, design,
Wireless sensor networks (WSN) can be used in precision agriculture to monitor various parameters like temperature, humidity, and soil conditions. The network is made up of small sensor nodes called motes that self-organize to communicate sensory data to a gateway. This allows farmers to selectively harvest crops, monitor crop health over time, and view sensor measurements on a web application for historical analysis and geostatistical modeling. However, the technology faces limitations due to the small size and limited resources of the sensor nodes.
This document presents a fault management mechanism for wireless sensor networks. It discusses fault detection and diagnosis through self-detection by sensor nodes and active detection by cell managers. It also discusses fault recovery through waking sleeping nodes, moving mobile nodes, or selecting a secondary cell manager. The document then describes the network and fault models, and presents an algorithm for faulty sensor detection based on sensor measurements and designating sensor statuses as good, low quality, faulty, or good detected.
India has an immense diversity of cultures, religions, languages, and traditions spread across its varied geography. Some key aspects that represent India's culture include:
- Hinduism, Islam, Christianity, Sikhism, Buddhism and Jainism coexist alongside numerous regional traditions and tribal religions.
- Hindi is the national language but India has over 1600 dialects and 22 official languages spoken.
- Traditional Indian cuisine varies regionally but often involves eating with the right hand and using flatbread to scoop curries. Meals usually end with yogurt and rice.
- India has numerous festivals celebrated differently in various parts of the country, from Holi to Diwali to regional harvest festivals.
- Clothing, music
5. Convolution and correlation of discrete time signals - MdFazleRabbi18
This document discusses convolution and correlation of discrete time signals. It defines convolution as a mathematical way of combining two signals to form a third signal, which is equivalent to finite impulse response filtering. Convolution relates the input, output, and impulse response of a linear time-invariant system. The document also provides examples of discrete linear convolution and periodic convolution. It then defines correlation as a measure of similarity between signals, discussing cross-correlation and auto-correlation, and providing examples of calculating each.
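The linear and periodic (circular) convolutions discussed here can be written directly from their defining sums; the following Python sketch is illustrative only, not code from the slides:

```python
def linear_convolution(x, h):
    """Discrete linear convolution: y[n] = sum_k x[k] * h[n - k]."""
    n_out = len(x) + len(h) - 1
    y = [0] * n_out
    for n in range(n_out):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y


def circular_convolution(x, h):
    """N-point circular convolution of two length-N sequences (indices wrap mod N)."""
    N = len(x)
    return [sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]
```

For x = [1, 2, 3] and h = [1, 1, 1], the linear result [1, 3, 6, 5, 3] wraps around to the circular result [6, 6, 6] — the relation that Overlap-Add/Save methods exploit to build long linear convolutions out of circular ones.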
This document provides an introduction to digital signal processing. It discusses how signals can be represented digitally by sampling analog signals and converting them to sequences of numbers. This allows signals to be processed using digital processors. Some key benefits of digital signal processing include accuracy, repeatability, flexibility, and easy implementation of nonlinear and time-varying operations in software. The document covers topics such as sampling, analog-to-digital conversion, reconstruction, discrete-time signals and systems, linearity, time-invariance, and examples of basic sequences like sinusoidal, exponential, and geometric sequences.
Super-resolution reconstruction is a method for reconstructing higher resolution images from a set of low resolution observations. The sub-pixel differences among different observations of the same scene allow higher resolution images of better quality to be created. Over the last thirty years, many methods for creating high resolution images have been proposed, but hardware implementations of such methods are limited. Wiener filter design is one of the techniques used initially for this process, and it involves a matrix inversion. A novel method for the matrix inversion has been proposed in the report, with QR decomposition computed using Givens rotations as the underlying algorithm.
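The Givens-rotation QR decomposition mentioned for the matrix inversion step can be sketched in pure Python; this is the generic textbook construction, not the novel method proposed in the report:

```python
import math


def givens_qr(A):
    """QR decomposition by Givens rotations: returns (Q, R) with A = Q @ R.

    A is a square matrix as a list of row lists. Each rotation zeroes one
    subdiagonal entry of R; the transposed rotation is accumulated into Q.
    """
    n = len(A)
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for j in range(n):
        for i in range(j + 1, n):
            if R[i][j] != 0.0:
                r = math.hypot(R[j][j], R[i][j])
                c, s = R[j][j] / r, R[i][j] / r
                for k in range(n):  # rotate rows j and i of R
                    Rjk, Rik = R[j][k], R[i][k]
                    R[j][k] = c * Rjk + s * Rik
                    R[i][k] = -s * Rjk + c * Rik
                for k in range(n):  # accumulate the inverse rotation into Q
                    Qkj, Qki = Q[k][j], Q[k][i]
                    Q[k][j] = c * Qkj + s * Qki
                    Q[k][i] = -s * Qkj + c * Qki
    return Q, R
```

Because each Givens rotation touches only two rows, the scheme maps naturally onto pipelined hardware, which is presumably why the report pairs it with an FPGA target.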
DSP_2018_FOEHU - Lec 03 - Discrete-Time Signals and Systems - Amr E. Mohamed
The document discusses discrete-time signals and systems. It defines discrete-time signals as sequences represented by x[n] and discusses important sequences like the unit sample, unit step, and periodic sequences. It then defines discrete-time systems as devices that take a discrete-time signal x(n) as input and produce another discrete-time signal y(n) as output. The document classifies systems as static vs. dynamic, time-invariant vs. time-varying, linear vs. nonlinear, and causal vs. noncausal. It provides examples to illustrate each classification.
This document summarizes a research paper that proposes a novel architecture for implementing a 1D lifting integer wavelet transform (IWT) using residue number system (RNS). The key aspects covered are:
1) RNS offers advantages over binary representations for digital signal processing by avoiding carry propagation. A ROM-based approach is proposed for RNS division.
2) The lifting scheme for discrete wavelet transforms is summarized, including split, predict, and update stages.
3) A novel RNS-based architecture is proposed using three main blocks - split, predict, and update - that repeat at each decomposition level. Pipelined implementations of the predict and update blocks are detailed.
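The split/predict/update stages can be illustrated with a plain integer Haar-style lifting step; note this sketch uses ordinary binary arithmetic, not the RNS arithmetic the paper proposes:

```python
def lifting_forward(x):
    """One level of an integer Haar-style lifting wavelet transform.

    split:   even/odd polyphase components
    predict: detail d = odd - even            (predict odd from even)
    update:  approx s = even + floor(d / 2)   (preserves the running mean)
    len(x) must be even. Returns (approx, detail).
    """
    even, odd = x[0::2], x[1::2]                      # split
    d = [o - e for e, o in zip(even, odd)]            # predict
    s = [e + (di >> 1) for e, di in zip(even, d)]     # update (>> 1 is floor/2)
    return s, d


def lifting_inverse(s, d):
    """Exact integer inverse: undo update, undo predict, merge."""
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [e + di for e, di in zip(even, d)]
    x = [0] * (2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x
```

Each step is trivially invertible by running it backwards with the sign flipped, which is what makes lifting attractive for lossless integer transforms; repeating the pair on the approximation coefficients gives further decomposition levels, as in the proposed architecture.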
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit... - IOSRJVSP
This document presents a neural network approach to channel equalization using a multilayer perceptron with a variable learning rate parameter. Specifically, it proposes modifying the backpropagation algorithm to allow the learning rate to adapt at each iteration in order to achieve faster convergence. The equalizer structure is a decision feedback equalizer modeled as a neural network with an input, hidden and output layer. Simulation results show the proposed variable learning rate approach improves bit error rate and convergence speed compared to a standard backpropagation algorithm.
A Method for the Reduction of Linear High Order MIMO Systems Using Interlacin... - IJMTST Journal
This document presents a new method for reducing the order of linear multi-input multi-output (MIMO) systems. The method obtains the denominator polynomial of the reduced order model using an interlacing property of the roots of the even and odd parts of the original system's denominator polynomial. The numerator polynomial is obtained using a factor division technique. The method is illustrated through an example of reducing a 4th order MIMO system to 2nd order. Response characteristics of the original and reduced systems are compared, showing the reduced model matches the time response of the original system well.
This document outlines the syllabus and course objectives for the digital signal processing course ECE2006 being offered in the fall semester of 2021. The course aims to teach students concepts related to signals and systems in the time and frequency domains, design of analog and digital filters, and realization of digital filters using various structures. The syllabus is divided into 7 modules covering topics such as Fourier analysis, design of IIR and FIR filters, and realization of lattice filters. Students will be evaluated through continuous assessments, quizzes, assignments, and a final exam.
Performance Assessment of Polyphase Sequences Using Cyclic Algorithm - rahulmonikasharma
Polyphase sequences (P1, P2, Px, Frank) exist for square integer lengths and have good autocorrelation properties that are helpful in several applications, unlike Barker and binary sequences, which exist only for certain lengths and exhibit at most a two-digit merit factor. The integrated sidelobe level (ISL) is often used to quantify the quality of the autocorrelation of a given polyphase sequence. In this paper we present the application of a cyclic algorithm (CA) which minimizes an ISL-related metric and in turn improves the merit factor to a great extent, the key concern in applications like RADAR, SONAR and communications. To illustrate the performance of the P1, P2, Px and Frank sequences when the cyclic algorithm is applied, we present a number of examples for integer lengths. The CA(Px) sequence exhibits the best merit factor among all the polyphase sequences considered.
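The ISL and merit factor used here are simple functions of the aperiodic autocorrelation; a small illustrative sketch, checked against the Barker-13 code whose merit factor is the classic 169/12 ≈ 14.08:

```python
def aperiodic_autocorrelation(seq, k):
    """r(k) = sum_n s[n] * conj(s[n + k]) for lag k >= 0."""
    N = len(seq)
    return sum(seq[n] * seq[n + k].conjugate() for n in range(N - k))


def isl_and_merit_factor(seq):
    """Integrated sidelobe level and merit factor of a unimodular sequence.

    ISL = sum of |r(k)|^2 over all nonzero lags (both sides),
    MF  = N^2 / ISL.
    """
    N = len(seq)
    one_sided = sum(abs(aperiodic_autocorrelation(seq, k)) ** 2
                    for k in range(1, N))
    isl = 2 * one_sided
    return isl, N * N / isl
```

A cyclic minimization algorithm such as CA iteratively perturbs the sequence phases to drive this ISL value down, which by the MF = N²/ISL relation pushes the merit factor up.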
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and Systems - Amr E. Mohamed
This document provides an overview of discrete-time signals and systems in MATLAB. It defines discrete signals as sequences represented by x(n) and how they can be implemented as vectors in MATLAB. It describes various types of sequences like unit sample, unit step, exponential, sinusoidal, and random. It also covers operations on sequences like addition, multiplication, scaling, shifting, folding, and correlations. Discrete time systems are defined as operators that transform an input sequence x(n) to an output sequence y(n). Key properties discussed are linearity, time-invariance, stability, causality, and the use of convolution to represent the output of a linear time-invariant system. Examples are provided to demonstrate various concepts.
Digital Signal Processing (DSP): a basic-to-intermediate introduction based on the Anna University syllabus. This is a share of a worthwhile book!
-Prabhaharan Ellaiyan
-prabhaharan429@gmail.com
-www.insmartworld.blogspot.in
1. The document summarizes a lecture on discrete-time signals and systems.
2. It defines different types of signals, including discrete-time and discrete-valued signals which are relevant for digital filter theory.
3. It also classifies systems as static vs. dynamic, time-invariant vs. time-variable, linear vs. nonlinear, causal vs. non-causal, stable vs. unstable, and recursive vs. non-recursive.
4. It describes the time-domain representation of linear, time-invariant (LTI) systems using impulse response and convolution.
The document summarizes key concepts about linear time-invariant (LTI) systems from Chapter 2. It discusses:
1) LTI systems can be modeled as the sum of their impulse responses weighted by the input signal. This is known as the convolution sum/integral for discrete/continuous-time systems.
2) Any signal can be represented as a linear combination of shifted unit impulses. The output of an LTI system is the convolution of the input signal with the system's impulse response.
3) The impulse response completely characterizes an LTI system. The output is found by taking the convolution integral or sum of the input signal with the impulse response.
This document summarizes a research paper that proposes a novel modified distributed arithmetic scheme for implementing least mean square (LMS) adaptive filters with reduced area and power consumption. Some key points:
- Distributed arithmetic replaces multiplications with lookup tables, reducing area and complexity compared to multiplier-based implementations.
- The proposed scheme uses carry select adders for accumulation instead of previous carry save adders, reducing critical path length and improving efficiency.
- It allows concurrent lookup table updates and parallel filtering and weight updates to improve throughput.
- Implementations of adaptive filters with lengths of 16 and 32 using this scheme consumed 116mW and 158mW of power respectively, with area reductions compared to previous designs.
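The weight adaptation that the lookup tables realize is, in conventional arithmetic, the standard LMS update w ← w + μ·e·x; a plain Python sketch of system identification with it (illustrative only, not the distributed-arithmetic architecture of the paper):

```python
def lms_identify(x, d, n_taps, mu):
    """LMS adaptive FIR filter: returns the final weight vector.

    x : input samples, d : desired samples, mu : step size.
    At each step: y = w . x_vec, e = d - y, w += mu * e * x_vec.
    """
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x_vec))
        e = d[n] - y
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]
    return w
```

For a noiseless unknown filter the weights converge to the true coefficients; the paper's contribution is computing the y and w updates via lookup tables and carry select adders instead of the multiplications shown here.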
This document compares the performance of conventional and efficient square root algorithms for detection in vertical Bell Laboratories layered space-time (V-BLAST) multiple-input multiple-output (MIMO) wireless communications. The efficient square root algorithm aims to reduce computational complexity by avoiding matrix inversion and squaring operations through orthogonal and unitary transformations. Simulation results show that the efficient square root algorithm achieves a reduction of 1×10^6 floating point operations compared to the conventional algorithm employing minimum mean square error detection with 16 transmitting and receiving antennas, while maintaining performance. The efficient square root algorithm also exhibits better bit error rate and symbol error rate at lower modulation schemes and increased number of antennas compared to zero-forcing detection.
EC8553 Discrete time signal processing - ssuser2797e4
This document contains a 10 question, multiple choice exam on discrete time signal processing. It covers topics like the discrete Fourier transform (DFT), finite word length effects, fixed point vs floating point representation, and FIR filter design. Specifically, it includes questions that calculate the 4 point DFT of a sequence, define twiddle factors, compare DIT and DIF FFT algorithms, and discuss stability and causality of systems.
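A 4-point DFT like the one asked for in the exam follows directly from the definition X[k] = Σ x[n]·W_N^{nk} with twiddle factor W_N = e^{-j2π/N}; a quick illustrative sketch:

```python
import cmath


def dft(x):
    """Direct-form DFT: X[k] = sum_n x[n] * W_N**(n*k), W_N = exp(-j*2*pi/N)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)   # the twiddle factor W_N
    return [sum(x[n] * W ** (n * k) for n in range(N)) for k in range(N)]
```

For x = [1, 2, 3, 4] this gives X = [10, -2+2j, -2, -2-2j], the standard worked example; FFT algorithms (DIT or DIF) compute the same result by factoring the twiddle-factor matrix.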
Simulink based design simulations of band pass FIR filter - eSAT Journals
Abstract: In this paper, the window function method is used to design digital filters. A band-pass filter has been designed with the help of Simulink in MATLAB, which offers a fast and effective way of devising filters. The band-pass filter has been designed and simulated using the Kaiser window technique. The model is built using Simulink in MATLAB, and the filtered waveforms are observed with a spectrum scope to analyze the performance of the filter. Keywords: FIR, window function method, Kaiser, Simulink, MATLAB.
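The window-function method amounts to multiplying an ideal band-pass impulse response by the chosen window; a stdlib-only Python sketch with a Kaiser window (the band edges, tap count and β below are arbitrary illustrative choices — the paper itself works in Simulink):

```python
import math


def kaiser_window(M, beta):
    """Kaiser window of length M, using the I0 Bessel-function power series."""
    def i0(x):
        s = t = 1.0
        for k in range(1, 25):          # series converges quickly for small x
            t *= (x / (2 * k)) ** 2
            s += t
        return s
    a = (M - 1) / 2
    return [i0(beta * math.sqrt(1 - ((n - a) / a) ** 2)) / i0(beta)
            for n in range(M)]


def bandpass_fir(M, w1, w2, beta):
    """Windowed-sinc band-pass FIR with passband (w1, w2) in rad/sample."""
    a = (M - 1) / 2
    def sinc_lp(wc, n):                 # ideal low-pass impulse response
        if n == a:
            return wc / math.pi
        return math.sin(wc * (n - a)) / (math.pi * (n - a))
    win = kaiser_window(M, beta)
    # band-pass = difference of two ideal low-pass responses, then windowed
    return [(sinc_lp(w2, n) - sinc_lp(w1, n)) * win[n] for n in range(M)]
```

With 101 taps, a 0.2π–0.4π passband and β = 6, the response is close to 1 in the passband and strongly attenuated at DC; the β parameter trades stopband attenuation against transition width, which is the knob the Kaiser technique exposes.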
Similar to A novel approach for high speed convolution of finite and infinite length sequences using vedic mathematics (20)
Mechanical properties of hybrid fiber reinforced concrete for pavements - eSAT Journals
Abstract
The effect of the addition of mono fibers and hybrid fibers on the mechanical properties of a concrete mixture is studied in the present investigation. Steel fibers at 1% and polypropylene fibers at 0.036% were added individually to the concrete mixture as mono fibers, and then together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile and flexural strength were determined. The results show that hybrid fibers improve the compressive strength only marginally compared to mono fibers, whereas hybridization improves split tensile strength and flexural strength noticeably.
Keywords: Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties
Material management in construction – a case study - eSAT Journals
Abstract
The objective of the present study is to understand the problems that occur in a company because of the improper application of material management. In construction project operations there is often a project cost variance in terms of materials, equipment, manpower, subcontractors, overhead costs, and general conditions. Material is the main component in construction projects; therefore, if material management is not handled properly it will create a project cost variance. Project cost can be controlled by taking corrective actions against the cost variance. A methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was therefore developed and applied. A thorough study was carried out, including case studies, surveys and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested on selected projects. The results obtained show that the main problems of procurement are related to schedule delays and a lack of the quality specified for the project. Preventing this situation often requires dedicating significant resources such as money, personnel and time to monitoring and controlling the process. A great potential for improvement was detected if state-of-the-art technologies such as electronic mail, electronic data interchange (EDI), and related analysis tools were applied to the procurement process. These helped to eliminate the root causes of many types of problems that were detected.
Managing drought short term strategies in semi arid regions a case study - eSAT Journals
Abstract
Drought management needs multidisciplinary action. Interdisciplinary efforts among experts in the various fields of drought-prone areas help achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km and accounts for 8.45 per cent of the area of Karnataka state. It is situated at latitude 17º 19' 60" North and longitude 76º 49' 60" East, entirely on the Deccan plateau at a height of 300 to 750 m above MSL, and its sub-tropical, semi-arid climate makes it one of the drought-prone districts of Karnataka state. Drought management is therefore very important for a district like Gulbarga. In this paper various short-term strategies are discussed to mitigate the drought condition in the district.
Keywords: Drought, South-West monsoon, Semi-arid, Rainfall, Strategies
Life cycle cost analysis of overlay for an urban road in Bangalore - eSAT Journals
Abstract
Pavements are subjected to severe stresses and weathering effects from the day they are constructed and opened to traffic, mainly due to fatigue behavior and environmental effects. Pavement rehabilitation is therefore one of the most important components of entire road systems. This paper highlights the design of concrete pavement with added mono fibers such as polypropylene and steel, and with hybrid fibres, for a widened portion of an existing concrete pavement, along with various overlay alternatives for an existing bituminous pavement on an urban road in Bangalore. Life cycle cost analyses of these sections are done by the Net Present Value (NPV) method to identify the most feasible option. The results show that although the initial cost of construction of a concrete overlay is high, over a period of time it proves to be better than the bituminous overlay considering the whole life cycle cost. The economic analysis also indicates that, of the three fibre options, hybrid reinforced concrete would be economical without compromising the performance of the pavement.
Keywords: Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
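The Net Present Value comparison above discounts each year's cost back to the present, NPV = Σ c_t/(1+r)^t; a minimal sketch with made-up cash flows and discount rate (the actual overlay costs are in the paper, not here):

```python
def npv(rate, cashflows):
    """Net present value of cashflows c_0, c_1, ...: sum of c_t / (1+rate)^t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))


# Illustrative comparison of two overlay strategies (hypothetical numbers):
# a cheap overlay needing yearly maintenance vs. a costlier one needing none.
bituminous = npv(0.08, [-100.0, -20.0, -20.0, -20.0])  # initial + upkeep
concrete = npv(0.08, [-140.0, 0.0, 0.0, 0.0])          # higher initial cost
```

With these numbers the concrete option has the less negative NPV despite the higher initial cost, which is exactly the pattern the abstract reports for the whole-life-cycle comparison.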
Laboratory studies of dense bituminous mixes II with reclaimed asphalt materials - eSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to provide a safe, efficient, and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate existing pavements and to build sustainable road infrastructure in India. These pressing needs are today's burning issue and have become the purpose of this study. In the present study, samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The mixtures were designed by the Marshall method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP). RAP material was blended with virgin aggregate such that all specimens were tested for the Dense Bituminous Macadam-II (DBM-II) gradation as per the Ministry of Roads, Transport, and Highways (MoRT&H), and cost analysis was carried out to assess the economics. Laboratory results and analysis showed that the use of recycled materials introduced significant variability in Marshall stability, and the variability increased with the RAP content. Savings can be realized from the utilization of recycled materials: following this methodology, the reduction in total cost is 19% and 30% compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ... - eSAT Journals
This document summarizes a study on stabilizing expansive black cotton soil with the natural inorganic stabilizer RBI-81. Laboratory tests were conducted to evaluate the effect of RBI-81 on the soil's engineering properties. The tests showed that with 2% RBI-81 and 28 days of curing, the unconfined compressive strength increased by around 250% and the CBR value improved by approximately 400% compared to the untreated soil. Overall, the study found that RBI-81 effectively improved the strength properties of the black cotton soil and its suitability as a soil stabilizer was supported.
Influence of reinforcement on the behavior of hollow concrete block masonry p... - eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to overcome its lack of tensile strength. Experimental and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block masonry prisms under compression and to predict the ultimate compressive failure strength. In the numerical program, a three-dimensional non-linear finite element (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behavior of hollow concrete block, mortar, and grout. Willam-Warnke's five-parameter failure theory has been adopted to model the failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizer - eSAT Journals
This document summarizes a study on the influence of compaction energy on soil stabilized with a chemical stabilizer. Laboratory tests were conducted on locally available loamy soil treated with a patented polymer liquid stabilizer and compacted at four different energy levels. The study found that increasing the compaction effort increased the density of both untreated and treated soil, but the rate of increase was lower for stabilized soil. Treating the soil with the stabilizer improved its unconfined compressive strength and resilient modulus, and reduced accumulated plastic strain, with these properties further improved by higher compaction efforts. The stabilized soil exhibited strength and performance benefits compared to the untreated soil.
Geographical information system (GIS) for water resources management - eSAT Journals
This document describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) to meet the information needs of various government departments related to water management in a state. The HIS consists of a hydrological database coupled with tools for collecting and analyzing spatial and non-spatial water resources data. It also incorporates a hydrological model to indirectly assess water balance components over space and time. A web-based GIS portal was created to allow users to access and visualize the hydrological data, as well as outputs from the SWAT hydrological model. The framework is intended to facilitate integrated water resources planning and management across different administrative levels.
Forest type mapping of Bidar forest division, Karnataka using geoinformatics ... - eSAT Journals
Abstract
The study demonstrates the potential of satellite remote sensing for generating baseline information on forest types, including tree plantation details, in the Bidar forest division, Karnataka, covering an area of 5814.60 sq. km. Analysis of the satellite data in the study area reveals that about 84% of the total area is covered by crop land, 1.778% by dry deciduous forest, and 1.38% by mixed plantation, which is very threatening to the environmental stability of the forest; future plantation sites have also been mapped. With the use of the latest geo-informatics technology, the proper and exact condition of the trees can be observed and the necessary precautions taken for future plantation works in an appropriate manner.
Keywords: RS, GIS, GPS, Forest type, Tree plantation
Factors influencing compressive strength of geopolymer concrete - eSAT Journals
Abstract
This work studies the effects of several factors on the compressive strength of fly ash based geopolymer concrete, together with a cost comparison with normal concrete. The test variables were the molarity of sodium hydroxide (NaOH): 8M, 14M and 16M; the ratio of NaOH to sodium silicate (Na2SiO3): 1, 1.5, 2 and 2.5; the alkaline liquid to fly ash ratio: 0.35 and 0.40; and the replacement of water in the Na2SiO3 solution by 10%, 20% and 30%. The test results indicated that the highest compressive strength, 54 MPa, was observed for 16M NaOH, a NaOH to Na2SiO3 ratio of 2.5 and an alkaline liquid to fly ash ratio of 0.35. The lowest compressive strength, 27 MPa, was observed for 8M NaOH, a NaOH to Na2SiO3 ratio of 1 and an alkaline liquid to fly ash ratio of 0.40. With an alkaline liquid to fly ash ratio of 0.35, water replacement of 10% and 30% for 8M and 16M NaOH resulted in compressive strengths of 36 MPa and 20 MPa respectively. A superplasticiser dosage of 2% by weight of fly ash gave higher strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li... - eSAT Journals
Abstract
Composite circular hollow steel tubes, with and without GFRP infill, for three different grades of lightweight concrete were tested for ultimate load capacity and axial shortening under cyclic loading. Steel tubes of different lengths, cross sections and thicknesses were compared. Specimens were tested after adopting Taguchi's L9 (Latin squares) orthogonal array in order to save initial experimental cost in terms of the number of specimens and the experimental duration. Analysis was carried out using the Artificial Neural Network (ANN) technique with the assistance of Minitab, a statistical software tool. A comparison of predicted, experimental and ANN outputs was obtained from linear regression plots. From this research study it can be concluded that the cross-sectional area of the steel tube has the most significant effect on ultimate load carrying capacity, that the load carrying capacity decreased as the length of the steel tube increased, and that the ANN modeling predicted acceptable results. Thus the ANN tool can be utilized for predicting the ultimate load carrying capacity of composite columns.
Keywords: Lightweight concrete, GFRP, Artificial neural network, Linear regression, Back propagation, Orthogonal array, Latin squares
Experimental behavior of circular HSSCFRC filled steel tubular columns under ... - eSAT Journals
This document summarizes an experimental study that tested circular concrete-filled steel tube columns with varying parameters. 45 specimens were tested with different fiber percentages (0-2%), tube diameter-to-wall-thickness ratios (D/t from 15-25), and length-to-diameter (L/d) ratios (from 2.97-7.04). The results found that columns filled with fiber-reinforced concrete exhibited higher stiffness, equal ductility, and enhanced energy absorption compared to those filled with plain concrete. The load carrying capacity increased with fiber content up to 1.5% but not at 2.0%. The analytical predictions of failure load closely matched the experimental values.
Evaluation of punching shear in flat slabs - eSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction, which is critical. An attempt has been made to generate generalized design sheets which account for both punching shear due to gravity loads and unbalanced moments for the cases: (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in India - eSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures that form the entrance to reservoir outlet works. A parametric study on the dynamic behavior of circular cylindrical towers is carried out to study the effect of depth of submergence, wall thickness and slenderness ratio, and also the effect on the tower of dynamic time-history analysis for different soil conditions, accounting for the interaction effects of the added hydrodynamic mass of the surrounding and inside water after Goyal and Chopra.
Keywords: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis
Evaluation of operational efficiency of urban road network using travel time ... - eSAT Journals
This document evaluates the operational efficiency of an urban road network in Tiruchirappalli, India using travel time reliability measures. Traffic volume and travel times were collected using video data from 8-10 AM on various roads. Average travel times, 95th percentile travel times, and buffer time indexes were calculated to assess reliability. Non-motorized vehicles were found to most impact reliability on one road. A relationship between buffer time index and traffic volume was developed. Finally, a travel time model was created and validated based on length, speed, and volume.
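The buffer time index used above as the reliability measure is just the extra (95th-percentile) travel time relative to the average; a small sketch with a nearest-rank percentile and made-up travel times:

```python
import math


def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    s = sorted(values)
    rank = math.ceil(p / 100 * len(s))
    return s[max(rank - 1, 0)]


def buffer_time_index(travel_times):
    """BTI = (95th-percentile travel time - average travel time) / average."""
    avg = sum(travel_times) / len(travel_times)
    return (percentile(travel_times, 95) - avg) / avg
```

A BTI of 1.5 means a traveller should budget 150% extra time over the average trip to be on time 95% of days, which is why occasional extreme delays (such as those caused by non-motorized vehicles here) dominate the measure.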
Estimation of surface runoff in Nallur Amanikere watershed using SCS-CN method - eSAT Journals
Abstract
Watershed development aims at the productive utilization of all the available natural resources in the entire area extending from the ridge line to the stream outlet. The per capita availability of land for cultivation has been decreasing over the years; therefore, water and the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, the Nallur Amanikere watershed, lies between 11° 38' and 11° 52' N latitude and 76° 30' and 76° 50' E longitude with an area of 415.68 sq. km. Thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlaid through ArcGIS software to assign a curve number to each polygon. The daily rainfall data of six rain gauge stations in and around the watershed (2001-2011) was used to estimate the daily runoff from the watershed using the Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
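The SCS-CN runoff computation follows directly from the curve number: S = 25400/CN − 254 (mm), Ia = 0.2S, and Q = (P − Ia)² / (P − Ia + S) for P > Ia; a quick illustrative sketch:

```python
def scs_cn_runoff(p_mm, cn):
    """Daily direct runoff depth (mm) by the SCS Curve Number method.

    s  : potential maximum retention (mm), derived from the curve number CN
    ia : initial abstraction, conventionally 0.2 * s
    Q  : direct runoff, zero until rainfall exceeds the initial abstraction
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, with CN = 80 and 100 mm of rainfall, S = 63.5 mm, Ia = 12.7 mm and Q ≈ 50.5 mm; assigning CN per polygon from the land use/soil overlay and applying this to each day's rainfall is exactly the workflow the abstract describes.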
Estimation of morphometric parameters and runoff using RS & GIS techniques - eSAT Journals
This document summarizes a study that used remote sensing and GIS techniques to estimate morphometric parameters and runoff for the Yagachi catchment area in India over a 10-year period. Morphometric analysis was conducted to understand the hydrological response at the micro-watershed level. Daily runoff was estimated using the SCS curve number model. The results showed a positive correlation between rainfall and runoff. Land use/land cover changes between 2001-2010 were found to impact estimated runoff amounts. Remote sensing approaches provided an effective means to model runoff for this large, ungauged area.
Effect of variation of plastic hinge length on the results of non linear anal... - eSAT Journals
Abstract: The nonlinear static procedure, also well known as pushover analysis, is a method in which monotonically increasing loads are applied to the structure until it is unable to resist any further load. It is a popular tool for the seismic performance evaluation of existing and new structures. In the literature, much research has been carried out on conventional pushover analysis, and, its deficiencies being known, efforts have been made to improve it; however, actual test results to verify the analytically obtained pushover results are rarely available. Some amount of variation is always expected in the seismic demand prediction of pushover analysis. An initial study is carried out considering user-defined hinge properties and the default hinge length. An attempt is then made to assess the variation in pushover analysis results using user-defined hinge properties and the various hinge length formulations available in the literature, with the results compared against experimental results from a test carried out on a G+2 storied RCC framed structure. For the present study two geometric models, a bare frame and a rigid frame model, are considered, and it is found that the results of pushover analysis are very sensitive to the geometric model and hinge length adopted. Keywords: Pushover analysis, Base shear, Displacement, Hinge length, Moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for the road construction is a serious problem to procure materials. Hence
recycling or reuse of material is beneficial. On emphasizing development in sustainable construction in the present era, recycling of
asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) was used. Foundry waste was used as a replacement to
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and asphalt concrete mixes with complete binder extracted RAP
aggregates. Mix design was carried out by Marshall Method. The Marshall Tests indicated highest stability values for asphalt
concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with increased in RAP in AC mixes. The Indirect
Tensile Strength (ITS) for AC mixes with RAP also was found to be higher when compared to conventional AC mixes at 300C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...amsjournal
The Fourth Industrial Revolution is transforming industries, including healthcare, by integrating digital,
physical, and biological technologies. This study examines the integration of 4.0 technologies into
healthcare, identifying success factors and challenges through interviews with 70 stakeholders from 33
countries. Healthcare is evolving significantly, with varied objectives across nations aiming to improve
population health. The study explores stakeholders' perceptions on critical success factors, identifying
challenges such as insufficiently trained personnel, organizational silos, and structural barriers to data
exchange. Facilitators for integration include cost reduction initiatives and interoperability policies.
Technologies like IoT, Big Data, AI, Machine Learning, and robotics enhance diagnostics, treatment
precision, and real-time monitoring, reducing errors and optimizing resource utilization. Automation
improves employee satisfaction and patient care, while Blockchain and telemedicine drive cost reductions.
Successful integration requires skilled professionals and supportive policies, promising efficient resource
use, lower error rates, and accelerated processes, leading to optimized global healthcare outcomes.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Introduction to AI Safety (public presentation).pptx
A novel approach for high speed convolution of finite and infinite length sequences using vedic mathematics
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 02 Issue: 11 | Nov-2013, Available @ http://www.ijret.org
A NOVEL APPROACH FOR HIGH SPEED CONVOLUTION OF FINITE
AND INFINITE LENGTH SEQUENCES USING VEDIC MATHEMATICS
M. Bharathi¹, D. Leela Rani²
¹Assistant Professor, ²Associate Professor, Department of ECE, Sree Vidyanikethan Engineering College, Tirupati, India
bharathi891@gmail.com, dlrani79@gmail.com
Abstract
Digital signal processing, digital control systems, telecommunication, and audio and video processing are important applications of VLSI. With advances in VLSI, the design and implementation of DSP systems demand low power, energy efficiency, portability, reliability and miniaturization. In digital signal processing, linear time-invariant systems are an important sub-class of systems and are the heart and soul of DSP.
In many application areas, linear and circular convolution are fundamental computations, and convolution with very long sequences is often required. Discrete linear convolution of finite-length and infinite-length sequences can be computed from circular convolution using the Overlap-Add and Overlap-Save methods. In real-time signal processing, circular convolution is much more effective than linear convolution: it is simpler to compute and produces fewer output samples, and linear convolution can in turn be computed from circular convolution. In this paper, both linear and circular convolution are performed using a Vedic multiplier architecture based on the vertically and crosswise algorithm of Urdhva-Tiryagbhyam. The implementation uses a hierarchical design approach, which leads to improvement in computational speed, power reduction, and minimization of hardware resources and area. Coding is done in Verilog HDL; simulation and synthesis are performed using a Xilinx FPGA.
Keywords: Linear and Circular Convolution, Urdhva-Tiryagbhyam, Carry Save Multiplier, Overlap-Add/Save, Verilog HDL
----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Systems are classified in accordance with a no. of characteristic
properties or categories, namely: linearity, causality, stability
and time variance. Linear, time-invariant systems are important
sub-class of systems. Urdhva-Tiryagbhyam sutra is used in
developing carry save multiplier architecture to perform
convolution of two finite and infinite length sequences [1].
Linear and circular convolutions, which are fundamental
computations in Linear time-invariant (LTI) systems are
implemented in Verilog HDL. Simulation and Synthesis are
verified in Xilinx 10.1 ISE.
Multiplications, in general, are complex and slow operations. The overall speed of multiplication depends on the number of partial products generated, the shifting of partial products based on bit position, and the summation of partial products. In a carry save multiplier, the carry bits are passed diagonally downwards, which requires a vector merging adder to obtain the final sum of all the partial products. In convolution, the fundamental computations include multiplication and addition of input and impulse signals or samples [2], [3].
2. CIRCULAR CONVOLUTION
Let x1(n) and x2(n) be two finite-duration sequences of length N. Their respective N-point DFTs are

X_1(k) = \sum_{n=0}^{N-1} x_1(n) e^{-j 2\pi nk/N},   k = 0, 1, ..., N-1   (1)

X_2(k) = \sum_{n=0}^{N-1} x_2(n) e^{-j 2\pi nk/N},   k = 0, 1, ..., N-1   (2)
If the two DFTs are multiplied together, the result X3(k) is the DFT of a sequence x3(n) of length N. The relationship between X3(k) and the DFTs X1(k) and X2(k) is

X_3(k) = X_1(k) X_2(k),   k = 0, 1, ..., N-1   (3)
The IDFT of X3(k) is

x_3(m) = \sum_{n=0}^{N-1} x_1(n)\, x_2((m-n))_N,   m = 0, 1, ..., N-1   (4)

Here

x_2((m-n))_N = x_2((m-n) \bmod N)   (5)
The above expression has the form of a convolution sum. It
involves the index ((m-n))N and is called circular
convolution[4].
It is not the ordinary linear convolution which relates the output
sequence y(n) of a linear system to the input sequence x(n) and
the impulse response h(n). Thus it can be concluded that the
multiplication of the DFT’s of two sequences is equivalent to
circular convolution of two sequences in the time domain.
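This equivalence can be checked with a minimal Python sketch (an illustration only; the paper's implementation is in Verilog HDL, and the function names here are assumed). Multiplying the N-point DFTs of two sequences and inverse-transforming gives the same result as direct circular convolution:

```python
# Verifying Eqs. (1)-(5): DFT multiplication <=> circular convolution.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolve(x1, x2):
    # Direct evaluation of Eq. (4): x3(m) = sum_n x1(n) * x2((m-n) mod N)
    N = len(x1)
    return [sum(x1[n] * x2[(m - n) % N] for n in range(N)) for m in range(N)]

x1, x2 = [1, 2, 3, 4], [1, 1, 1, 0]
X3 = [a * b for a, b in zip(dft(x1), dft(x2))]   # X3(k) = X1(k) X2(k), Eq. (3)
via_dft = [round(v.real) for v in idft(X3)]      # x3(m) = IDFT{X3(k)}
print(via_dft, circular_convolve(x1, x2))        # the two lists agree
```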
The methods that are used to find the circular convolution of
two sequences are
a. Concentric circle method
b. Matrix multiplication method
Let x1(n) and x2(n) be two sequences of length L and M respectively, and let x3(n) be the output sequence. The length of the output sequence is N = max(L, M).
2.1. Concentric circle method
To perform circular convolution using the concentric circle method, the length of x1(n) must equal the length of x2(n). Three cases arise:
a. If the length L of x1(n) equals the length M of x2(n), the procedure below can be followed directly.
b. If L > M, M is made equal to L by appending L-M zero samples to the sequence x2(n).
c. If M > L, L is made equal to M by appending M-L zero samples to the sequence x1(n).
After making the lengths of the two sequences equal to N samples, circular convolution by the concentric circle method is performed using the following steps:
1. The N samples of sequence x1(n) are graphed as equally spaced points around an outer circle in the counter-clockwise direction.
2. Starting at the same point as x1(n), the N samples of x2(n) are graphed as equally spaced points around an inner circle in the clockwise direction.
3. The corresponding samples on the two circles are multiplied and the products are added to produce the first sample of the output sequence x3(n).
4. The samples on the inner circle are rotated one position in the counter-clockwise direction and step 3 is repeated to obtain the next sample of x3(n).
5. Step 4 is repeated until the first sample of the inner circle lines up with the first sample of the outer circle once again. Hence all the samples of the output sequence x3(n) are collected.
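The steps above can be sketched in Python (an illustration, not the paper's hardware design; the helper name is assumed). The inner circle holds x2 in clockwise order, i.e. reversed, and is rotated one position per output sample:

```python
# Concentric circle method: fixed outer circle (x1), rotating inner circle (x2).
def concentric_circle_convolve(x1, x2):
    # Zero-pad the shorter sequence so both have length N = max(L, M)
    N = max(len(x1), len(x2))
    x1 = x1 + [0] * (N - len(x1))
    x2 = x2 + [0] * (N - len(x2))
    inner = [x2[(-n) % N] for n in range(N)]   # clockwise placement of x2
    out = []
    for _ in range(N):
        # Multiply corresponding samples on the two circles and add
        out.append(sum(a * b for a, b in zip(x1, inner)))
        inner = [inner[-1]] + inner[:-1]       # rotate inner circle one position
    return out

print(concentric_circle_convolve([1, 2, 3, 4], [1, 1, 1, 0]))
```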
2.2. Matrix Multiplication Method
Circular convolution of two sequences x1(n) and x2(n) is
obtained by representing the sequences in matrix form as
shown below
\begin{bmatrix}
x_2(0) & x_2(N-1) & \cdots & x_2(1) \\
x_2(1) & x_2(0) & \cdots & x_2(2) \\
\vdots & \vdots & & \vdots \\
x_2(N-1) & x_2(N-2) & \cdots & x_2(0)
\end{bmatrix}
\begin{bmatrix} x_1(0) \\ x_1(1) \\ \vdots \\ x_1(N-1) \end{bmatrix}
=
\begin{bmatrix} x_3(0) \\ x_3(1) \\ \vdots \\ x_3(N-1) \end{bmatrix}   (6)
The columns of the N×N matrix are formed by repeating the samples of x2(n) via circular shift. The elements of the column matrix are the samples of sequence x1(n). The circular convolution of the two sequences, x3(n), is obtained by multiplying the N×N matrix of samples of x2(n) by the column matrix consisting of the samples of x1(n).
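A short Python sketch of the matrix method (illustrative only; the function name is assumed), building the N×N circulant matrix from circular shifts of x2 and multiplying it by the column of x1 samples:

```python
# Matrix multiplication method: circulant matrix of x2 times column of x1.
def circulant_convolve(x1, x2):
    N = len(x1)
    # Row m, column n holds x2((m - n) mod N), matching Eq. (6)
    matrix = [[x2[(row - col) % N] for col in range(N)] for row in range(N)]
    # Matrix-vector product yields the circular convolution x3
    return [sum(matrix[row][col] * x1[col] for col in range(N))
            for row in range(N)]

print(circulant_convolve([1, 2, 3, 4], [1, 1, 1, 0]))
```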
3. LINEAR CONVOLUTION OF SHORT
DURATION SEQUENCE
In discrete time, the output sequence y[n] of a linear time-invariant system with impulse response h[n], due to any input sequence x[n], is the convolution sum of x[n] with h[n], given as

y(n) = x(n) * h(n) = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k)   (7)
h[n] is the response of the system to the impulse sequence δ[n].
To implement discrete-time convolution, the two sequences x[k] and h[n-k] are multiplied together for -∞ < k < ∞ and the products are summed to compute the output samples of y[n].
Convolution sum serves as an explicit realization of a discrete-
time linear system. The above equation expresses each sample
of output sequence in terms of all samples of input and impulse
response sequence.
Fig. 1 Block diagram for computation of linear convolution
Let the length of input and impulse sequences, x[n] and h[n] be
L and M. The starting time of input and impulse sequences are
represented by n1 and n2 respectively.
Therefore the length of the output sequence is N = L + M - 1, and its starting time is n = n1 + n2.
The samples of the output sequence are computed using the convolution sum

y(n) = \sum_{k} x(k)\, h(n-k)   (8)
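The convolution sum can be sketched directly in Python (illustrative; the paper realizes this in hardware, and the function name is assumed). Each output sample accumulates x[k]·h[n-k] over all valid k, and the output length is L + M - 1:

```python
# Direct evaluation of the convolution sum, Eqs. (7)/(8).
def linear_convolve(x, h):
    L, M = len(x), len(h)
    y = [0] * (L + M - 1)
    for n in range(L + M - 1):
        # k is limited to indices where both x[k] and h[n-k] exist
        for k in range(max(0, n - M + 1), min(n, L - 1) + 1):
            y[n] += x[k] * h[n - k]
    return y

print(linear_convolve([1, 2, 3, 4], [2, 3, 4, 5]))
```

With the inputs of Section 6.2 this yields [2, 7, 16, 30, 34, 31, 20], i.e. seven samples for two length-4 sequences.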
4. LINEAR CONVOLUTION OF LONG
DURATION SEQUENCE
In real-time signal processing applications concerned with signal monitoring and analysis, linear filtering of signals is involved. The input sequence x(n) is often a very long sequence [5].
Practically, it is difficult to store a long duration input sequence. So, in order to perform linear convolution of such a sequence with the impulse response of a system, the input sequence is divided into blocks. The successive blocks are processed one at a time and the results are combined to obtain the output sequence. The blocks are filtered separately and the results are combined using the Overlap-Save method or the Overlap-Add method [6].
Linear filtering performed via the DFT involves operations on a
block of data, which by necessity must be limited in size due to
limited memory of digital computers. A long input signal
sequence must be segmented to fixed-size blocks prior to
processing.
4.1 Overlap-Save Method
Let the length of the long duration input sequence be LL and the length of the impulse response be M. The input sequence is divided into blocks of data; the length of each block is N = L+M-1.
Each block consists of the last (M-1) data points of the previous block followed by L new data points. For the first block of data, the first (M-1) points are set to zero. Therefore the blocks of the input sequence are
x1(n) = {0, 0, ..., 0, x(0), x(1), ..., x(L-1)}
where the first (M-1) samples are zeros;
x2(n) = {x(L-M+1), ..., x(L-1), x(L), ..., x(2L-1)}
where x(L-M+1), ..., x(L-1) are the last (M-1) samples from x1(n) and x(L), ..., x(2L-1) are L new samples;
x3(n) = {x(2L-M+1), ..., x(2L-1), x(2L), ..., x(3L-1)}
where x(2L-M+1), ..., x(2L-1) are the last (M-1) samples from x2(n) and x(2L), ..., x(3L-1) are the L new samples.
The length of the impulse response is increased to N by appending L-1 zeros. Circular convolution of xi(n) and h(n) is computed for each block, which leads to blocks of output sequences yi(n). Because of aliasing, the first (M-1) samples of each output sequence yi(n) are discarded. The final output sequence, after discarding the first (M-1) samples of each yi(n), consists of the samples of all blocks arranged in sequential order.
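A Python sketch of this procedure (illustrative; the block length L, the choice M ≥ 2, and the helper names are assumptions, not from the paper). Each block carries the last M-1 samples of its predecessor, and the first M-1 aliased samples of each circularly convolved block are discarded:

```python
# Overlap-save: block convolution via circular convolution, discarding aliased samples.
def circular_convolve(a, b):
    N = len(a)
    return [sum(a[n] * b[(m - n) % N] for n in range(N)) for m in range(N)]

def overlap_save(x, h, L):
    M = len(h)
    h_padded = h + [0] * (L - 1)     # impulse response padded to N = L+M-1
    x_padded = x + [0] * (M - 1)     # extra zeros so the final tail is produced
    tail = [0] * (M - 1)             # first block is prefixed with M-1 zeros
    y = []
    for i in range(0, len(x_padded), L):
        block = x_padded[i:i + L]
        block = tail + block + [0] * (L - len(block))
        tail = block[-(M - 1):]      # last M-1 samples feed the next block
        y.extend(circular_convolve(block, h_padded)[M - 1:])  # drop aliased part
    return y[:len(x) + M - 1]

print(overlap_save(list(range(1, 13)), [4, 5], L=4))
# [4, 13, 22, 31, 40, 49, 58, 67, 76, 85, 94, 103, 60]
```

The printed sequence is the full linear convolution of x(n) = [1, ..., 12] with h(n) = [4, 5], the example used in Section 6.4.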
4.2. Overlap-Add Method
In this method also the long direction input sequence is divided
into blocks of data.
The length of each block is L+M-1
The first L samples are new samples taken from long duration
input sequence and the last M-1 samples are zero appended to
have total length of samples as L+M-1
The data blocks are represented as
x1(n)= {x(0), x(1)… x(L-1), 0,0…}
x2(n)= {x(L), x(L+1)… x(2L-1), 0,0…}
x3(n)= {x(2L), x(2L+1)… x(3L-1), 0,0…}
The last M-1 samples in each sequence are zeros appended to
have total length as L+M-1
Similarly, the length of the impulse response is increased to L+M-1 by appending L-1 zeros to it. Circular convolution is performed on each block of the input sequence with the impulse response to obtain blocks of output sequences. The last M-1 samples of each block of the output sequence are overlapped with, and added to, the first M-1 samples of the succeeding block. The samples thus obtained are arranged in sequential order to form the final output sequence y(n). Hence this method is called the Overlap-Add method.
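A Python sketch of Overlap-Add (illustrative; block length L and the function name are assumed). Here each block is convolved directly with h rather than via zero-padded circular convolution, which yields the same block outputs; the overlapping M-1 samples of adjacent block outputs simply accumulate:

```python
# Overlap-add: convolve each length-L block, let overlapping outputs accumulate.
def overlap_add(x, h, L):
    M = len(h)
    y = [0] * (len(x) + M - 1)
    for i in range(0, len(x), L):
        block = x[i:i + L]
        for n, xn in enumerate(block):      # linear convolution of one block
            for k, hk in enumerate(h):
                y[i + n + k] += xn * hk     # last M-1 samples overlap the next block
    return y

print(overlap_add(list(range(1, 13)), [4, 5], L=4))
# [4, 13, 22, 31, 40, 49, 58, 67, 76, 85, 94, 103, 60]
```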
5. MULTIPLICATION TECHNIQUE
Jagadguru Swami Sri Bharati Krishna Tirthaji Maharaja introduced his research on mathematics based on sixteen sutras for multiplication. A multiplier is the key block in digital signal processing. With advancing technology, researchers are trying to design multipliers which offer high computational speed, less delay, low power and area-efficient arithmetic building blocks [7].
In linear convolution, the multiplication is performed using the Urdhva-Tiryagbhyam sutra of Vedic mathematics [8]. The comparison between the number of multiplications and additions in the conventional mathematical approach and in Vedic mathematics is shown in Table 1 [9].
Table 1: Comparison between normal multiplication and Vedic mathematics multiplication

Multiplication   Normal multiplier                   Vedic multiplier
2-bit            4 multiplications, 2 additions      4 multiplications, 1 addition
3-bit            9 multiplications, 7 additions      9 multiplications, 5 additions
4-bit            16 multiplications, 15 additions    16 multiplications, 9 additions
8-bit            64 multiplications, 77 additions    64 multiplications, 53 additions
Example
Multiplication of 1234 and 2116
Step 1:
4×6 = 24. The digit 4 is written and the carry 2 is placed below the second digit.
Step 2:
(3×6) + (4×1) = 22. The digit 2 is written and the carry 2 is placed below the third digit.
Step 3:
(2×6) + (4×1) + (3×1) = 19. The digit 9 is written and the carry 1 is placed below the fourth digit.
Step 4:
(1×6) + (2×4) + (2×1) + (3×1) = 19. The digit 9 is written and the carry 1 is placed below the fifth digit.
Step 5:
(1×1) + (3×2) + (2×1) = 9. The digit 9 is written and the carry 0 is placed below the sixth digit.
Step 6:
(1×1) + (2×2) = 5. The carry 0 is placed below the seventh digit.
Step 7:
(1×2) = 2.
The step digits and the shifted carries are then summed column-wise:

      1 2 3 4
    × 2 1 1 6
  ---------------
  2 5 9 9 9 2 4   (unit digit of each step)
  0 0 1 1 2 2     (carries, one position to the left)
  ---------------
  2 6 1 1 1 4 4

Therefore 1234 × 2116 = 2611144.
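The vertically and crosswise pattern generalizes to any number of digits. A Python sketch (the function name is assumed, purely for illustration) that reproduces the worked example:

```python
# Urdhva-Tiryagbhyam (vertically and crosswise) decimal multiplication:
# step i sums all digit products a[j]*b[i-j]; carries then propagate.
def urdhva_multiply(a_digits, b_digits):
    # Digits are given most-significant first, e.g. [1, 2, 3, 4] for 1234
    a = a_digits[::-1]
    b = b_digits[::-1]
    n_steps = len(a) + len(b) - 1
    # Vertical/crosswise partial sums (Steps 1..7 in the example above)
    steps = [sum(a[j] * b[i - j] for j in range(len(a)) if 0 <= i - j < len(b))
             for i in range(n_steps)]
    digits, carry = [], 0
    for s in steps:                      # column-wise carry propagation
        carry, d = divmod(s + carry, 10)
        digits.append(d)
    while carry:                         # any remaining carry extends the result
        carry, d = divmod(carry, 10)
        digits.append(d)
    return int("".join(map(str, digits[::-1])))

print(urdhva_multiply([1, 2, 3, 4], [2, 1, 1, 6]))  # 2611144
```

For 1234 × 2116 the partial sums are exactly the step values 24, 22, 19, 19, 9, 5, 2 computed above.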
6. SIMULATION RESULTS
6.1. Circular Convolution
Fig. 2 Circular convolution output
Here the input sequence is a(n) = [a3, a2, a1, a0] and the impulse sequence is b(n) = [b3, b2, b1, b0]; each value is of 4-bit length.
The given inputs are a(n) = [1, 2, 3, 4] and b(n) = [1, 1, 1, 0].
The output in hexadecimal format (each sample of 8-bit length) is
Y(n) = [8'h06, 8'h06, 8'h04, 8'h05]
6.2. Linear Convolution for Short Duration Sequence
Fig. 3 Linear convolution for short duration sequence
Here the input sequence is x(n) = [x3, x2, x1, x0] and the impulse sequence is h(n) = [h3, h2, h1, h0]; each value is of 4-bit length.
The given inputs are x(n) = [1, 2, 3, 4] and h(n) = [2, 3, 4, 5].
The output in hexadecimal format (each sample of 8-bit length) is
Y(n) = [8'h14, 8'h1F, 8'h22, 8'h1E, 8'h10, 8'h07, 8'h02]
6.3. Linear Convolution for Long Duration Sequence
Overlap-Add Method
Fig. 4 Linear convolution for long duration sequence Overlap-
Add method
Here, convolution is applied between sequences of lengths 12
and 2 respectively.
Input sequence x(n)= [ i0,i1,i2,i3,i4,i5,i6,i7,i8,i9,i10,i11 ]
Impulse sequence h(n)= [ h0,h1 ]
Convolved sequence y(n)= [g0,g1,g2,g3,g4,g5,g6,g7,g8,g9,
g10,g11,g12 ]
The example taken here is
x(n)= [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] (Decimal format)
h(n)= [4,5 ] (Decimal format)
y(n) = [8'h3C, 8'h67, 8'h5E, 8'h55, 8'h4C, 8'h43, 8'h3A, 8'h31, 8'h28, 8'h1F, 8'h16, 8'h0D, 8'h04]
6.4. Linear Convolution for Long Duration Sequence
Overlap- Save Method
Fig. 5 Linear convolution for long duration sequence Overlap-
Save method
Here, convolution is applied between sequences of lengths 12
and 2 respectively.
Input sequence x(n)= [ i0,i1,i2,i3,i4,i5,i6,i7,i8,i9,i10,i11 ]
Impulse sequence h(n)= [ h0,h1 ]
Convolved sequence y(n)= [g0,g1,g2,g3,g4,g5,g6,g7,g8,g9,
g10,g11,g12 ]
The example taken here is
x(n)= [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] (Decimal format)
h(n)= [4,5 ] (Decimal format)
y(n) = [8'h3C, 8'h67, 8'h5E, 8'h55, 8'h4C, 8'h43, 8'h3A, 8'h31, 8'h28, 8'h1F, 8'h16, 8'h0D, 8'h04]
CONCLUSIONS
Circular and linear convolution of discrete finite and infinite length sequences are performed using a carry save multiplier based on Vedic multiplication. The multiplier proposed in this paper, using Vedic mathematics, results in high computation speed and a minimum critical path, which results in less delay when compared to a normal multiplier. Speed can be further optimized by using high performance adders.
REFERENCES
[1] M. Bharathi, D. Leela Rani, and S. Varadrajan, "High speed carry save multiplier based linear convolution using Vedic mathematics", International Journal of Computers and Technology, Vol. 4, No. 2, March-April 2013, ISSN 2277-3061.
[2] Jan M. Rabaey, Anantha Chandrakasan, and Borivoje Nikolic, 2003, Digital Integrated Circuits: A Design Perspective, Prentice-Hall.
[3] Purushotam D. Chidgupkar and Mangesh T. Karad, 2004, "The Implementation of Vedic Algorithms in Digital Signal Processing", Global J. of Engng. Educ., Vol. 8, No. 2, UICEE, Published in Australia.
[4] J. G. Proakis and D. G. Manolakis, 1988, Digital Signal Processing, Macmillan.
[5] A. V. Oppenheim and R. Schafer, 1975, Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall.
[6] Asmita Haveliya and Kamlesh Kumar Singh, 2011, "A Novel Approach For High Speed Block Convolution Algorithm", Proc. of the International Conference on Advanced Computing and Communication Technologies (ACCT).
[7] Jagadguru Swami Sri Bharati Krishna Tirthaji Maharaja, 1986, Vedic Mathematics, Motilal Banarsidass, Varanasi, India.
[8] Hanumantharaju M. C., Jayalaxmi H., Renuka R. K., and Ravishankar M., 2007, "A high speed block convolution using ancient Indian Vedic mathematics", IEEE International Conference on Computational Intelligence and Multimedia Applications.
[9] A. P. Nicholas, K. R. Williams, and J. Pickles, 2003, Vertically and Crosswise: Applications of the Vedic Mathematics Sutra, Motilal Banarsidass Publishers, Delhi.
BIOGRAPHIES:
Ms. M. Bharathi, M.Tech, is currently working as an Assistant Professor in the ECE department of Sree Vidyanikethan Engineering College, Tirupati. She completed her M.Tech in VLSI Design at Sathyabama University. Her research areas are Digital System Design and VLSI Signal Processing.
Ms. D. Leela Rani received the M.Tech degree from Sri Venkateswara University, Tirupati. She is currently working towards the Ph.D. degree in the Department of Electronics and Communication Engineering, SVU College of Engineering, Tirupati. She is working as an Associate Professor in Sree Vidyanikethan Engineering College (Autonomous). Her research areas include Atmospheric Radar Signal Processing and VLSI Signal Processing.