This document presents a VLSI architecture design for particle filtering to enable real-time state estimation. The design aims to take advantage of data-level parallelism in the particle filtering algorithm by distributing particles across many processing elements that work in parallel. A key part of the design is allocating hardware resources for computationally-intensive but parallelizable steps globally to be shared across processing elements, reducing the area needed for each element. The document outlines the particle filtering algorithm and an example radio frequency localization application. It then describes the proposed architecture featuring processing clusters with resampling modules and arrays of processing elements, allowing more particles to be processed in parallel through hardware resource sharing.
Projected Barzilai-Borwein Methods Applied to Distributed Compressive Spectru... (Polytechnique Montreal)
Cognitive radio allows unlicensed (cognitive) users to use licensed frequency bands by exploiting spectrum sensing techniques to detect whether or not the licensed (primary) users are present. In this paper, we present a compressed-sensing approach applied to spectrum-occupancy detection in wide-band applications. The analog signals collected from each cognitive radio (CR) receiver at a fusion center are transformed into discrete-time signals using an analog-to-information converter (AIC) and then used to calculate the autocorrelation. For signal reconstruction, we exploit a novel approach to solve the optimization problem of minimizing both a quadratic (l2) error term and an l1-regularization term. Specifically, we propose the basic gradient projection (GP) and projected Barzilai-Borwein (PBB) algorithms to obtain better performance in terms of the mean squared error of the power spectral density estimate and the detection probability of licensed signal occupancy.
This document proposes a holistic approach to reconstruct data in ocean sensor networks using compressive sensing. It involves two key aspects:
1) A node reordering scheme is developed to improve the sparsity of signals in the discrete cosine transform or Fourier transform domain, reducing the number of measurements needed for accurate reconstruction.
2) An improved sparsity-adaptive tracking algorithm is adopted to estimate the sparsity level and then reconstruct the signal step by step, gradually converging on an accurate reconstruction even when the sparsity is unknown.
Simulation results show the proposed method can effectively improve signal sparsity and accurately reconstruct signals, especially when the sparsity is unknown.
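As an illustration of why reordering helps, the sketch below uses plain sorting as a stand-in for the paper's node-reordering scheme and a hand-built orthonormal DCT-II matrix (both assumptions, not the paper's method), then counts how many DCT coefficients are needed to capture 99% of the signal's energy before and after reordering:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II analysis matrix (rows are basis vectors).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def coeffs_for_energy(x, frac=0.99):
    # How many DCT coefficients are needed to capture `frac` of the energy.
    c = dct_matrix(len(x)) @ x
    e = np.sort(c ** 2)[::-1]
    return int(np.searchsorted(np.cumsum(e) / e.sum(), frac)) + 1

rng = np.random.default_rng(1)
x = rng.normal(size=256)                  # readings in an arbitrary node order
k_raw = coeffs_for_energy(x)              # random order: energy spread widely
k_sorted = coeffs_for_energy(np.sort(x))  # reordered: smooth, hence DCT-sparse
```

A smoother ordering concentrates the energy in far fewer transform coefficients, which is exactly what lowers the number of compressive measurements required.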
Multi-phase-field simulations with OpenPhase (PFHub)
The document describes OpenPhase, an open-source phase field modeling toolbox for simulating microstructure evolution. OpenPhase uses a multi-phase field approach and includes modules for simulating processes like coarsening, diffusion, deformation, plasticity, damage, and fluid flow. It has been under development for over 10 years. The document provides an overview of OpenPhase capabilities and includes an example of using it to simulate Mg-Al alloy solidification, showing the effect of cooling rate on microstructure. It also gives details about setting up and running a simulation using the OpenPhase modules in C++.
Compressive Data Gathering using NACS in Wireless Sensor Network (IRJET Journal)
The document proposes a Neighbor-Aided Compressive Sensing (NACS) scheme for efficient data gathering in wireless sensor networks. NACS exploits both spatial and temporal correlations in sensor data to reduce data transmissions compared to existing compressive sensing models like Kronecker Compressive Sensing (KCS) and Structured Random Matrix (SRM). In NACS, each sensor node sends its raw sensor readings to a uniquely selected nearest neighbor node, which then applies compressive sensing measurements and sends the compressed data to the sink node. Simulation results show NACS achieves better data recovery performance using fewer transmissions than KCS and SRM, improving energy efficiency for data gathering in wireless sensor networks.
MIMO System Performance Evaluation for High Data Rate Wireless Networks usin... (IJMER)
Space-time block coding is used for data communication over fading channels with multiple transmit antennas. Message data is encoded with a space-time block code, and after encoding the data is broken into n streams transmitted simultaneously through n transmit antennas. The signal at the receiver is the superposition of the n transmitted signals, distorted by noise. For data recovery, a maximum-likelihood decoding scheme is applied that decouples the signals transmitted from the different antennas instead of detecting them jointly. This scheme exploits the orthogonal structure of the space-time block code (OSTBC) and yields a maximum-likelihood decoding algorithm based only on linear processing at the receiver. In this paper, a model based on orthogonal space-time block codes is developed in Matlab/Simulink to obtain the maximum diversity order for a given number of transmit and receive antennas with a simple decoding algorithm.
The Simulink orthogonal space-time block coding block is applied with and without Gray coding. OSTBCs give the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as M-PSK. For complex M-PSK constellations, space-time block codes that achieve 1/2 and 3/4 of the maximum possible transmission rate are applied for MIMO transmit antennas.
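As a concrete instance of the linear-processing ML decoder described above, the Alamouti G2 code (the canonical rate-1 OSTBC for two transmit antennas) can be sketched in NumPy; the channel gains and symbols below are illustrative, and noise is omitted so the decoupling is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 - 1j) / np.sqrt(2)   # two QPSK symbols
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)    # flat-fading channel gains

# Alamouti G2 transmission over two symbol periods:
# period 1 sends (s1, s2); period 2 sends (-conj(s2), conj(s1)).
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Orthogonality lets linear combining decouple the symbols, so ML detection
# reduces to independent per-symbol slicing instead of joint detection.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
```

With noise present, the same combiner maximizes the post-combining SNR and delivers the full diversity order of 2 per receive antenna.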
This document summarizes a research paper that proposes a Virtual Backbone Scheduling technique with clustering and fuzzy logic for faster data collection in wireless sensor networks. It introduces the concepts of virtual backbone scheduling, clustering, and fuzzy logic. It presents the system architecture that uses these techniques and includes three clusters with sensor nodes, cluster heads, and a common sink node. Algorithms for virtual backbone scheduling and fuzzy-based clustering are described. Implementation results show that the proposed approach improves network lifetime, reduces error rates, lowers communication costs, and decreases scheduling time compared to existing techniques like TDMA scheduling.
This document describes a testbed for image synthesis developed at Cornell University. The testbed was designed to facilitate research on new light reflection models, global illumination algorithms, and rendering of complex scenes. It uses a modular structure with hierarchical levels of functionality. The lowest level contains utility modules, the middle level contains object modules that work across primitive types, and the highest level contains image synthesis modules. The testbed uses a modeler-independent description format to represent environments independently of modeling programs. Renderers can then generate images from this common description.
1) The document presents a technique for automatically correcting for ion travel time when mass calibrating a single quadrupole mass spectrometer. This allows a single calibration to be used over all mass ranges and scan speeds.
2) By deriving an equation for ion transmission time as a function of mass and scan speed, the mass shift due to varying scan speeds can be calculated and subtracted from acquired data.
3) Empirical testing showed the technique reduced initial mass shifts by at least 85% for all masses and scan speeds, with no residual shift over 0.1 m/z. The combined calibration provides effective mass calibration across an instrument's operating ranges.
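The correction idea can be sketched as follows. The square-root transit-time model (ion velocity scaling as 1/sqrt(m) at fixed energy) and the constant K are illustrative assumptions standing in for the paper's derived equation:

```python
import numpy as np

# Hypothetical transit-time model: at fixed ion energy, velocity scales as
# 1/sqrt(m), so transit time t(m) = K*sqrt(m) and the apparent m/z shift is
# (scan rate) x t(m). K is an illustrative constant, not a real instrument value.
K = 2.0e-5                                   # s per sqrt(m/z), illustrative

def mass_shift(mz, scan_speed):
    # Predicted shift (in m/z units) for a scan rate given in m/z per second.
    return scan_speed * K * np.sqrt(mz)

def correct(measured_mz, scan_speed):
    # Subtract the predicted shift from the acquired m/z values.
    return measured_mz - mass_shift(measured_mz, scan_speed)

true_mz = 500.0
measured = true_mz + mass_shift(true_mz, scan_speed=5000.0)  # simulated shifted peak
corrected = correct(measured, scan_speed=5000.0)
```

Evaluating the shift at the measured rather than the true mass leaves a small second-order residual, which is consistent with the sub-0.1 m/z residual shift the paper reports.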
Design and Fabrication of a Two Axis Parabolic Solar Dish Collector (IJERA Editor)
The work consists of the design of a chain drive system and the fabrication of a two-axis parabolic solar dish. It is a model study of a two-axis parabolic dish driven by an automatic circuit developed for the purpose. A ready-made parabolic dish was taken and the supporting structure fabricated; a circular iron ring provides the two-axis motion of the dish, and a compound chain drive system was developed for its smooth movement. An electromechanical system that tracks the sun on both axes, controlled via a programmable logic controller (PLC), was designed and implemented, supported by a theoretical study. A C program was written to produce a graphical representation of the recorded radiation. A PLC was used instead of the photo sensors that are widely used for sun tracking. The azimuthal angle of the sun from sunrise to sunset was calculated for each day of the year at 23.59° latitude and 72.38° longitude in the Northern hemisphere, the location of the city of Mehsana. According to this azimuth angle, the required analog signal was taken from the PLC analog module and sent to the power window motor, which controlled the position of the panel to ensure that the rays fall vertically on it. After the mechanical control of the system was started, performance measurements of the solar panel were carried out, and the values obtained were compared and evaluated.
This document analyzes the performance of the first-fit wavelength assignment algorithm in optical networks. It proposes a new analytical technique to calculate the blocking probability of a source-destination pair taking into account wavelength correlation and load correlation between links. The model is accurate even with a large number of wavelengths. It first calculates the probabilities that wavelengths are used on individual links and paths. Then it establishes a wavelength correlation model to calculate the blocking probability on a given path based on the wavelength usage probabilities of each link along the path.
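The paper's contribution is the analytical blocking model, which is not reproduced here; the first-fit assignment rule the analysis targets can be sketched as follows (the link and wavelength occupancy sets are illustrative):

```python
def first_fit(path_links, used, W):
    # Scan wavelengths 0..W-1 in a fixed order and assign the lowest one that
    # is free on every link of the path (wavelength-continuity constraint).
    for w in range(W):
        if all(w not in used[link] for link in path_links):
            for link in path_links:
                used[link].add(w)
            return w
    return None  # no common free wavelength on the path: the call is blocked

# Three-hop path; wavelength 0 is busy on link 0, wavelength 1 on links 0 and 1.
used = {0: {0, 1}, 1: {1}, 2: set()}
w = first_fit([0, 1, 2], used, W=4)
```

Because every call probes wavelengths in the same order, usage of low-indexed wavelengths becomes strongly correlated across links, which is the correlation effect the proposed analytical model captures.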
A Transmission Range Based Clustering Algorithm for Topology Control MANET (graphhoc)
This paper presents a novel algorithm for clustering nodes by transmission-range-based clustering (TRBC). The algorithm performs topology management using the coverage area of each node, and power management based on the mean transmission power, within the context of wireless ad-hoc networks. By reducing the transmission range of the nodes, the energy consumed by each node is decreased and the topology is formed. The algorithm helps reduce system power consumption and prolong the battery life of mobile nodes. Cluster formation and selection of the optimal cluster head, using weighted metrics such as battery life, distance, position, and mobility, is done based on factors such as node density, coverage area, contention index, and the required and current node degree of the nodes in the clusters.
Latest 2016 IEEE Projects | 2016 Final Year Project Titles (1 Crore Projects)
Emulation of 3GPP SCME Channel Models Using a Reverberation Chamber Measureme... (IJERA Editor)
1) A test bed was developed to validate 3GPP SCME channel models using a reverberation chamber. Power delay profiles were measured for urban micro and macro channel models and matched well with theoretical profiles.
2) The reverberation chamber was able to control delay spread by adding absorbing materials, allowing different channel models to be emulated. Measurements showed Rayleigh fading was maintained with losses.
3) Convolution of signals with 3GPP channel model taps allowed emulation of multi-cluster channels. Measurements found emulated profiles matched theoretical profiles specified in 3GPP standards.
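The tap-convolution step can be sketched as follows; the tap gains and delays are illustrative placeholders rather than the 3GPP SCME values:

```python
import numpy as np

# Tap-delay-line emulation: convolving the probe signal with the cluster taps
# imposes a multi-cluster power delay profile on the transmitted waveform.
taps = np.array([1.0, 0.0, 0.5, 0.0, 0.25])  # three clusters at delays 0, 2, 4 samples
probe = np.r_[1.0, np.zeros(7)]              # impulse probe signal
received = np.convolve(probe, taps)          # channel output: the taps reappear
```

Probing with an impulse returns the tap profile itself, which is why measured power delay profiles can be compared directly against the theoretical cluster profiles.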
Nonlinear filtering approaches to field mapping by sampling using mobile sensors (ijassn)
This work proposes a novel application of existing powerful nonlinear filters, such as the standard Extended Kalman Filter (EKF), some of its variants, and the standard Unscented Kalman Filter (UKF), to the estimation of a continuous spatio-temporal field that is spread over a wide area and hence, when parameterized, is represented by a large number of parameters. We couple these filters with an adaptive sampling scheme performed by a single mobile sensor and investigate their performance with a view to significantly improving the speed and accuracy of the overall field estimation. Extensive simulation work shows that different variants of the standard EKF and the standard UKF can be used to improve the accuracy of the field estimate. This paper also aims to provide some guidelines for users of these filters in reaching a practical trade-off between the desired field estimation accuracy and the required computational load.
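As a sketch of the filter machinery involved, a generic EKF predict/update cycle is shown below; the toy scalar model (static parameter, cubic measurement) is an illustrative assumption, not the paper's field parameterization:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # One predict/update cycle of the extended Kalman filter: f and h are the
    # process and measurement maps, F and H their Jacobians evaluated at x.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    Hm = H(x_pred)
    S = Hm @ P_pred @ Hm.T + R                  # innovation covariance
    K = P_pred @ Hm.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))        # correct with the innovation
    P_new = (np.eye(len(x)) - K @ Hm) @ P_pred
    return x_new, P_new

# Toy use: estimate a static scalar "field parameter" (truth 2.0) from the
# nonlinear measurement z = x**3; the dynamics are the identity map.
x, P = np.array([1.5]), np.array([[1.0]])
for _ in range(50):
    x, P = ekf_step(x, P, np.array([8.0]),
                    f=lambda s: s, F=lambda s: np.eye(1),
                    h=lambda s: s ** 3, H=lambda s: np.array([[3 * s[0] ** 2]]),
                    Q=1e-6 * np.eye(1), R=np.array([[1e-4]]))
```

For a wide-area field the state vector holds all field parameters at once, and the measurement map picks out the field value at the mobile sensor's current location, which is where adaptive sampling enters.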
Energy aware model for sensor network: a nature inspired algorithm approach (ijdms)
In this paper we propose an energy-aware model for sensor networks. In our approach, we first use the DBSCAN clustering technique to exploit the spatiotemporal correlation among the sensors; we then identify a subset of sensors, called representative sensors, which represent the state of the entire network. Finally, we use nature-inspired algorithms such as Ant Colony Optimization, Bee Colony Optimization, and Simulated Annealing to find the optimal transmission path for data transmission. We conducted our experiments on the publicly available Intel Berkeley Research Lab dataset, and the results show that energy consumption can be reduced.
IRJET: Performance Analysis of Energy Efficient Clustering Protocol using TAB... (IRJET Journal)
This document summarizes a research paper that proposes an improved routing protocol for wireless sensor networks (WSNs) using a hybrid Tabu-PSO technique. It begins with background on WSNs and discusses the General Self-organized Tree-based Energy-balance Routing Protocol (GSTEB). It then introduces a hybrid Tabu-PSO algorithm to optimize routing and cluster head selection in GSTEB in order to overcome its limitations and improve energy efficiency. Simulation results show that the proposed technique outperforms existing methods in terms of reducing packet rate, end-to-end delay, and prolonging network lifetime.
This study analyzed spike train data recorded from neurons in the dorsolateral prefrontal cortex (DLPFC) of a monkey performing a working memory task. Spike train distance metrics were applied to quantify how information about the task was encoded temporally. Optimal parameters were identified for single-unit and multi-unit analyses. Information encoding was found to vary across time intervals of the task, with some neuron pairs showing higher information at different times. Visualizations using t-SNE helped demonstrate that target location could be decoded from spike train distances. The study helps quantify temporal encoding in the DLPFC during working memory tasks.
This document summarizes an energy efficient clustering algorithm proposed for wireless sensor networks. It discusses the objectives, existing system, proposed system, simulation results and conclusions. The existing system uses a distributed self-organization balanced clustering algorithm (DSBCA) that has uniform cluster sizes and issues with node dropout. The proposed energy efficient clustering algorithm (EECA) forms unequal cluster sizes based on average neighbor energy and selects cluster heads through uneven competition ranges. Simulation results show the heterogeneous EECA provides longer network lifetime, higher efficiency and throughput than the homogeneous EECA.
A Fast Fault Tolerant Partitioning Algorithm for Wireless Sensor Networks (csandit)
The document describes a distributed algorithm for partitioning wireless sensor networks into connected partitions to maximize network lifetime. The algorithm finds the maximum number of partitions where each partition is connected and covers the monitoring area. It does this efficiently with less computation time and message overhead compared to previous works. The algorithm also includes a distributed fault recovery method that can locally rearrange an affected partition to tolerate single node failures and extend network lifetime further. Simulation results show the partitioning algorithm is faster and creates better topology partitions, while the fault recovery enhances lifetime by over 50%.
This document provides contact information for VENSOFT Technologies and describes 25 MATLAB projects for the 2013-2014 academic year related to signal processing topics such as phase noise estimation in MIMO systems, distributed averaging algorithms, channel estimation, computation of the moment generating function for lognormal distributions, compressed sensing of EEG data, and compressed sensing for wireless monitoring of fetal ECG signals. The contact for projects is provided as VENSOFT Technologies, their website, and a phone number.
The document discusses using terahertz radiation to characterize electronic components. It describes the experimental setup for terahertz imaging in both transmission and reflection modes. Key applications discussed include using terahertz techniques to determine the refractive index and absorption coefficient of materials, which can be used to distinguish authentic integrated circuits from counterfeits. The document also shows how terahertz imaging can identify features like different layers within objects and blacktopped integrated circuits that are difficult to detect using other methods like x-rays.
Performance Evaluation of Consumed Energy-Type-Aware Routing (CETAR) For Wire... (ijwmn)
This document evaluates the performance of Consumed-Energy-Type-Aware Routing (CETAR) for wireless sensor networks. CETAR makes routing decisions based on statistics of the energy consumed for different node activities like sensing, transmitting, and routing. It aims to encourage nodes that are not often data sources to serve as routing nodes, in order to preserve the energy of active source nodes and prolong network lifetime. Simulation results show that CETAR can significantly extend the lifetime of routing protocols like Geographic and Energy Aware Routing (GEAR) by taking each node's energy consumption patterns into account.
Jennifer Shoemaker is a results-driven sales professional with over 10 years of experience in new home sales, luxury watch sales, and retail sales. She has a proven track record of consistently meeting and exceeding sales goals. She is currently working as a new home sales professional for Irvine Pacific, where her responsibilities include leading sales presentations, qualifying prospects, giving property tours, and assisting with the close of escrow process. Prior to this, she worked in luxury watch sales for Omega and Tourneau, developing loyal clientele and becoming a member of the Million Dollar Club for 7 consecutive years.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The document discusses the Deep Web and security measures for browsing it. The Deep Web is the part of the Internet not indexed by search engines and represents 96% of total Internet content. It contains both legal and illegal information hidden on .onion sites accessible only through special browsers and networks such as Tor to maintain anonymity. Precautions such as using a separate PC and network, encrypting the IP, and keeping antivirus and firewall software up to date are recommended before exploring the Deep Web.
Design and Fabrication of a Two Axis Parabolic Solar Dish CollectorIJERA Editor
The work consists of the design of the chain drive system and the fabrication of the two axis parabolic solar dish.
It is a model study of the two axis parabolic dish which worked by the automatic circuit that was developed. Ready
made parabolic solar dish is taken and fabricated. The circular iron ring provides the two axis motion of the dish.
A compound chain drive system was developed for the smooth movement of the dish. An electromechanical
system which tracks the sun on both axes and which is controlled via a programmable logic control (PLC) was
designed and implemented. In this a theoretical study was done. A C program was made which gave the required
result for the graphical representation of the recorded radiation. Programmable Logic Controls (PLC) was used
instead of photo sensors, which are widely used for tracking the sun. The azimuthal angle of the sun from sunrise
to sunset times was calculated for each day of the year at 23.59 Lat & 72.38Longitude in the Northern hemisphere,
the location of the city Mehsana. According to this azimuth angle, the required analog signal was taken from the
PLC analog module and sent to the power window motor, which controlled the position of the panel to ensure that
the rays fall vertically on the panel. After the mechanical control of the system was started, the performance
measurements of the solar panel were carried out. The values obtained from the measurements were compared and
the necessary evaluations were conducted.
This document analyzes the performance of the first-fit wavelength assignment algorithm in optical networks. It proposes a new analytical technique to calculate the blocking probability of a source-destination pair taking into account wavelength correlation and load correlation between links. The model is accurate even with a large number of wavelengths. It first calculates the probabilities that wavelengths are used on individual links and paths. Then it establishes a wavelength correlation model to calculate the blocking probability on a given path based on the wavelength usage probabilities of each link along the path.
A Transmission Range Based Clustering Algorithm for Topology Control Manetgraphhoc
This paper presents a novel algorithm for clustering of nodes by transmission range based clustering (TRBC). The algorithm performs topology management using the coverage area of each node, and power management based on mean transmission power, within the context of wireless ad-hoc networks. By reducing the transmission range of the nodes, the energy consumed by each node is decreased and the topology is formed. The new algorithm helps reduce system power consumption and prolong the battery life of mobile nodes. Cluster formation and selection of the optimal cluster head, using weighted metrics such as battery life, distance, position and mobility, is done based on factors such as node density, coverage area, contention index, and the required and current node degree of the nodes in the clusters.
Emulation of 3GPP SCME Channel Models Using a Reverberation Chamber Measureme...IJERA Editor
1) A test bed was developed to validate 3GPP SCME channel models using a reverberation chamber. Power delay profiles were measured for urban micro and macro channel models and matched well with theoretical profiles.
2) The reverberation chamber was able to control delay spread by adding absorbing materials, allowing different channel models to be emulated. Measurements showed Rayleigh fading was maintained with losses.
3) Convolution of signals with 3GPP channel model taps allowed emulation of multi-cluster channels. Measurements found emulated profiles matched theoretical profiles specified in 3GPP standards.
Nonlinear filtering approaches to field mapping by sampling using mobile sensorsijassn
This work proposes a novel application of existing powerful nonlinear filters, such as the standard Extended Kalman Filter (EKF), some of its variants, and the standard Unscented Kalman Filter (UKF), to the estimation of a continuous spatio-temporal field that is spread over a wide area and hence represented by a large number of parameters when parameterized. We couple these filters with the powerful scheme of adaptive sampling performed by a single mobile sensor, and investigate their performance with a view to significantly improving the speed and accuracy of the overall field estimation. Extensive simulation work shows that different variants of the standard EKF and the standard UKF can be used to improve the accuracy of the field estimate. This paper also aims to provide some guidelines for users of these filters in reaching a practical trade-off between the desired field estimation accuracy and the required computational load.
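As a minimal sketch of the idea (not the paper's filters), consider a field that is linear in its parameters, f(x) = phi(x)·theta with Gaussian basis functions; the EKF measurement update then reduces to the standard Kalman form. The basis width, grid of centers, and noise values below are illustrative assumptions.

```python
import numpy as np

def basis(x, centers, width=0.5):
    """Gaussian radial basis functions: field value is f(x) = basis(x) @ theta."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def field_update(theta, P, x_meas, z, R, centers):
    """One Kalman measurement update of the field parameters after the mobile
    sensor reads value z at location x_meas. The field is linear in theta,
    so the EKF Jacobian is just the basis vector at the sample location."""
    H = basis(x_meas, centers)[None, :]   # 1 x n measurement Jacobian
    S = float(H @ P @ H.T) + R            # scalar innovation covariance
    K = (P @ H.T) / S                     # n x 1 Kalman gain
    innovation = z - float(H @ theta)
    theta = theta + (K * innovation).ravel()
    P = P - K @ H @ P
    return theta, P

# One update: the sensor reads 1.0 at x = 1.0 over a 3-basis field
centers = np.array([0.0, 1.0, 2.0])
theta, P = np.zeros(3), np.eye(3)
theta, P = field_update(theta, P, x_meas=1.0, z=1.0, R=0.1, centers=centers)
```

Each measurement shrinks the covariance most strongly for the parameters whose basis functions overlap the sample location, which is what makes adaptive placement of the next sample worthwhile.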
Energy aware model for sensor network a nature inspired algorithm approachijdms
In this paper we propose an energy-aware model for sensor networks. In our approach, we first use the DBSCAN clustering technique to exploit the spatiotemporal correlation among the sensors; we then identify a subset of sensors, called representative sensors, which represent the entire network state; finally, we use nature-inspired algorithms such as Ant Colony Optimization, Bee Colony Optimization, and Simulated Annealing to find the optimal transmission path for data transmission. We conducted our experiment on the publicly available Intel Berkeley Research Lab dataset, and the experimental results show that energy consumption can be reduced.
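One simple way to pick a representative for a cluster of correlated sensors (a toy illustration, not the paper's method) is the medoid: the sensor whose reading series is closest on average to every other member's. The reading values below are hypothetical.

```python
def representative(cluster):
    """Pick the medoid of a cluster of sensor reading series: the member
    whose summed Euclidean distance to all other members is smallest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(cluster, key=lambda s: sum(dist(s, t) for t in cluster))

# Three correlated temperature traces; the middle one is the medoid
readings = [[20.0, 21.0, 22.0], [20.5, 21.5, 22.5], [23.0, 24.0, 25.0]]
rep = representative(readings)
```

Only the representative then needs to transmit, which is where the energy saving of the clustering step comes from.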
IRJET- Performance Analysis of Energy Efficient Clustering Protocol using TAB...IRJET Journal
This document summarizes a research paper that proposes an improved routing protocol for wireless sensor networks (WSNs) using a hybrid Tabu-PSO technique. It begins with background on WSNs and discusses the General Self-organized Tree-based Energy-balance Routing Protocol (GSTEB). It then introduces a hybrid Tabu-PSO algorithm to optimize routing and cluster head selection in GSTEB in order to overcome its limitations and improve energy efficiency. Simulation results show that the proposed technique outperforms existing methods in terms of reducing packet rate, end-to-end delay, and prolonging network lifetime.
This study analyzed spike train data recorded from neurons in the dorsolateral prefrontal cortex (DLPFC) of a monkey performing a working memory task. Spike train distance metrics were applied to quantify how information about the task was encoded temporally. Optimal parameters were identified for single-unit and multi-unit analyses. Information encoding was found to vary across time intervals of the task, with some neuron pairs showing higher information at different times. Visualizations using t-SNE helped demonstrate that target location could be decoded from spike train distances. The study helps quantify temporal encoding in the DLPFC during working memory tasks.
This document summarizes an energy efficient clustering algorithm proposed for wireless sensor networks. It discusses the objectives, existing system, proposed system, simulation results and conclusions. The existing system uses a distributed self-organization balanced clustering algorithm (DSBCA) that has uniform cluster sizes and issues with node dropout. The proposed energy efficient clustering algorithm (EECA) forms unequal cluster sizes based on average neighbor energy and selects cluster heads through uneven competition ranges. Simulation results show the heterogeneous EECA provides longer network lifetime, higher efficiency and throughput than the homogeneous EECA.
A FAST FAULT TOLERANT PARTITIONING ALGORITHM FOR WIRELESS SENSOR NETWORKScsandit
The document describes a distributed algorithm for partitioning wireless sensor networks into connected partitions to maximize network lifetime. The algorithm finds the maximum number of partitions where each partition is connected and covers the monitoring area. It does this efficiently with less computation time and message overhead compared to previous works. The algorithm also includes a distributed fault recovery method that can locally rearrange an affected partition to tolerate single node failures and extend network lifetime further. Simulation results show the partitioning algorithm is faster and creates better topology partitions, while the fault recovery enhances lifetime by over 50%.
The document discusses using terahertz radiation to characterize electronic components. It describes the experimental setup for terahertz imaging in both transmission and reflection modes. Key applications discussed include using terahertz techniques to determine the refractive index and absorption coefficient of materials, which can be used to distinguish authentic integrated circuits from counterfeits. The document also shows how terahertz imaging can identify features like different layers within objects and blacktopped integrated circuits that are difficult to detect using other methods like x-rays.
Performance Evaluation of Consumed Energy-Type-Aware Routing (CETAR) For Wire...ijwmn
This document evaluates the performance of Consumed-Energy-Type-Aware Routing (CETAR) for wireless sensor networks. CETAR makes routing decisions based on statistics of the energy consumed for different node activities like sensing, transmitting, and routing. It aims to encourage nodes that are not often data sources to serve as routing nodes, in order to preserve the energy of active source nodes and prolong network lifetime. Simulation results show that CETAR can significantly extend the lifetime of routing protocols like Geographic and Energy Aware Routing (GEAR) by taking each node's energy consumption patterns into account.
Effective Occlusion Handling for Fast Correlation Filter-based TrackersEECJOURNAL
Correlation filter-based trackers heavily suffer from the problem of multiple peaks in their response maps incurred by occlusions. Moreover, the whole tracking pipeline may break down due to the uncertainties brought by shifting among peaks, which will further lead to the degraded correlation filter model. To alleviate the drift problem caused by occlusions, we propose a novel scheme to choose the specific filter model according to different scenarios. Specifically, an effective measurement function is designed to evaluate the quality of filter response. A sophisticated strategy is employed to judge whether occlusions occur, and then decide how to update the filter models. In addition, we take advantage of both log-polar method and pyramid-like approach to estimate the best scale of the target. We evaluate our proposed approach on VOT2018 challenge and OTB100 dataset, whose experimental result shows that the proposed tracker achieves the promising performance compared against the state-of-the-art trackers.
EMBC'13 Poster Presentation on "A Bio-Inspired Cooperative Algorithm for Dist...Md Kafiul Islam
The document proposes an algorithm for distributed optimization with mobile nodes that do not know the cost function beforehand. Each node estimates the gradient vector to update its location. The proposed algorithm improves upon an existing algorithm by relying on information-rich nodes in the neighborhood instead of a linear combination of neighbors' estimates. It also uses a variable step size to increase the probability of finding information-rich nodes early in the iterations. Simulation results show the proposed algorithm achieves better performance than the existing algorithm and a non-cooperative scheme. The algorithm has applications in sensor networks, environmental monitoring, and other domains.
Comparative study between metaheuristic algorithms for internet of things wir...IJECEIAES
Wireless networks are currently used in a wide range of healthcare, military, and environmental applications. They contain many nodes and sensors with significant limitations, including limited power, limited processing, and narrow range. Determining the coordinates of a node of unknown location, at low cost and with limited processing, is therefore one of the most important challenges in this field. Many meta-heuristic algorithms help identify unknown nodes from some known nodes. In this manuscript, hybrid metaheuristic optimization algorithms such as grey wolf optimization and the salp swarm algorithm are used to solve the localization problem of internet of things (IoT) sensors. Several experiments are conducted on each meta-heuristic optimization algorithm to compare it with the proposed method. The proposed algorithm achieved high accuracy with a low error rate (0.001) and low power consumption.
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networksijsrd.com
Efficiency in a Wireless Sensor Network can only be obtained with effective routing mechanisms. This paper uses Particle Swarm Optimization (PSO), a metaheuristic algorithm, to perform the routing process. Since PSO does not have a predefined fitness function, it is flexible enough to incorporate user-defined QoS parameters into the fitness function.
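To illustrate how a user-defined QoS fitness plugs into PSO, here is a minimal continuous PSO; the inertia weight, acceleration constants, and the toy weighted-cost fitness are illustrative assumptions, not the paper's parameters.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal PSO minimizer. The fitness is a user-supplied callable, so
    QoS metrics (delay, energy, hop count) can be folded into it without
    changing the solver itself."""
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal bests
    gbest = min(pbest, key=fitness)[:]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

# Toy QoS-style fitness: a weighted sum of two "cost" terms, minimized at 0
best = pso(lambda v: v[0] ** 2 + 2 * v[1] ** 2, dim=2)
```

In a routing setting, the position vector would encode a candidate route and the fitness would weigh delay, residual energy, and packet loss instead of this toy quadratic.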
Ieee transactions 2018 topics on wireless communications for final year stude...tsysglobalsolutions
This document contains summaries of several academic papers related to wireless communications and signal processing. The summaries are 3 sentences or less and provide the high level purpose and key findings of each paper. The papers cover topics like content placement in cache-enabled small cell networks, joint beamformer design for wireless fronthaul and access links, long-term power procurement scheduling for smart grids, and frequency-domain compressive channel estimation for hybrid mmWave MIMO systems among others.
Congestion Control in Manets Using Hybrid Routing ProtocolIOSR Journals
As the network size increases, the probability of congestion occurring at nodes increases. This is because of the event-driven nature of ad hoc networks, which leads to unpredictable network load. As a result, congestion may occur at nodes that receive more data than they can forward, causing packet losses. In this paper we propose a hybrid scheme that attempts to avoid packet loss due to congestion and to reduce end-to-end delay in delivering data packets by combining two protocols: Destination-Sequenced Distance Vector routing (DSDV), a table-driven or proactive protocol, and Improved Ad-hoc On-demand Distance Vector routing (IAODV), an on-demand or reactive protocol that reduces packet loss due to congestion. The strategy adopted is to use DSDV for path selection and, if congestion occurs, switch over to IAODV. The routing performance of this model is then compared with IAODV and DSDV in terms of end-to-end delay, throughput and packet delivery fraction.
Congestion Control in Manets Using Hybrid Routing ProtocolIOSR Journals
1. The document proposes a hybrid routing protocol that combines DSDV and IAODV to reduce packet loss due to congestion in MANETs.
2. Under the proposed scheme, DSDV is used initially for path selection. If congestion occurs, nodes switch to using IAODV to find an alternate path to avoid congested areas.
3. Simulation results show that the hybrid protocol improves end-to-end delay, packet delivery fraction, and throughput compared to using only DSDV or IAODV. The hybrid approach balances the advantages of proactive and reactive routing to better handle congestion in mobile ad hoc networks.
Data mining projects topics for java and dot netredpel dot com
This document discusses several papers related to data mining and machine learning techniques. It begins with a brief summary of each paper, discussing the key contributions and findings. The summaries cover topics such as differential privacy-preserving data anonymization, fault detection in power systems using decision trees, temporal pattern searching in event data, high dimensional indexing for similarity search, landmark-based approximate shortest path computation, feature selection for high dimensional data, temporal pattern mining in data streams, data leakage detection, keyword search in spatial databases, analyzing relationships on Wikipedia, improving recommender systems using user-item subgroups, decision trees for uncertain data, and building confidential query services in the cloud using data perturbation.
A Robust Topology Control Solution For The Sink Placement Problem In WSNsJim Webb
This document proposes a topology control solution using discrete particle swarm optimization (DPSO) with local search to solve the sink placement problem in wireless sensor networks (WSNs). The goal is to minimize the maximum worst case delay in the network by determining the optimal number and locations of sinks. Traffic flow analysis (TFA) is used to calculate delay, and the approach is extended to allow its use for multiple sink scenarios. Experiments show the DPSO approach outperforms genetic algorithm-based sink placement (GASP), the state-of-the-art solution, finding better sink locations up to 3 times faster.
Nearest Adjacent Node Discovery Scheme for Routing Protocol in Wireless Senso...IOSR Journals
Wireless Sensor Networks are of broad significance in most emergency and disaster rescue domains. The routing process is the main challenge in a wireless sensor network due to the lack of physical links. The objective of routing is to find the optimum path for transferring packets from a source node to a destination node. Routing should generate feasible routes between nodes, send traffic along the selected path, and achieve high performance. This paper presents a nearest adjacent node scheme based on a shortest path routing algorithm, which plays an important role in energy conservation. It finds the best location of the nearest adjacent nodes by involving the least number of nodes in the transmission of data and setting a large number of nodes to sleep in idle mode. Simulation results show a significant improvement in energy saving, enhancing the life of the network.
This document presents a genetic algorithm approach for solving the Minimum Cost Localization Problem (MCLP) in wireless sensor networks. The MCLP aims to determine the minimum number of beacon nodes needed to localize all other nodes in the network.
The document begins with an introduction to localization in wireless sensor networks and discusses previous work on the MCLP, including greedy algorithms. It then formally defines the MCLP. The document proposes a genetic algorithm to improve upon previous greedy approaches. Simulation results show the genetic algorithm approach improves localization performance over the best existing greedy algorithm by up to 50% in some cases.
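Beacon selection of this kind is a set-cover-style problem, and a GA for it can be sketched as follows; this is a toy illustration under assumed encodings (one bit per candidate beacon, a heavy penalty for uncovered nodes), not the paper's algorithm or parameters.

```python
import random

def ga_min_beacons(coverage, n_nodes, pop_size=30, gens=60, mut=0.1):
    """Toy GA for beacon selection: a chromosome is one bit per candidate
    beacon; fitness penalizes uncovered nodes heavily and beacon count
    lightly, so small feasible beacon sets win."""
    random.seed(1)
    n = len(coverage)

    def fitness(bits):
        covered = set()
        for i, b in enumerate(bits):
            if b:
                covered |= coverage[i]
        return sum(bits) + 100 * (n_nodes - len(covered))

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut else g for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# 3 candidate beacons; beacons 0 and 2 together cover all 4 nodes
cover = [{0, 1}, {1}, {2, 3}]
best = ga_min_beacons(cover, n_nodes=4)
```

Unlike a greedy pass, the population can escape a locally attractive but globally wasteful beacon choice, which is the intuition behind the reported improvement over greedy baselines.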
Shortest path algorithm for data transmission in wireless ad hoc sensor networksijasuc
Wireless sensor networks show promise in military, environmental, health and commercial applications. The process of transferring information from a remote sensor node to other nodes in the network is important for such applications. Various constraints, such as limited computation, storage and power, make the routing of information interesting and have opened new arenas for researchers. The fundamental problem in sensor networks concerns the routing of information through a real path, as path length decides some basic performance parameters for sensor networks. This paper focuses on a shortest path algorithm for wireless ad hoc networks. The simulations are performed on NS2, and the results discuss the role of transferring information through a shortest path.
IRJET- Sink Mobility based Energy Efficient Routing Protocol for Wireless Sen...IRJET Journal
The document describes a proposed sink mobility based energy efficient routing protocol for wireless sensor networks. The protocol uses both a static centralized sink and a mobile sink that follows a predetermined path with 4 sojourn locations. This is aimed to improve network lifetime by balancing energy load across nodes. Simulation results show that the proposed approach with a mobile sink performs better than the Threshold sensitive Energy Efficient sensor Network (TEEN) protocol alone in terms of number of alive nodes, number of cluster heads, and number of packets sent to the base station over multiple rounds. Using a mobile sink helps scatter the energy load in the network and extends lifetime compared to only using a static sink.
The MANET environment was represented by a combination of node position, mobility speed, node type, and number of nodes. In this paper, a novel system for MANET environment evaluation is proposed, involving a fuzzy multi-criteria decision maker (FMCDM) to reflect the importance of the MANET environment on overall protocol performance. The proposed system is combined with another system previously suggested for MANET protocol evaluation; the outputs of these systems are merged to produce one single crisp value in the interval [0, 1]. A case study for an office was then implemented using the OPNET 14.5 simulator to test the proposed system. Results proved that the MANET environment could be used to enhance the QoS of the protocol. In other words, such factors, along with the inherent characteristics of ad-hoc networks, may result in unpredictable variations in overall network performance.
This document presents an implementation of an ant colony optimization adaptive network-on-chip routing framework using a network information region. The proposed method combines backward ant mechanism with a network information region framework to improve network performance, area efficiency, and reduce congestion. Simulation results show that updating routing tables is faster with the proposed method, leading to improved network performance and area efficiency while reducing congestion compared to other approaches.
This document provides information on several remote sensing projects from IEEE 2015. It lists the titles, languages, and abstracts for 8 projects related to classification and analysis of hyperspectral and multispectral images. The projects focus on techniques such as sparse representation in tangent space, Gabor feature-based collaborative representation, level set evolutions for object extraction, and dimension reduction using spatial and spectral regularization.
EAMTA_VLSI Architecture Design for Particle Filtering in
1. VLSI Architecture Design for Particle Filtering in Real-time
A. Pasciaroni∗†, J. A. Rodríguez†, F. Masson∗†, P. Julián∗†, E. Nebot‡
∗Dep. Ing. Eléctrica y Computadoras, Universidad Nacional del Sur
Av. Alem 1253, Bahía Blanca, Argentina
†CONICET, Argentina
‡Australian Centre for Field Robotics, University of Sydney, Australia
Abstract—The Particle Filter is an algorithm that provides system state estimation even for non-linear and non-Gaussian systems. For applications that require a large number of particles, the real-time constraint is hard to meet, since the algorithm is computationally expensive and the resampling step becomes a bottleneck. In this work, a VLSI architecture for particle filtering in real time is presented. The proposed design implements a fraction of the processing using piecewise linear functions and allocates them as global resources. In this way, a large number of processing elements (PE) working in parallel can be instantiated in the design. An example based on range-only localization using Radio-Frequency Identification (RFID) tags is developed to illustrate the approach. The received signal strength indicator (RSSI) is used to estimate the distance between transmitter and receiver. A VHDL RTL model of the processing data flow is implemented and compared to Matlab simulations, showing similar results.
Index Terms—Particle Filter, VLSI Design, RFID, RTL.
I. INTRODUCTION
Particle Filters (PF) [1] are a method to perform statistical dynamic state estimation. The probability density function of a given state is represented by a set of weighted entities, or particles, which is updated iteratively according to sensor measurements and a dynamic system model. The three main steps of the particle filter are sampling, update and resampling. The last step presents high data dependency between particles, making it the major bottleneck in the execution time of the filter.
There exist applications that require real-time estimation of non-linear and non-Gaussian systems, such as robot localization and visual tracking [2], [3], [4]. These applications are well suited for particle filtering, but a large number of particles is required to provide accurate estimations. Since the PF algorithm is computationally expensive and the resampling step cannot be fully parallelized, real-time particle filter computation is limited by the available computational resources. In this context, a VLSI implementation that exploits the data-level parallelism of the algorithm enables particle filtering in real time.
Previous works have addressed particle filter implementations for real-time applications [5], [6], [7], [8]. In [5], a PF architecture composed of multiple processing elements and a central unit for the bearing-only tracking problem is presented and implemented on an FPGA. Particle filter steps are performed locally on each processing element (PE). After resampling, a central unit controls the particle exchange among processors in order to reduce performance degradation. Several communication schemes are introduced, including a fixed particle exchange among processors. In [7], a VLSI design of the processing element is presented which also includes a pipelined dataflow that deals with logic blocks of variable latency. In [8], a central unit that performs the communication schemes introduced in [5], for an architecture composed of four processing elements, is designed and a VLSI implementation is presented. In [6], a parallel pipelined design is presented in which the number of replicated pipeline stages is variable; taking into account the rate of each stage, an optimal number of replicated stages is determined. However, a VLSI implementation that takes full advantage of the data-level parallelism present in the algorithm has not been developed yet.
In this work, a VLSI architecture for particle filtering in real-time applications is presented. It is composed of processing clusters, each with one resampling module and an array of PEs. Each PE performs, in a pipelined fashion, the steps of the PF operation that do not present data dependency. Therefore, if more PEs can be instantiated in a given silicon area, more particles can be effectively processed in parallel, increasing the throughput. Afterwards, resampling modules gather the PE outputs so that resampling is performed in groups. In addition, to reduce the PE area, a fraction of the PE data processing is time-multiplexed, so the hardware dedicated to this processing is instantiated once and can be shared by multiple PEs.
The application chosen to illustrate the approach is target tracking based on the Received Signal Strength Indicator (RSSI) of Radio Frequency Identification (RFID) devices.
The paper is organized as follows. Section II presents the localization framework and the RSSI sensor model. The architecture and microarchitecture design is presented in Section III. The execution time of the proposed architecture is analyzed in Section IV. Simulation results comparing the VHDL RTL and Matlab models are presented in Section V. Finally, Section VI is dedicated to the conclusions.
II. LOCALIZATION FRAMEWORK
In sensor networks, Radio Frequency based localization systems have gained importance in environments where Global Positioning System (GPS) based solutions do not perform well due to poor satellite availability or multipath issues [9], [10]. This is a possible situation for the chosen target application: truck localization in opencast mining environments [9].
Fig. 1: Two Ray Model for a communication link at 433 MHz in a rural environment.
The RFID technology comprises the receivers, antennas
and RFID tags. The tags send their identification number to
the receivers. Making use of the RSSI, it is possible to estimate the distance between a tag and a receiver, since RSSI values decrease with distance according to a known law. Due to the several factors that affect the propagation of electromagnetic waves in a medium (refraction, reflection, scattering), the received power vs. distance relation varies with the obstacles in the environment, the height and direction of the antenna, and also the power of the transmitted signal. This results in a non-bijective and thus multimodal sensor function.
Figure 1 shows a typical two-ray propagation model of RF signals [11] for a rural environment, a communication frequency of 433 MHz, and transmitter and receiver heights of 2.5 m. It shows the average signal strength of the received power versus distance. For a given distance, the distribution of the RF signal is considered Gaussian, and its variance varies with the signal strength [9]. It can be observed that for a received power of −70 dBm there exist multiple distance values, 8 m, 15.5 m, 20 m and 43.1 m, one of which corresponds to the true tag position. This example illustrates the multimodal probability density function associated with the RFID sensor.
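This multimodality can be reproduced with a minimal sketch of a two-ray ground-reflection curve. The 433 MHz carrier and the 2.5 m antenna heights come from the text above; the transmit power `PT_DBM` and the free-space normalization are illustrative assumptions, not the exact model behind Fig. 1:

```python
import numpy as np

# Illustrative two-ray ground-reflection curve (not the paper's exact model).
C, F_HZ = 3e8, 433e6
LAM = C / F_HZ                  # wavelength [m]
HT = HR = 2.5                   # antenna heights [m], as in Fig. 1
PT_DBM = 10.0                   # assumed transmit power [dBm]

def two_ray_rssi(d):
    """Average received power [dBm] vs. horizontal distance d [m]."""
    d_los = np.sqrt(d**2 + (HT - HR)**2)       # direct path length
    d_ref = np.sqrt(d**2 + (HT + HR)**2)       # ground-reflected path length
    dphi = 2 * np.pi * (d_ref - d_los) / LAM   # phase lag of the reflection
    # Grazing-incidence ground reflection coefficient ~ -1.
    field = np.abs(1.0 / d_los - np.exp(1j * dphi) / d_ref)
    return PT_DBM + 20 * np.log10(LAM / (4 * np.pi) * field)

d = np.linspace(1.0, 60.0, 600)
rssi = two_ray_rssi(d)
# A single power level is reached at several distances: this is exactly
# the multimodality of the RSSI sensor discussed above.
level = np.median(rssi)
crossings = d[:-1][np.sign(rssi[:-1] - level) != np.sign(rssi[1:] - level)]
print(len(crossings))  # more than one distance maps to the same RSSI
```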
RSSI based localization can be performed using the particle filter algorithm. Consider a hypothetical scenario of one RFID tag moving in 2-D and one antenna located at the origin. Let $p^i_k$ denote the $i$-th particle, where $p^i_k = [x \; \dot{x} \; y \; \dot{y}]'$. The target system evolution is given by

$$f(p^i_{k-1}, v_x, v_y) = \begin{bmatrix} 1 & \Delta T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta T \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot p^i_{k-1} + \begin{bmatrix} 0.5 \cdot \Delta T^2 & 0 \\ \Delta T & 0 \\ 0 & 0.5 \cdot \Delta T^2 \\ 0 & \Delta T \end{bmatrix} \cdot \begin{bmatrix} v_x \\ v_y \end{bmatrix}, \quad (1)$$

where $v_x$ and $v_y$ are drawn from a uniform distribution $U[0, Q]$.
The pseudocode of the Particle Filter algorithm for the chosen application and for a set of N particles is described below:

random initialization of particles;
for i ← 1 to N do
  p_k^i = f(p_{k-1}^i, v_x, v_y);   // sampling
  d^i = sqrt(p_k^i(1)² + p_k^i(3)²);
  Pot^i = F_sensor(d^i);
  w^i = (1/√(2π·σ²)) · exp(−(Pot^i − Pot_measurement)² / (2·σ²));   // update
end
[ŵ, p̂] = resampling(w, p_k);
where $Pot_{measurement}$ is the power measurement of the received signal, whose variance is $\sigma^2$, and $F_{sensor}(d)$ is the mathematical expression of the two-ray propagation model whose characteristic is shown in Figure 1. Depending on the obstacles present in the environment, a more complex sensor model can be utilized. For the resampling step there exist several algorithms [12], [5], [13]. The position estimate is computed by the following equation:

$$\tilde{x} = \sum_{i=1}^{N} \hat{p}^i \cdot \hat{w}^i \quad (2)$$
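The pseudocode and Eqs. (1)–(2) above can be sketched compactly in NumPy. The sensor stand-in and the parameter values are illustrative assumptions, not the paper's tabulated two-ray model:

```python
import numpy as np

rng = np.random.default_rng(0)

DT, Q, SIGMA = 0.1, 0.5, 2.5   # assumed time step, noise bound and RSSI std-dev

# State transition of Eq. (1); the state vector is [x, vx, y, vy].
F = np.array([[1.0, DT, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, DT],
              [0.0, 0.0, 0.0, 1.0]])
G = np.array([[0.5 * DT**2, 0.0],
              [DT, 0.0],
              [0.0, 0.5 * DT**2],
              [0.0, DT]])

def f_sensor(d):
    # Stand-in for the tabulated two-ray model Fsensor(d); a log-distance
    # law is used here only so that the sketch runs self-contained.
    return -40.0 * np.log10(np.maximum(d, 1e-3)) - 30.0

def pf_step(particles, pot_meas):
    """One sampling + update iteration over all particles (one per row)."""
    v = rng.uniform(0.0, Q, size=(len(particles), 2))   # [vx, vy] ~ U[0, Q]
    particles = particles @ F.T + v @ G.T               # sampling, Eq. (1)
    d = np.hypot(particles[:, 0], particles[:, 2])      # distance to antenna
    w = np.exp(-(f_sensor(d) - pot_meas)**2 / (2 * SIGMA**2)) \
        / np.sqrt(2 * np.pi * SIGMA**2)                 # update
    return particles, w / w.sum()

particles = rng.uniform(-20.0, 20.0, size=(4096, 4))
particles, weights = pf_step(particles, pot_meas=-55.42)
estimate = weights @ particles                          # Eq. (2)
print(estimate[0], estimate[2])                         # estimated x and y
```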
III. DESIGN
A. Architecture
The PF algorithm does not present data dependence between particles except in the resampling step. When the number of particles increases, the resampling execution time can become a bottleneck. A strategy to reduce the resampling execution time is to divide the total number of particles into groups, so the parallelism level is increased [5]. Each particle group is processed by a dedicated processor. Since the resampling step is sequentially executed, there exists a trade-off between the number of processors and the estimation error: as the number of particle groups increases, so does the degradation of the filter [14]. In order to reduce this performance degradation, a particle exchange must be performed among processors. In [15] an optimization of the particle exchange procedure is presented. A formal analysis, applying the Kullback-Leibler divergence, proves that exchanging the particles with the largest weights between adjacent processors results in better accuracy than random particle mixing. In [14] this exchange is performed after resampling, thus the selection of particles with the largest weight is avoided. The analysis of algorithm parallelization has been done in [14], allowing the selection of an optimal configuration. Once one filter iteration has been performed, the estimates of all processors are combined in order to provide a global estimation [15].
The system consists of two modules: the measurement unit
and the processing unit. The system block diagram is shown
in Fig. 2-a. The measurement unit sets up the RSSI value and
computes the reciprocal of the noise variance. The processing
unit performs the PF algorithm and provides an estimated
position.
In order to process thousands of particles in real time, the processing unit architecture must exploit data-level parallelism and at the same time take into account the strategy described above. A parallelism-level hierarchy is adopted. The first level is realized by introducing multiple processing elements (PEs), each one performing the PF algorithm steps that do not present data dependency. The second level consists in gathering PEs in clusters, so the data input for the resampling step is made up of the processed particle and weight of each PE inside a cluster. For the final position estimation, the estimate of each cluster is combined as previously mentioned. Particle exchange among clusters is also performed.
The proposed VLSI design implements the most area-consuming operations as external (out of the array) look-up tables (LUTs). These LUTs are taken out of the processing element dataflow and placed as global resources. For each table there is a Broadcast module that sequentially reads the table and performs interpolation. The interpolated value and interpolation address are broadcast to all PEs through buses. Each PE locally computes its required interpolation address and compares it with the current value on the bus. If a match is found, the corresponding data value is acquired by the PE.
Figure 2-b shows a more detailed architecture of the processing unit. It has 4 clusters with 4 PEs each. The sensor measurement and the reciprocal of its variance, $1/\sigma^2$, are communicated to all PEs. Four global resources are introduced: the Square, Sqrt, Sensor and Normal LUTs. Each broadcast module has two independent buses: an interpolation-address bus and a data bus. Resampling, pseudo-random number generator (PRNG) and word-to-memory modules inside each cluster are also introduced. All modules are explained in further subsections. Communication among clusters is not shown, to simplify the diagram.
Each cluster has its own local memory and works without data dependence on the others, except when the particle exchange is performed. Processing elements belonging to a cluster share the local memory.
Regarding control logic, each cluster has its own control logic that manages main memory reading and writing, as well as global control signals. Furthermore, each processing element that integrates a cluster has a dataflow pipeline whose control is distributed. Since each pipeline stage has a variable delay, dependent on the time instant when the corresponding value is present on the data bus, global pipeline control is not practical. Therefore, each stage has local control logic driven by data events.
B. Cluster Operation
Cluster operation proceeds as follows: while in execution, each PE inside a cluster reads a particle from memory. Each Broadcast module sequentially reads its corresponding LUT, interpolates, and broadcasts the interpolated value and interpolation address to all PEs. Since the PE dataflow is pipelined, a single table read is utilized to process several particles. Main memory has two ports, so memory reading and writing are performed simultaneously.
Two arrays, one made up of particles and another of processed weights from each PE, are the input to the Resampling module. Once the arrays have been fully updated, resampling is performed. The elements of the resampling arrays are processed sequentially. As soon as one element is resampled, it is immediately updated by the corresponding PE. Once all data from local memory has been processed, communication among clusters is performed.
Fig. 2: a) Block diagram of the VLSI architecture for the proposed tracking system; b) architecture of the processing unit.
C. LUTs design
The functions implemented in the LUTs are: square, square root, two-ray propagation model (as shown in Fig. 1) and normal distribution. All of them are evaluated with a piecewise linear function with uniform segmentation. By performing interpolation, a reduction in table size is achieved. At a point $x \in [a, b]$, the linear interpolation is calculated as follows:

$$\tilde{f}(x) = \frac{f(b) - f(a)}{b - a} \cdot (x - a) + f(a) \quad (3)$$

This operation is performed by the broadcast module shown in Fig. 3. A counter generates $2^{N+M}$ words, where the $N$ most significant bits are used for LUT addressing and the remaining $M$ bits for interpolation.
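The segment/fraction split of the input word and the interpolation of Eq. (3) can be sketched as follows. N and M follow the "Square" row of Table I, and the [0, 40] input range matches that row; the quantization of the input is a simplifying assumption:

```python
import numpy as np

# Sketch of the uniform-segmentation PWL evaluation of Eq. (3). The input
# word carries N + M bits: the N MSBs address the LUT segment and the M
# LSBs give the interpolation fraction (values from the "Square" row of
# Table I).
N, M = 9, 2
X_MIN, X_MAX = 0.0, 40.0

# 2**N + 1 breakpoints so every segment has both endpoints stored.
XS = np.linspace(X_MIN, X_MAX, 2**N + 1)
TABLE = np.square(XS)

def lut_eval(word):
    """Evaluate the PWL approximation for an (N + M)-bit input word."""
    seg = word >> M                         # N MSBs: LUT address
    frac = (word & (2**M - 1)) / 2**M       # M LSBs: interpolation fraction
    a, b = TABLE[seg], TABLE[seg + 1]       # dual-port read of both endpoints
    return a + (b - a) * frac               # Eq. (3) on the normalized segment

# Evaluate x = 10.0: quantize the input to an (N + M)-bit word first.
word = int(round((10.0 - X_MIN) / (X_MAX - X_MIN) * 2**(N + M)))
print(lut_eval(word))  # close to 10**2 = 100
```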
The introduced dataflow is composed of several tabulated functions and interpolations in cascade. When the interpolated value from a broadcast module is captured by the corresponding pipeline stage, it becomes the interpolation address for the next tabulated function. It is desirable to find appropriate word lengths for LUT addressing, interpolation and function value quantization. These lengths should maximize the ratio between the interpolation address word length and the interpolated value word length. At the same time, the approximation errors should be kept small, since they propagate through the dataflow. In this regard, the accuracy analysis introduced in [16] for the practical implementation of piecewise linear functions is adopted.
Fig. 3: Broadcast Module

TABLE I: Piecewise Linear Function Setup

Function | N  | M | Q  | R  | S | Size [Kbits] | Range X   | Interp. Error
Square   | 9  | 2 | 14 | 17 | - | 7            | [0, 40]   | 5·10^-4
Sqrt     | 10 | 2 | 11 | 13 | 5 | 11           | [0, 3200] | 3·10^-4
Sensor   | 10 | 2 | 10 | 12 | 1 | 10           | [0, 113]  | 4·10^-4
Normal   | 9  | 1 | 10 | 11 | 3 | 5            | [0, 5]    | 5·10^-3

Table I shows the setup chosen for each piecewise linear function implementation, where N, M and Q are the numbers of bits assigned to segmentation, interpolation and function value quantization, and R and S are the output data resolution and the number of discarded input bits. The error introduced by each interpolation, calculated as the mean of the absolute relative error over one thousand samples of the evaluation interval, i.e.,

$$\mathrm{error}(x) = \mathrm{mean}\left(\left|\frac{f(x) - f_{interp}(x)}{f(x)}\right|\right) \quad (4)$$

is also included in the table.
The normal distribution implementation requires the evaluation of normal distributions with different values of variance. Any normal distribution can be obtained from the standard normal distribution. If a distribution with mean $\mu$ and variance $\sigma^2$ must be evaluated at a value $t$, the following equations allow the calculation using only the standard normal distribution function:

$$z = \frac{t - \mu}{\sigma}, \quad (5)$$

$$p_{Normal} = \frac{1}{\sigma} \cdot p_{StandardNormal}(z), \quad (6)$$

where

$$p_{StandardNormal}(z) = \frac{1}{\sqrt{2 \pi}} \cdot \exp\left(\frac{-z^2}{2}\right). \quad (7)$$

Moreover, as the function is symmetric around the mean, only half of the evaluation interval needs to be stored, further reducing the LUT size. The architecture comprises dual-port memories, so the two values needed for interpolation can be obtained simultaneously.
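The scaling of Eqs. (5)–(7) and the half-table symmetry can be sketched as below. The 512-entry table over [0, 5] mirrors the "Normal" row of Table I, but it is only an illustrative resolution, not the exact RTL word encoding:

```python
import math

# Sketch of Eqs. (5)-(7): evaluate any normal density from a LUT of the
# standard normal, using symmetry so the table covers only z >= 0.
Z_MAX, ENTRIES = 5.0, 512
STD_TABLE = [math.exp(-(i * Z_MAX / ENTRIES) ** 2 / 2) / math.sqrt(2 * math.pi)
             for i in range(ENTRIES + 1)]

def normal_pdf(t, mu, sigma):
    z = abs(t - mu) / sigma                 # Eq. (5), folded by symmetry
    if z >= Z_MAX:
        return 0.0                          # outside the tabulated range
    idx = z / Z_MAX * ENTRIES
    lo = int(idx)
    frac = idx - lo                         # linear interpolation, Eq. (3)
    p_std = STD_TABLE[lo] + (STD_TABLE[lo + 1] - STD_TABLE[lo]) * frac
    return p_std / sigma                    # Eq. (6)

# Compare with the closed form for the Sec. V values mu = -55.42, sigma = 2.5.
exact = math.exp(-((-53.0 + 55.42) / 2.5) ** 2 / 2) / (2.5 * math.sqrt(2 * math.pi))
print(abs(normal_pdf(-53.0, -55.42, 2.5) - exact))  # small interpolation error
```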
Fig. 4: PE Micro-architecture
D. PE Micro Architecture
Each PE sequentially performs the two algorithm steps that do not present data dependency: sampling and update. Processing is divided into several modules in order to implement a module-level pipeline: Sampling, Acquisition Square Value, Acquisition Sqrt Value, Acquisition Sensor Value and Acquisition Normal Value. Figure 4 shows the pipelined dataflow microarchitecture.
1) Sampling Unit: The sampling unit processes data from the current main memory location. The memory word datawidth is 48 bits, where each particle component has 12 bits. The ranges for position and velocity are [−40, 40] m and [−25, 25] m/s. This unit performs a translation in the plane by using a simplified version of the dynamic model detailed in (1). This simplification allows a reduction in the number of multiplications. For this design the dynamic model is fixed, but future designs will consider a programmable model. The translated positions and velocities are computed as follows
$$p_x(k) = p_x(k-1) + v_x(k-1) \cdot \Delta T + \tfrac{1}{2} \cdot n_x \quad (8)$$
$$p_y(k) = p_y(k-1) + v_y(k-1) \cdot \Delta T + \tfrac{1}{2} \cdot n_y \quad (9)$$
$$v_x(k) = v_x(k-1) + n_x \quad (10)$$
$$v_y(k) = v_y(k-1) + n_y \quad (11)$$
where $n_x$ and $n_y$ are drawn from a uniform distribution $U[0, W]$. Depending on the value of the $\Delta T$ parameter, the $W$ value should be adjusted in order to provide accelerations similar to those of the original model. The random noise is generated by a 16-bit linear feedback shift register (LFSR) [17] with internal XORs and a reconfigurable seed. This pseudo-random number generator is a shared resource inside a cluster; each PE takes a number at its corresponding turn. The eight most significant bits are used for the $n_x$ component and the eight least significant bits for the $n_y$ component. Each noise component is pre-multiplied by the variance value Q. Both the Q and $\Delta T$ registers are programmable and 8 bits long. The output of the sampling unit has the same datawidth as its input.
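A behavioural sketch of the sampling unit follows. The tap polynomial is one common maximal-length choice for a 16-bit LFSR, and the `DT` and `Q_SCALE` values are assumed; neither is taken from the RTL:

```python
# Behavioural sketch of the sampling unit, Eqs. (8)-(11). A shared 16-bit
# LFSR supplies nx (8 MSBs) and ny (8 LSBs).
DT = 0.1        # programmable delta-T register (assumed value)
Q_SCALE = 0.05  # pre-multiplied noise scale Q (assumed value)

class LFSR16:
    """Fibonacci LFSR, taps 16, 14, 13, 11 (x^16 + x^14 + x^13 + x^11 + 1)."""
    def __init__(self, seed=0xACE1):
        self.state = seed

    def next(self):
        s = self.state
        bit = (s ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1
        self.state = (s >> 1) | (bit << 15)
        return self.state

def sample(p, lfsr):
    """Translate one particle p = [px, vx, py, vy] per Eqs. (8)-(11)."""
    word = lfsr.next()
    nx = (word >> 8) * Q_SCALE / 255.0      # 8 MSBs -> nx in [0, Q_SCALE]
    ny = (word & 0xFF) * Q_SCALE / 255.0    # 8 LSBs -> ny in [0, Q_SCALE]
    px, vx, py, vy = p
    return [px + vx * DT + 0.5 * nx,        # Eq. (8)
            vx + nx,                        # Eq. (10)
            py + vy * DT + 0.5 * ny,        # Eq. (9)
            vy + ny]                        # Eq. (11)

lfsr = LFSR16()
p = sample([-8.0, 12.0, 10.0, -2.0], lfsr)
print(p)
```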
2) Acquisition Value Units: All acquisition units detect when their data input is equal to the current value on the interpolation-address bus. This detection is performed with a bitwise XOR operation. When a match is detected, the data present on the data bus is acquired.
The Acquisition Square Value unit performs the sum of the squared inputs. When x or y is negative, two's complement is applied, so |x| and |y| have an 11-bit word length and are compared to the interpolation-address bus. Once the squared value has been captured for both components, the sum is performed with a 17-bit output data width. The broadcast module for the Sqrt function provides a 12-bit interpolation-address bus; therefore the 5 least significant bits of $x^2 + y^2$ are discarded when the Acquisition Sqrt Value unit compares its data input with the value present on the interpolation-address bus. The same occurs for the Acquisition Sensor Value block, with the least significant bit discarded from its input word.
The Acquisition Normal Value unit generates a word using (5) with $\mu$ equal to $Pot_{measurement}$. Once a match is detected, the data present on the bus is acquired; it is then multiplied by the reciprocal of the standard deviation, as stated in (6). The reciprocal of the variance has an 8-bit width, as does the power measurement. In order to perform the subtraction in (5), the 5 least significant bits of the input word are discarded. The word length after this equation is 16 bits. According to Table I, the tabulated normal function requires 1 interpolation bit, therefore the 6 least significant bits are not taken into account, resulting in a 10-bit word length. Once the data value is captured by the PE, it is multiplied by the reciprocal of the variance, resulting in a 19-bit word length.
E. Resampling unit
The resampling algorithm selected for implementation is the modified Independent Metropolis-Hastings (IMH) [12], which replaces the division operation with a comparison; particles and their weights are processed sequentially. The algorithm is summarized in the following pseudocode:

w_prev = w_1^k;
for i ← 2 to NUM_PARTICLES do
  u ∼ U(0, 1);
  if (u · w_prev > w_i^k) then
    w_prev = w_prev; resample = 1;
  else
    w_prev = w_i^k; resample = 0;
  end
end

Algorithm 1: Implemented resampling algorithm
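Algorithm 1 can be sketched in software as follows; the particle/weight arrays are invented for the example, and the held-particle register is modelled by re-emitting the last written particle:

```python
import random

def imh_resample(particles, weights, rng):
    """Modified IMH resampling per Algorithm 1: the acceptance test
    u < w_i / w_prev is rewritten as the comparison u * w_prev < w_i,
    so no divider is needed, and particles are processed sequentially."""
    out = [particles[0]]                # the first particle is always resampled
    w_prev = weights[0]
    for p, w in zip(particles[1:], weights[1:]):
        u = rng.random()
        if u * w_prev > w:              # reject: rewrite the held particle
            out.append(out[-1])
        else:                           # accept: the new particle is held
            w_prev = w
            out.append(p)
    return out

pts = [(float(i), 0.0) for i in range(8)]
wts = [0.05, 0.9, 0.01, 0.01, 0.01, 0.01, 0.005, 0.005]
res = imh_resample(pts, wts, random.Random(0))
print(res)  # high-weight particles tend to be repeated
```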
Figure 5 shows the architecture of the resampling and word-to-memory modules. The particle array is filled with output particles from the sampling unit. Both the particle array and the weight array must be fully updated before the resampling operation is initiated. The first particle of the whole set is always resampled. Subsequent particles are stored in memory depending on the comparison between their weight and w_prev. The random number generator is implemented with a 16-bit LFSR.
The resample signal controls the data stored in memory. If the value of resample is 1, the data present in the particle register is written to memory; otherwise, the currently processed particle is selected and w_prev is updated.
In order to synchronize the translated particle with the pipeline time schedule, it must be delayed as many times as the number of pipeline stages between the sampling unit and the Acquisition Normal Value unit. Each PE reads a particle from a memory location and, once the particle is resampled, the word-to-memory unit stores it at the same location. Since a dual-port memory is used and the architecture is pipelined, memory reading and writing are done simultaneously. Control is achieved with read-address and write-address counters; the former is driven by control signals from the sampling unit and the latter by control signals from the word-to-memory unit.
Fig. 5: Word-To-Memory Architecture
IV. EXECUTION TIME
Since the execution time of each module is variable, each PE completes its processing at a different time. The resampling module begins operation as soon as the first PE has finished processing its particle. Figure 6 shows the execution time of the dataflow for a cluster made up of two processing elements.
The pipeline delay between output data values is given by the slowest stage. In the presented design this corresponds to the stage with the widest interpolation-address bus, since it takes $2^{N+M}$ cycles to acquire the last interpolated value. This is the case of the Sqrt function: in the worst case, a new particle is processed every 4096 cycles. As resampling takes one cycle to process each particle, the number of cycles needed to finish the resampling operation depends on the number of PEs in a cluster. Therefore, the last element of the resampling array is updated every $2^{N+M} + P$ cycles, where P is the number of PEs in the cluster.
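The cycle counts above reduce to simple arithmetic; the 100 MHz clock below is an illustrative assumption, not a figure from the paper:

```python
# Back-of-the-envelope check of the Section IV timing. The slowest stage is
# the one with the widest interpolation-address bus: the Sqrt LUT, with
# N = 10 and M = 2 from Table I.
N, M = 10, 2
P = 4                                   # PEs per cluster, as in Fig. 2-b

cycles_per_particle = 2 ** (N + M)      # worst-case pipeline delay
resampling_last_update = cycles_per_particle + P
print(cycles_per_particle, resampling_last_update)  # 4096 4100

CLK_HZ = 100e6                          # assumed clock frequency
print(CLK_HZ / cycles_per_particle)     # particles per second per PE
```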
V. RESULTS
A. Simulation Results
A VHDL RTL model of the processing element was developed. The implementation flow was as follows: first, a fixed-point Matlab implementation of the processing element described above was generated and compared to its floating-point counterpart to verify its proper operation. Second, an RTL model that matches the fixed-point Matlab implementation was developed. At this stage of the implementation,
Fig. 6: Filter execution time.
Fig. 7: Weights vs. distance (Matlab model vs. RTL model).
the RSSI measurement, the reciprocal of the noise standard
deviation and the position estimation are generated off-line.
Figure 7 shows the distribution of weights vs. distance for the floating-point Matlab implementation and the RTL model, for the case where the measured power is −55.42 dBm and σ = 2.5 dBm. Since the RSSI measurements are 8-bit quantized, the normal distribution is also quantized. It can be noticed that the RTL model provides results similar to the floating-point Matlab implementation.
A 2-D tracking scenario was simulated to show the dynamic performance. In this case, the fixed-point Matlab implementation is used instead of the RTL model in order to reduce simulation time. The scenario is composed of a unit moving at nearly constant velocity and three fixed antennas s1, s2 and s3, placed at positions [0, 0], [−20, 0] and [0, 20], respectively. The position of the target unit evolves with time according to (1).
The mobile initial state is x0 = [−8 m, 12 m/s, 10 m, −2 m/s] and ΔT = 0.1 s. The total number of particles used is 4096; at the beginning of the simulation they are uniformly distributed over the region delimited by the intervals [−20 m, 20 m] and [0, π] radians. Particle velocities have been randomly initialized with a uniform distribution in the interval [17, 7] for ẋ and [7, −3] for ẏ.
Figure 8 shows the trajectory of the target unit (green
line) and simulation results for the Matlab model and the
RTL model, in red and black lines, respectively. Both models
provide very close results.
Fig. 8: Tracking of a moving target with three antennas.
TABLE II: Synthesis Results

Module            | Area [µm²]
Sampling          | 37453
Acq. Square Value | 6144
Acq. Sqrt Value   | 2268
Acq. Sensor Value | 1932
Acq. Normal Value | 13293
Total PE Area     | 87086
B. Synthesis Results
The RTL model of the processing element described in Section III was synthesized using Synopsys DC Compiler and a 0.13 µm CMOS technology. Since the array is composed of several processing elements, it is desirable to know the area required for this basic unit. Table II shows the area of the processing element and its modules.
VI. CONCLUSIONS
A VLSI architecture for particle filtering in real time was presented. This architecture exploits the data-level parallelism in the algorithm and also takes into account the performance degradation due to resampling parallelization. Introducing global resources allows an increase in concurrent hardware. The processing dataflow was described, along with a piecewise linear function implementation. An RTL model of the proposed design was generated. Simulation shows that the architecture correctly implements the PF adapted to the specific application. Further work is needed to choose an optimal number of PEs per cluster.
VII. ACKNOWLEDGMENTS
The results of this paper were partially supported by PICT 2010-2657, "3D Gigascale Integrated Circuits for Nonlinear Computation, Filter and Fusion with Applications in Industrial Field Robotics", of the Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT) of the Argentine Ministry of Science and Technology (MINCYT).
REFERENCES
[1] N. Gordon, D. Salmond, and A. F. M. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proceedings F (Radar and Signal Processing), vol. 140, no. 2, pp. 107–113, 1993.
[2] M. Isard and A. Blake, "Condensation – conditional density propagation for visual tracking," International Journal of Computer Vision, vol. 29, no. 1, pp. 5–28, 1998.
[3] D. Fox, "KLD-sampling: Adaptive particle filters and mobile robot localization," in Advances in Neural Information Processing Systems 14, vol. 2, 2001, pp. 713–720.
[4] C. Kwok, D. Fox, and M. Meila, "Real-time particle filters," Proceedings of the IEEE, vol. 92, no. 3, pp. 469–484, Mar. 2004.
[5] M. Bolic, P. M. Djuric, and S. Hong, "Resampling algorithms and architectures for distributed particle filters," IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2442–2450, Jul. 2005.
[6] A. C. Sankaranarayanan, A. Srivastava, and R. Chellappa, "Algorithmic and architectural optimizations for computationally efficient particle filtering," IEEE Transactions on Image Processing, vol. 17, no. 5, pp. 737–748, May 2008.
[7] S.-S. Chin and S. Hong, "VLSI design of high-throughput processing element for real-time particle filtering," in Signals, Circuits and Systems, vol. 2, 2003, pp. 617–620.
[8] S. Hong, S.-S. Chin, M. Bolic, and P. M. Djuric, "Design and implementation of flexible resampling mechanism for high-speed parallel particle filters," Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, vol. 44, pp. 47–62, 2006.
[9] G. Kloos, J. E. Guivant, E. M. Nebot, and F. Masson, "Range based localisation using RF and the application to mining safety," in Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2006, pp. 1304–1311.
[10] S. Sanudo and F. R. Masson, "Desempeño del filtro de partículas acotado en una aplicación de localización y seguimiento de camiones en una explotación minera" [Performance of the bounded particle filter in a truck localization and tracking application at a mine site], in XIV Reunión de Trabajo en Procesamiento de la Información y Control, vol. 1, 2011, pp. 712–717.
[11] H. Xia, H. L. Bertoni, L. Maciel, A. Lindsay-Stewart, and R. Rowe, "Radio propagation characteristics for line-of-sight microcellular and personal communications," IEEE Transactions on Antennas and Propagation, vol. 41, no. 10, pp. 1439–1447, Oct. 1993.
[12] L. Miao, J. J. Zhang, C. Chakrabarti, and A. Papandreou-Suppappola, "Algorithm and parallel implementation of particle filtering and its use in waveform-agile sensing," Journal of Signal Processing Systems, vol. 65, no. 2, pp. 211–227.
[13] M. Bolic, P. M. Djuric, and S. Hong, "Resampling algorithms for particle filters: A computational complexity perspective," EURASIP Journal on Applied Signal Processing, vol. 15, pp. 2267–2277, 2004.
[14] A. Pasciaroni, S. Sanudo, J. Rodriguez, F. Masson, and P. Julian, "Modelling and analysis of parallel particle filters," in XV Reunión de Trabajo en Procesamiento de la Información y Control, vol. 1, no. 1, 2013, pp. 1–6.
[15] B. Balasingam, M. Bolic, P. Djuric, and J. Miguez, "Efficient distributed resampling for particle filters," in IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp. 3772–3775.
[16] O. Lischitz, P. Julian, J. Rodriguez, and O. Agamennoni, "Accuracy analysis for an on-chip digital PWL realization," in XIV Reunión de Trabajo en Procesamiento de la Información y Control, 2011, pp. 429–434.
[17] Z. Barzilai, D. Coppersmith, and A. L. Rosenberg, "Exhaustive generation of bit patterns with applications to VLSI self-testing," IEEE Transactions on Computers, vol. C-32, no. 2, pp. 190–194, Feb. 1983.