The basic inverse problem in spectral graph theory consists of determining a graph from its eigenvalue spectrum. In this paper, we are interested in a network of technological agents whose graph is unknown, communicating by means of a consensus protocol. Recently, the use of artificial noise added to consensus signals has been proposed to reconstruct the unknown graph, although errors are possible. On the other hand, some methodologies have been devised to estimate the eigenvalue spectrum, but noise could interfere with the computations. We combine these two techniques in order to simplify calculations and avoid topological reconstruction errors, using only one eigenvalue. Moreover, we use a high-frequency noise to reconstruct the network, so the control signals are easy to filter after graph identification. Numerical simulations on several topologies show exact and robust reconstruction of the graphs.
PERFORMANCE AND COMPLEXITY ANALYSIS OF A REDUCED ITERATIONS LLL ALGORITHM (IJCNC Journal)
Multiple-input multiple-output (MIMO) systems play an increasingly important role in recent wireless communication. The complexity and performance of these systems drive much of the current research, and lattice reduction techniques provide additional tools for investigating both.
In this paper, we modify a fixed-complexity variant of the LLL algorithm to reduce the computation by cutting the number of iterations without significant performance degradation. Our proposal shows that good performance can be achieved while avoiding extra iterations that contribute little.
Tensor Spectral Clustering is an algorithm that generalizes graph partitioning and spectral clustering methods to account for higher-order network structures. It defines a new objective function called motif conductance that measures how partitions cut motifs like triangles in addition to edges. The algorithm represents a tensor of higher-order random walk transitions as a matrix and computes eigenvectors to find a partition that minimizes the number of motifs cut, allowing networks to be clustered based on higher-order connectivity patterns. Experiments on synthetic and real networks show it can discover meaningful partitions by accounting for motifs that capture important structural relationships.
Notes taken while reading the paper "A Tutorial on Spectral Clustering" by Ulrike von Luxburg. Find original paper at http://www.informatik.uni-hamburg.de/ML/contents/people/luxburg/publications/Luxburg07_tutorial.pdf
Single Channel Speech De-noising Using Kernel Independent Component Analysis... (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
This document proposes using spectral clustering based on the normalized graph Laplacian spectrum to solve problems in community detection and handwritten digit recognition. It summarizes the key concepts in graph signal processing and introduces spectral clustering. The paper provides a mathematical proof that the signs of the second eigenvector components of the normalized graph Laplacian can accurately partition a graph into two communities. It then applies this spectral clustering method to community detection and digit recognition, comparing results to other popular algorithms to demonstrate the advantages of the spectral clustering approach.
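The two-community partition described above can be sketched generically: build the normalized Laplacian, take the eigenvector for the second-smallest eigenvalue, and split nodes by the signs of its components. This is a minimal illustration of the idea, not the paper's implementation; the example graph (two triangles joined by one bridge edge) is invented for the demo.

```python
import numpy as np

def spectral_bisection(A):
    """Split a graph in two using the sign pattern of the second
    eigenvector of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order; column 1 is the
    # eigenvector for the second-smallest eigenvalue
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1] >= 0  # boolean community labels

# Two triangles (nodes 0-2 and 3-5) joined by the single edge (2, 3)
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = spectral_bisection(A)
```

On this barbell-shaped graph the sign split recovers the two triangles exactly, since the bridge is the only edge cut.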
The Gaussian Process Latent Variable Model (GPLVM) (James McMurray)
This document provides an outline for a talk on Gaussian Process Latent Variable Models (GPLVM). It begins with an introduction to why latent variable models are useful for dimensionality reduction. It then defines latent variable models and shows their graphical model representation. The document reviews PCA and introduces probabilistic versions like Probabilistic PCA (PPCA) and Dual PPCA. It describes how GPLVM generalizes these approaches using Gaussian processes. Examples applying GPLVM to face and motion data are provided, along with practical tips and an overview of GPLVM variants.
GRAPH MATCHING ALGORITHM FOR TASK ASSIGNMENT PROBLEM (IJCSEA Journal)
Task assignment is one of the most challenging problems in a distributed computing environment. An optimal task assignment guarantees minimum turnaround time for a given architecture. Several approaches to optimal task assignment have been proposed by various researchers, ranging from graph partitioning based tools to heuristic graph matching. Using heuristic graph matching, it is often impossible to get an optimal task assignment for practical test cases within an acceptable time limit. In this paper, we have parallelized the basic heuristic graph-matching algorithm of task assignment, which is suitable only for cases where processors and inter-processor links are homogeneous. This proposal is a derivative of the basic task assignment methodology using heuristic graph matching. The results show that near-optimal assignments are obtained much faster than with the sequential program in all cases, with reasonable speed-up.
Kernel based speaker specific feature extraction and its applications in iTau... (TELKOMNIKA JOURNAL)
This document summarizes kernel-based speaker recognition techniques for an automatic speaker recognition system (ASR) in iTaukei cross-language speech recognition. It discusses kernel principal component analysis (KPCA), kernel independent component analysis (KICA), and kernel linear discriminant analysis (KLDA) for nonlinear speaker-specific feature extraction to improve ASR classification rates. Evaluation of the ASR system using these techniques on a Japanese language corpus and self-recorded iTaukei corpus showed that KLDA achieved the best performance, with an equal error rate improvement of up to 8.51% compared to KPCA and KICA.
Adaptive Channel Equalization for Nonlinear Channels using Signed Regressor F... (IDES Editor)
Wireless communication systems are affected by inter-symbol interference (ISI) and co-channel interference in the presence of additive white Gaussian noise. ISI arises primarily from the distortion caused by the frequency and time selectivity of the fading channel, and it degrades performance. Equalization techniques are used to mitigate the effects of ISI and noise for better demodulation. This paper presents a novel technique for channel equalization: a Signed Regressor adaptive algorithm based on FLANN (Functional Link Artificial Neural Network) is developed for nonlinear channel equalization, along with an analysis of MSE and BER. The results are compared with a conventional LMS-based FLANN model; the Signed Regressor FLANN shows better performance than the LMS-based FLANN. The presented equalizer shows considerable performance compared to other adaptive structures for both linear and nonlinear channel models in terms of convergence rate, MSE and BER over a wide range.
Implementation and Impact of LNS MAC Units in Digital Filter Application (IJTET Journal)
The logarithmic number system (LNS) is an efficient way to represent data in VLSI processors because its round-off error behavior resembles that of floating-point arithmetic. The LNS reduces power dissipation in signal-processing applications such as hearing-aid devices, video processing and error control. This paper presents techniques for low-power addition/subtraction in the LNS and quantifies their impact on digital filter VLSI implementation. Addition and subtraction are difficult to perform in the LNS because complex look-up tables (LUTs) are needed. The impact of partitioning the look-up tables required for LNS addition/subtraction on complexity, performance and power dissipation is quantified. The LNS base and the LNS word length are the two design parameters exploited to minimize complexity. A round-off noise model is used to demonstrate the impact of base and word length on the output SNR of FIR filters. In addition, techniques for low-power implementation of LNS multiply-accumulate (MAC) units are investigated. The proposed techniques can be extended to co-transformation-based circuits that employ interpolators. The results are demonstrated by evaluating the power dissipation, complexity and performance of several FIR filter configurations comprising one, two or four MAC units. Simulation of placed and routed VLSI LNS-based digital filters using Xilinx ISE reveals that significant power dissipation savings are possible by using optimized LNS circuits at no performance penalty, when compared to linear fixed-point two's-complement equivalents.
This paper compares two logarithmic coding techniques for adaptive beamforming in wireless communications. One technique uses a direct lookup table conversion, while the other uses linear interpolation with a smaller lookup table and multiplier. Matlab simulations show that both logarithmic techniques cause small errors for address precisions above 9 bits for direct conversion and 5 bits for interpolation conversion. The results indicate the logarithmic methods provide better error performance than a fixed-point implementation, while requiring less hardware cost.
Stochastic Computing Correlation Utilization in Convolutional Neural Network ... (TELKOMNIKA JOURNAL)
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources, smaller power budgets, and exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Networks (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNN with a low hardware footprint and power consumption. To enable building more efficient SC CNN, this work incorporates CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and are more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic function circuits compared to previous work.
This document summarizes a research paper that proposes a novel architecture for implementing a 1D lifting integer wavelet transform (IWT) using residue number system (RNS). The key aspects covered are:
1) RNS offers advantages over binary representations for digital signal processing by avoiding carry propagation. A ROM-based approach is proposed for RNS division.
2) The lifting scheme for discrete wavelet transforms is summarized, including split, predict, and update stages.
3) A novel RNS-based architecture is proposed using three main blocks - split, predict, and update - that repeat at each decomposition level. Pipelined implementations of the predict and update blocks are detailed.
The document proposes a novel Spatial Fuzzy C-Means (PET-SFCM) clustering algorithm to segment PET scan images of patients with neurodegenerative disorders like Alzheimer's disease. The algorithm incorporates spatial neighborhood information into the traditional Fuzzy C-Means algorithm. It was tested on real patient data sets and showed satisfactory results compared to conventional FCM and K-Means clustering algorithms. The PET-SFCM algorithm provides an effective way to segment PET images and analyze brain changes related to neurological conditions.
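The PET-SFCM variant above adds a spatial neighborhood term that is not reproduced here; a minimal sketch of the standard Fuzzy C-Means baseline it extends is shown below (data, initialization and function names are invented for the demo). FCM alternates two updates: memberships proportional to d^(-2/(m-1)) for each point-center distance d, then centers as membership-weighted means.

```python
import numpy as np

def fuzzy_c_means(X, init_centers, m=2.0, n_iter=50):
    """Plain Fuzzy C-Means: alternate membership and centroid updates.
    Membership u_ik is proportional to d_ik^(-2/(m-1)), normalized per point."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(n_iter):
        # distances from every point to every center (small eps avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # centers are the u^m-weighted means of the data
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# Two well-separated blobs; seed the centers with one point from each blob
X = np.array([[0.0, 0.1], [0.1, 0.0], [-0.1, 0.05],
              [5.0, 5.1], [5.1, 4.9], [4.9, 5.0]])
U, centers = fuzzy_c_means(X, X[[0, 3]])
labels = U.argmax(axis=1)
```

Hardening each point's membership row to its largest entry yields the crisp clustering; the spatial variant in the paper would additionally smooth U over each pixel's neighborhood.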
Discrete-wavelet-transform recursive inverse algorithm using second-order est... (TELKOMNIKA JOURNAL)
The recursive-least-squares (RLS) algorithm was introduced as an alternative to the LMS algorithm with enhanced performance. Computational complexity and instability in updating the autocorrelation matrix are some of the drawbacks of the RLS algorithm that motivated the introduction of the second-order recursive inverse (RI) adaptive algorithm. The second-order RI algorithm suffered from a low convergence rate in certain scenarios that required a relatively small initial step-size. In this paper, we propose a new second-order RI algorithm that projects the input signal into a new domain, namely the discrete wavelet transform (DWT), as a preprocessing step before running the algorithm. This transformation overcomes the low convergence rate of the second-order RI algorithm by reducing the self-correlation of the input signal in the mentioned scenarios. Experiments are conducted in a noise cancellation setting. The performance of the proposed algorithm is compared to those of the RI, original second-order RI, and RLS algorithms in different Gaussian and impulsive noise environments. Simulations demonstrate the superiority of the proposed algorithm in terms of convergence rate compared to those algorithms.
A New Approach to Linear Estimation Problem in Multiuser Massive MIMO Systems (Radita Apriana)
A novel approach for solving the linear estimation problem in multi-user massive MIMO systems is proposed. In this approach, the difficulty of matrix inversion is attributed to the incomplete definition of the dot product: the general definition implies that the columns of the channel matrix are always orthogonal, whereas in practice they may not be. If the latter information is incorporated into the dot product, the unknowns can be computed directly from projections without inverting the channel matrix. By doing so, the proposed method achieves an exact solution with a 25% reduction in computational complexity compared to the QR method. The proposed method is stable, offers the extra flexibility of computing any single unknown, and can be implemented in just twelve lines of code.
COMPARISON OF VOLUME AND DISTANCE CONSTRAINT ON HYPERSPECTRAL UNMIXING (csandit)
The document compares two algorithms for hyperspectral image unmixing - one based on minimum volume constraint and one based on sum of squared distances constraint. It analyzes the performance of the two algorithms under different conditions like flatness of the endmember simplex, effects of initialization, and robustness to noise. The analysis shows that the sum of squared distances constraint performs better than the volume constraint for non-regular simplex shapes and is more robust to random initialization and noise. The comparison provides guidance on which constraint is more suitable for specific hyperspectral unmixing tasks.
A Unifying Probabilistic Perspective for Spectral Dimensionality Reduction (Sean Golliher)
This document presents a unifying probabilistic perspective for spectral dimensionality reduction methods. It introduces the Maximum Entropy Unfolding (MEU) algorithm as a unified approach that other methods like Local Linear Embedding (LLE) are special cases of. MEU models dimensionality reduction as a density estimation problem with constraints, using a Gaussian random field to represent the density. It also introduces the Acyclic Locally Linear Embedding (ALLE) and Dimensionality Reduction through Regularization of the Inverse covariance in the Log Likelihood (DRILL) algorithms. Experimental results on motion capture and robot navigation data are presented to compare the performance of these methods.
The document describes a seminar report on using a divide and conquer algorithm to find the closest pair of points from a set of points in two dimensions. It discusses implementing both a brute force algorithm that compares all pairs, taking O(n^2) time, and a divide and conquer algorithm that recursively divides the point set into halves and finds the closest pairs in each subset and near the dividing line, taking O(nlogn) time. It provides pseudocode for both algorithms and discusses the history and improvements made to the closest pair problem over time, reducing the number of distance computations needed.
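The divide-and-conquer recursion summarized above can be sketched compactly: sort by x, recurse on the two halves, then scan a strip around the dividing line where, in y-sorted order, each point needs comparing with at most 7 successors. This is a generic textbook sketch, not the seminar's own code; the helper names and example points are invented.

```python
import math

def brute_force(pts):
    """O(n^2) baseline: compare every pair (used for small subproblems)."""
    return min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])

def closest_pair(pts):
    """O(n log n) divide-and-conquer closest-pair distance in the plane."""
    pts = sorted(pts)  # sort once by x-coordinate

    def solve(p):
        if len(p) <= 3:
            return brute_force(p)
        mid = len(p) // 2
        mid_x = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # strip: points within d of the dividing line, sorted by y
        strip = sorted((q for q in p if abs(q[0] - mid_x) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:  # at most 7 neighbors need checking
                d = min(d, math.dist(a, b))
        return d

    return solve(pts)

d = closest_pair([(0, 0), (3, 4), (1, 1), (5, 5), (2, 3)])  # sqrt(2)
```

The strip bound is what keeps the merge step linear: within a d-by-2d rectangle only a constant number of points can be pairwise at least d apart.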
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION (cscpconf)
This document compares different techniques for texture classification, including wavelet transforms and co-occurrence matrices. It finds that the Haar wavelet technique is the most efficient in terms of time complexity and classification accuracy, except when images are rotated. The co-occurrence matrix method has higher time requirements but excellent classification results, except for rotated images where accuracy is greatly reduced due to its dependence on pixel values. Overall, the Haar wavelet proves to be the best method for texture classification based on the performance assessment parameters of time complexity and classification accuracy.
This document summarizes a method for calculating the sensitivity matrix that defines the linear relationship between circuit parameters and poles/response of an RLC network. The sensitivity matrix enables efficient statistical analysis and yield predictions. It is obtained by taking derivatives of the poles and transfer function, which are calculated from the eigenvalues and eigenvectors of the network's state equation. An example RLC circuit demonstrates calculating the sensitivity matrix and using it to predict yield based on Monte Carlo simulations.
This document summarizes a survey on graph partitioning algorithms. It begins by defining the graph partitioning problem and describing its applications in areas like VLSI design and parallel finite element methods. It then provides an overview of several categories of sequential graph partitioning algorithms, including local improvement methods like Kernighan-Lin and Fiduccia-Mattheyses, as well as discussing parallel partitioning algorithms and conclusions from experimental comparisons of different approaches.
Further results on the joint time delay and frequency estimation without eige... (IJCNC Journal)
The Joint Time Delay and Frequency Estimation (JTDFE) problem for complex sinusoidal signals received at two separated sensors is an attractive problem that has been considered for several engineering applications. In this paper, a high-resolution null (noise) subspace method without eigenvalue decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from Rank-Revealing LU (RRLU) factorization. RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed method decreases the computational complexity of JTDFE to approximately half that of RRQR-based methods. It generates estimates of the unknown parameters based on the observation and/or covariance matrices, leading to a significant improvement in computational load. Computer simulations are included to demonstrate the proposed method.
3 - A critical review on the usual DCT Implementations (presented in a Malays...) (Youness Lahdili)
1) The document reviews and compares various implementations of the Discrete Cosine Transform (DCT), which is widely used in video and image compression standards.
2) It finds that the Loeffler algorithm achieves the theoretical minimum of 11 multiplications for an 8-point DCT, while the Arai algorithm requires only 5 multiplications and 29 additions.
3) The document concludes that while the DCT has been very successful, other transforms like the Discrete Wavelet Transform used in the Daala standard may provide alternatives worth further research to reduce blocking artifacts at high compression.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
Massive parallelism with GPUs for centrality ranking in complex networks (ijcsit)
Many problems in Computer Science can be modelled using graphs. Evaluating node centrality in complex networks, which can be considered equivalent to undirected graphs, provides a useful metric of the relative importance of each node within the evaluated network. Knowing which nodes are most central has various applications, such as improving information spreading in diffusion networks; in this case, the most central nodes can be considered to have higher influence over the other nodes in the network. The main purpose of this work is to develop a GPU-based, massively parallel application to evaluate node centrality in complex networks using the Nvidia CUDA programming model. The main contribution of this work is the set of strategies for developing such an algorithm. We show that these strategies improve the algorithm's speed-up by two orders of magnitude on one NVIDIA Tesla K20 GPU cluster node, compared to the hybrid OpenMP/MPI version running in the same cluster with 4 nodes of 2 Intel(R) Xeon(R) CPU E5-2660 each, for radius zero.
JAVA BASED VISUALIZATION AND ANIMATION FOR TEACHING THE DIJKSTRA SHORTEST PAT... (ijseajournal)
This document describes a Java software tool developed to help transportation engineering students understand the Dijkstra shortest path algorithm. The software provides an intuitive interface for generating transportation networks and animating how the shortest path is updated at each iteration of the Dijkstra algorithm. It offers multiple visual representations like color mapping and tables. The software can step through each iteration or run continuously, and includes voice narratives in different languages to further aid comprehension. A demo video of the animation and results is available online.
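The per-iteration updates that the tool animates follow the standard Dijkstra pattern: repeatedly settle the closest unsettled node and relax its outgoing edges. A minimal sketch with a binary heap is shown below; the adjacency-list format and the example network are invented for the demo and are not taken from the described software.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph is an adjacency-list
    dict of the form {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry, node already settled closer
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd  # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist

net = {'A': [('B', 4), ('C', 2)],
       'B': [('D', 5)],
       'C': [('B', 1), ('D', 8)],
       'D': []}
dist = dijkstra(net, 'A')
```

Each heappop corresponds to one animation step in such a tool: the popped node's tentative distance becomes final, and its neighbors' labels are updated.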
This document summarizes a research paper that designed and implemented sphere decoding (SD) for multiple-input multiple-output (MIMO) systems using an FPGA. It used Newton's iterative method to calculate the matrix inverse as part of the SD algorithm, which reduces complexity compared to direct matrix inversion. The authors implemented SD for a 2x2 MIMO system with 4-QAM modulation. Simulation results showed that Newton's method converged after 7 iterations, and SD successfully calculated the minimum Euclidean distance vector.
A preliminary evaluation of bandwidth allocation model dynamic switching (IJCNC Journal)
Bandwidth Allocation Models (BAMs) are used in order to define Bandwidth Constraints (BCs) on a per-class basis for MPLS/DS-TE networks, and effectively define how network resources like bandwidth are obtained and shared by applications. The proposed BAMs (MAM – Maximum Allocation Model, RDM – Russian Dolls Model, G-RDM – Generic RDM and AllocTC-Sharing) attempt to optimize the use of bandwidth resources on a per-link basis with different allocation and resource sharing characteristics. As such, the adoption of distinct BAMs and/or changes in network resource demands (network traffic profile) may result in different network traffic allocation and operational behavior for distinct BAMs. This paper evaluates the resulting network characteristics (link utilization, preemption and flow blocking) of using BAMs dynamically with different traffic scenarios. In brief, the dynamics of BAM switching under distinct traffic scenarios is investigated. The paper initially presents the investigated BAMs in relation to their behavior and resource allocation characteristics. Then, distinct BAMs are compared using different traffic scenarios in order to investigate the impact of a dynamic change of the BAM configured in the network. Finally, the paper shows that the adoption of a dynamic BAM allocation strategy may result in benefits for network operation in terms of link utilization, preemption and flow blocking.
HONEYPOTLABSAC: A VIRTUAL HONEYPOT FRAMEWORK FOR ANDROID (IJCNC Journal)
1. The document presents HoneypotLabsac, a virtual honeypot framework for Android that aims to learn more about targeted mobile device attacks.
2. The framework allows emulating services like telnet, HTTP, and SMS to collect log data from interactions without compromising the actual device.
3. HoneypotLabsac generates log files of all emulated service connections and SMS messages, stores them on the device, and sends them periodically to a log server for analysis.
Adaptive Channel Equalization for Nonlinear Channels using Signed Regressor F...IDES Editor
Wireless communication systems are affected by
inter-symbol interference (ISI), co-channel interference in
the presence of additive white Gaussian noise. ISI is primarily
due to the distortion caused by frequency and time selectivity
of the fading channel and it causes performance degradation.
Equalization techniques are used to mitigate the effect of ISI
and noise for better demodulation. This paper presents a novel
technique for channel equalization. Here a Signed Regressor
adaptive algorithm based on FLANN (Functional Link Artificial
Neural Network) has been developed for nonlinear channel
equalization along with the analysis of MSE and BER. The
results are compared with the conventional adaptive LMS
algorithm based FLANN model. The Signed Regressor FLANN
shows better performance as compared to LMS based FLANN.
The equalizer presented shows considerable performance
compared to the other adaptive structure for both the linear
and non-linear models in terms of convergence rate, MSE
and BER over a wide range.
Implementation and Impact of LNS MAC Units in Digital Filter ApplicationIJTET Journal
The logarithmic number system (LNS) is an efficient way to represent data in VLSI processors because its round-off error behavior resembles that of floating point arithmetic. LNS reduce the power dissipation in signal-processing-related application such as hearing-aid devices, video processing and error control. This paper presents techniques for low-power addition/subtraction in the LNS and quantifies their impact on digital filter VLSI implementation. The operation of addition and subtraction are difficult to perform in LNS as complex look up tables (LUTs) are needed. The impact of partitioning the look-up-tables required for LNS addition/subtraction on complexity performance and power dissipation is quantified. LNS base and LNS word are the two design parameters exploited to minimize complexity. A round-off noise model is used to demonstrate the impact of base and word-length on SNR of the output of FIR filters. In addition, techniques for low-power implementation of an LNS multiply accumulate (MAC) units are investigated. The proposed techniques can be extended to co-transformation-based circuits that employ interpolators. The results are demonstrated by evaluating the power dissipation, complexity and performance of several FIR filter configurations comprising one, two or four MAC units. Simulation of placed and routed VLSI LNS-based digital filters using Xilinx ISE reveal that significant power dissipation savings are possible by using optimized LNS circuits at no performance penalty, when compared to linear fixed-point two’s-complement equivalents.
This paper compares two logarithmic coding techniques for adaptive beamforming in wireless communications. One technique uses a direct lookup table conversion, while the other uses linear interpolation with a smaller lookup table and multiplier. Matlab simulations show that both logarithmic techniques cause small errors for address precisions above 9 bits for direct conversion and 5 bits for interpolation conversion. The results indicate the logarithmic methods provide better error performance than a fixed-point implementation, while requiring less hardware cost.
Stochastic Computing Correlation Utilization in Convolutional Neural Network ...TELKOMNIKA JOURNAL
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources, smaller power budget, and exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Network (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNN with low hardware footprint and power consumption. To enable building more efficient SC CNN, this work incorporates the CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and is more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic functions circuits compared to previous work.
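The correlation property that the SC CNN work exploits can be shown with a small sketch (our own illustration, not the paper's circuits): with independent unipolar bitstreams an AND gate multiplies the encoded values, while with maximally correlated streams (shared random number draws) the same gate computes their minimum.

```python
import random

def sc_stream(p, n, rng):
    # unipolar stochastic bitstream: P(bit = 1) = p
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_value(bits):
    # decoded value = fraction of 1s in the stream
    return sum(bits) / len(bits)

n = 100_000
a = sc_stream(0.5, n, random.Random(1))
b = sc_stream(0.4, n, random.Random(2))
# independent streams: AND gate approximates multiplication (0.5 * 0.4 = 0.2)
prod = sc_value([x & y for x, y in zip(a, b)])

# correlated streams (same RNG draws): AND gate approximates min(0.5, 0.4) = 0.4
draws = [random.Random(3).random() for _ in range(1)]  # placeholder, replaced below
rng = random.Random(3)
draws = [rng.random() for _ in range(n)]
a_c = [1 if d < 0.5 else 0 for d in draws]
b_c = [1 if d < 0.4 else 0 for d in draws]
mn = sc_value([x & y for x, y in zip(a_c, b_c)])
```

Sharing the generator is also what lets SC designs share RNGs across units, as the abstract mentions.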
This document summarizes a research paper that proposes a novel architecture for implementing a 1D lifting integer wavelet transform (IWT) using residue number system (RNS). The key aspects covered are:
1) RNS offers advantages over binary representations for digital signal processing by avoiding carry propagation. A ROM-based approach is proposed for RNS division.
2) The lifting scheme for discrete wavelet transforms is summarized, including split, predict, and update stages.
3) A novel RNS-based architecture is proposed using three main blocks - split, predict, and update - that repeat at each decomposition level. Pipelined implementations of the predict and update blocks are detailed.
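As a concrete illustration of the split/predict/update structure listed above, here is the one-dimensional lifting integer Haar (S) transform; this is our own minimal example of a lifting IWT, not the paper's RNS-based architecture, whose predict and update filters may differ.

```python
def iwt_haar_1d(x):
    # one level of the lifting integer Haar (S) transform:
    # split -> predict -> update, entirely in integer arithmetic
    even, odd = x[0::2], x[1::2]                     # split
    d = [o - e for o, e in zip(odd, even)]           # predict: detail signal
    s = [e + (di >> 1) for e, di in zip(even, d)]    # update: approximation
    return s, d

def inv_iwt_haar_1d(s, d):
    # undo the lifting steps in reverse order
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [5, 7, 1, 8, 3, 3, 2, 9]
s, d = iwt_haar_1d(x)
assert inv_iwt_haar_1d(s, d) == x   # perfect integer reconstruction
```

Because every lifting step is invertible integer arithmetic, reconstruction is exact, which is what makes the scheme attractive for RNS hardware that avoids carry propagation.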
The document proposes a novel Spatial Fuzzy C-Means (PET-SFCM) clustering algorithm to segment PET scan images of patients with neurodegenerative disorders like Alzheimer's disease. The algorithm incorporates spatial neighborhood information into the traditional Fuzzy C-Means algorithm. It was tested on real patient data sets and showed satisfactory results compared to conventional FCM and K-Means clustering algorithms. The PET-SFCM algorithm provides an effective way to segment PET images and analyze brain changes related to neurological conditions.
Discrete-wavelet-transform recursive inverse algorithm using second-order est...TELKOMNIKA JOURNAL
The recursive-least-squares (RLS) algorithm was introduced as an alternative to the LMS algorithm with enhanced performance. Computational complexity and instability in updating the autocorrelation matrix are some of the drawbacks of the RLS algorithm that were among the reasons for the introduction of the second-order recursive inverse (RI) adaptive algorithm. The second-order RI adaptive algorithm suffered from a low convergence rate in certain scenarios that required a relatively small initial step-size. In this paper, we propose a new second-order RI algorithm that projects the input signal to a new domain, namely the discrete wavelet transform (DWT), as a pre-processing step before performing the algorithm. This transformation overcomes the low convergence rate of the second-order RI algorithm by reducing the self-correlation of the input signal in the mentioned scenarios. Experiments are conducted using the noise cancellation setting. The performance of the proposed algorithm is compared to those of the RI, original second-order RI and RLS algorithms in different Gaussian and impulsive noise environments. Simulations demonstrate the superiority of the proposed algorithm in terms of convergence rate compared to those algorithms.
A New Approach to Linear Estimation Problem in Multiuser Massive MIMO SystemsRadita Apriana
A novel approach for solving the linear estimation problem in multi-user massive MIMO systems is proposed. In this approach, the difficulty of matrix inversion is attributed to the incomplete definition of the dot product. The general definition of the dot product implies that the columns of the channel matrix are always orthogonal whereas, in practice, they may not be. If the latter information can be incorporated into the dot product, then the unknowns can be computed directly from projections without inverting the channel matrix. By doing so, the proposed method achieves an exact solution with a 25% reduction in computational complexity compared to the QR method. The proposed method is stable, offers the extra flexibility of computing any single unknown, and can be implemented in just twelve lines of code.
COMPARISON OF VOLUME AND DISTANCE CONSTRAINT ON HYPERSPECTRAL UNMIXINGcsandit
The document compares two algorithms for hyperspectral image unmixing - one based on minimum volume constraint and one based on sum of squared distances constraint. It analyzes the performance of the two algorithms under different conditions like flatness of the endmember simplex, effects of initialization, and robustness to noise. The analysis shows that the sum of squared distances constraint performs better than the volume constraint for non-regular simplex shapes and is more robust to random initialization and noise. The comparison provides guidance on which constraint is more suitable for specific hyperspectral unmixing tasks.
A Unifying Probabilistic Perspective for Spectral Dimensionality Reduction:Sean Golliher
This document presents a unifying probabilistic perspective for spectral dimensionality reduction methods. It introduces the Maximum Entropy Unfolding (MEU) algorithm as a unified approach that other methods like Local Linear Embedding (LLE) are special cases of. MEU models dimensionality reduction as a density estimation problem with constraints, using a Gaussian random field to represent the density. It also introduces the Acyclic Locally Linear Embedding (ALLE) and Dimensionality Reduction through Regularization of the Inverse covariance in the Log Likelihood (DRILL) algorithms. Experimental results on motion capture and robot navigation data are presented to compare the performance of these methods.
The document describes a seminar report on using a divide-and-conquer algorithm to find the closest pair of points from a set of points in two dimensions. It discusses implementing both a brute-force algorithm that compares all pairs, taking O(n^2) time, and a divide-and-conquer algorithm that recursively divides the point set into halves and finds the closest pairs in each subset and near the dividing line, taking O(n log n) time. It provides pseudocode for both algorithms and discusses the history of and improvements to the closest-pair problem over time, reducing the number of distance computations needed.
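The two algorithms the report implements can be sketched as follows (a compact version assuming distinct points; `math.dist` is the standard Euclidean distance):

```python
import math

def brute(pts):
    # O(n^2): compare every pair
    return min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])

def closest_pair(pts):
    # O(n log n): pre-sort by x and by y once, then recurse
    px = sorted(pts)
    py = sorted(pts, key=lambda p: p[1])
    return _rec(px, py)

def _rec(px, py):
    n = len(px)
    if n <= 3:
        return brute(px)
    mid = n // 2
    mx = px[mid][0]
    lx, rx = px[:mid], px[mid:]
    lset = set(lx)
    ly = [p for p in py if p in lset]        # preserve y-order in each half
    ry = [p for p in py if p not in lset]
    d = min(_rec(lx, ly), _rec(rx, ry))
    # strip around the dividing line: each point needs comparing
    # with at most the next 7 strip points in y-order
    strip = [p for p in py if abs(p[0] - mx) < d]
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:
            d = min(d, math.dist(p, q))
    return d

pts = [(0.0, 0.0), (5.0, 5.0), (1.0, 1.0), (9.0, 0.0), (0.5, 0.6), (7.0, 2.0)]
assert abs(closest_pair(pts) - brute(pts)) < 1e-12
```

The 7-neighbor bound in the strip is the classical packing argument that keeps the merge step linear.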
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION cscpconf
This document compares different techniques for texture classification, including wavelet transforms and co-occurrence matrices. It finds that the Haar wavelet technique is the most efficient in terms of time complexity and classification accuracy, except when images are rotated. The co-occurrence matrix method has higher time requirements but excellent classification results, except for rotated images where accuracy is greatly reduced due to its dependence on pixel values. Overall, the Haar wavelet proves to be the best method for texture classification based on the performance assessment parameters of time complexity and classification accuracy.
This document summarizes a method for calculating the sensitivity matrix that defines the linear relationship between circuit parameters and poles/response of an RLC network. The sensitivity matrix enables efficient statistical analysis and yield predictions. It is obtained by taking derivatives of the poles and transfer function, which are calculated from the eigenvalues and eigenvectors of the network's state equation. An example RLC circuit demonstrates calculating the sensitivity matrix and using it to predict yield based on Monte Carlo simulations.
This document summarizes a survey on graph partitioning algorithms. It begins by defining the graph partitioning problem and describing its applications in areas like VLSI design and parallel finite element methods. It then provides an overview of several categories of sequential graph partitioning algorithms, including local improvement methods like Kernighan-Lin and Fiduccia-Mattheyses, as well as discussing parallel partitioning algorithms and conclusions from experimental comparisons of different approaches.
Further results on the joint time delay and frequency estimation without eige...IJCNCJournal
The Joint Time Delay and Frequency Estimation (JTDFE) problem for complex sinusoidal signals received at two separated sensors is an attractive problem that has been considered for several engineering applications. In this paper, a high-resolution null (noise) subspace method without eigenvalue decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method decreases the computational complexity of JTDFE approximately by half compared with RRQR methods. The proposed method generates estimates of the unknown parameters based on the observation and/or covariance matrices. This leads to a significant improvement in the computational load. Computer simulations are included in this paper to demonstrate the proposed method.
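The rank-revealing idea can be illustrated with plain Gaussian elimination with partial pivoting, where the magnitude of the pivots on the diagonal of U indicates the numerical rank. This is a simplification for illustration only: true RRLU factorizations use additional pivoting strategies to guarantee rank revelation in the worst case.

```python
import numpy as np

def lu_numerical_rank(A, tol=1e-10):
    # Gaussian elimination with partial pivoting; the magnitudes of the
    # pivots (diagonal of U) indicate the numerical rank of A
    U = A.astype(float).copy()
    m, n = U.shape
    for k in range(min(m, n)):
        p = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting
        U[[k, p]] = U[[p, k]]
        piv = U[k, k]
        if abs(piv) > tol:
            U[k + 1:, k:] -= np.outer(U[k + 1:, k] / piv, U[k, k:])
    pivots = np.abs(np.diag(U))
    return int(np.sum(pivots > tol * max(pivots.max(), 1.0)))

# rank-2 matrix: the third row is the sum of the first two
A = np.array([[1., 2., 3.], [4., 5., 6.], [5., 7., 9.]])
assert lu_numerical_rank(A) == 2
```

The small pivots left at the bottom of U are what span the numerical null space that the JTDFE method works with.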
3 - A critical review on the usual DCT Implementations (presented in a Malays...Youness Lahdili
1) The document reviews and compares various implementations of the Discrete Cosine Transform (DCT), which is widely used in video and image compression standards.
2) It finds that the Loeffler algorithm achieves the theoretical minimum of 11 multiplications for an 8-point DCT, while the Arai algorithm (a scaled DCT) requires only 5 multiplications and 29 additions.
3) The document concludes that while the DCT has been very successful, other transforms like the Discrete Wavelet Transform used in the Daala standard may provide alternatives worth further research to reduce blocking artifacts at high compression.
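For context on those multiplication counts, the direct-form 8-point DCT-II uses 8 multiplications per coefficient, 64 in total, which is the baseline the Loeffler and Arai factorizations improve on. A naive reference implementation:

```python
import math

def dct_ii_naive(x):
    # direct-form DCT-II: X_k = sum_n x_n * cos(pi*(2n+1)*k / (2N));
    # for N = 8 this costs 64 multiplications, versus 11 for Loeffler
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]

# a constant input puts all its energy in the DC coefficient
X = dct_ii_naive([1.0] * 8)
assert abs(X[0] - 8.0) < 1e-9 and all(abs(c) < 1e-9 for c in X[1:])
```

Fast algorithms factor this cosine matrix into sparse butterfly stages, which is where the multiplication count drops.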
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program into a second-order cone program to improve performance metrics such as percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
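As a sketch of l1-based sparse recovery, here is iterative soft thresholding (ISTA) on a synthetic sparse signal; this is a standard stand-in for the l1 step, not the paper's second-order cone formulation or its ECG data.

```python
import numpy as np

def ista(Phi, y, lam=0.02, iters=3000):
    # iterative soft thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = Phi.T @ (Phi @ x - y)            # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100)) / np.sqrt(50)   # random sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]               # 3-sparse signal
y = Phi @ x_true                                     # 50 measurements of 100 unknowns
x_hat = ista(Phi, y)
# the three largest recovered entries sit on the true support
assert set(np.argsort(np.abs(x_hat))[-3:].tolist()) == {5, 37, 80}
```

The same recovery principle underlies the paper's reconstruction; the cone-program formulation mainly changes how the constrained l1 problem is solved.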
Massive parallelism with gpus for centrality ranking in complex networksijcsit
Many problems in Computer Science can be modelled using graphs. Evaluating node centrality in complex networks, which can be considered equivalent to undirected graphs, provides a useful metric of the relative importance of each node inside the evaluated network. Knowing which nodes are the most central has various applications, such as improving information spreading in diffusion networks; in this case, the most central nodes can be considered to have higher influence over other nodes in the network. The main purpose of this work is to develop a GPU-based, massively parallel application to evaluate node centrality in complex networks using the Nvidia CUDA programming model; the main contribution is the set of strategies for the development of such an algorithm. We show that the strategies improve the algorithm's speed-up by two orders of magnitude on one NVIDIA Tesla K20 GPU cluster node, when compared to the hybrid OpenMP/MPI version running in the same cluster, with 4 nodes of 2 Intel(R) Xeon(R) CPU E5-2660 each, for radius zero.
JAVA BASED VISUALIZATION AND ANIMATION FOR TEACHING THE DIJKSTRA SHORTEST PAT...ijseajournal
This document describes a Java software tool developed to help transportation engineering students understand the Dijkstra shortest path algorithm. The software provides an intuitive interface for generating transportation networks and animating how the shortest path is updated at each iteration of the Dijkstra algorithm. It offers multiple visual representations like color mapping and tables. The software can step through each iteration or run continuously, and includes voice narratives in different languages to further aid comprehension. A demo video of the animation and results is available online.
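The iteration that the tool animates is the classic Dijkstra loop: repeatedly settle the unvisited node with the smallest tentative distance and relax its outgoing edges. A minimal reference version using a binary heap (Python here for brevity; the tool itself is in Java):

```python
import heapq

def dijkstra(graph, src):
    # graph: {node: [(neighbor, weight), ...]}; returns shortest distances
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w                    # relax edge (u, v)
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {'A': [('B', 4), ('C', 2)],
     'C': [('B', 1), ('D', 5)],
     'B': [('D', 1)]}
assert dijkstra(g, 'A') == {'A': 0, 'C': 2, 'B': 3, 'D': 4}
```

Each pass through the outer loop corresponds to one animation step in the described software: one node turns "settled" and its neighbors' labels update.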
This document summarizes a research paper that designed and implemented sphere decoding (SD) for multiple-input multiple-output (MIMO) systems using an FPGA. It used Newton's iterative method to calculate the matrix inverse as part of the SD algorithm, which reduces complexity compared to direct matrix inversion. The authors implemented SD for a 2x2 MIMO system with 4-QAM modulation. Simulation results showed that Newton's method converged after 7 iterations, and SD successfully calculated the minimum Euclidean distance vector.
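The matrix-inverse step can be sketched with the Newton-Schulz iteration, a standard form of Newton's method for matrix inversion; this is our own illustration and does not reproduce the authors' exact variant or their 7-iteration FPGA setting.

```python
import numpy as np

def newton_inverse(A, iters=30):
    # Newton-Schulz iteration: X_{k+1} = X_k (2I - A X_k),
    # quadratically convergent when ||I - A X_0|| < 1
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe start
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[2.0, 1.0], [1.0, 3.0]])
X = newton_inverse(A)
assert np.allclose(A @ X, np.eye(2), atol=1e-8)
```

The appeal for hardware is that each step is only matrix multiplications, which map well onto FPGA multiply-accumulate fabric, avoiding explicit division-heavy inversion.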
A preliminary evaluation of bandwidth allocation model dynamic switchingIJCNCJournal
Bandwidth Allocation Models (BAMs) are used in order to define Bandwidth Constraints (BCs) on a per-class basis for MPLS/DS-TE networks, and effectively define how network resources like bandwidth are obtained and shared by applications. The proposed BAMs (MAM – Maximum Allocation Model, RDM – Russian Dolls Model, G-RDM – Generic RDM and AllocTC-Sharing) attempt to optimize the use of bandwidth resources on a per-link basis with different allocation and resource-sharing characteristics. As such, the adoption of distinct BAMs and/or changes in network resource demands (network traffic profile) may result in different network traffic allocation and operational behavior for distinct BAMs. This paper evaluates the resulting network characteristics (link utilization, preemption and flow blocking) of using BAMs dynamically with different traffic scenarios. In brief, it investigates the dynamics of BAM switching under distinct traffic scenarios. The paper first presents the investigated BAMs in relation to their behavior and resource allocation characteristics. Then, distinct BAMs are compared using different traffic scenarios in order to investigate the impact of a dynamic change of the BAM configured in the network. Finally, the paper shows that the adoption of a dynamic BAM allocation strategy may result in benefits for network operation in terms of link utilization, preemption and flow blocking.
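The per-link admission rules of MAM (RFC 4125) and RDM (RFC 4127) can be sketched as follows; this is a simplified model with one reservation vector per link, and preemption (which the paper also measures) is omitted.

```python
def mam_admit(reserved, bc, ct, bw, capacity):
    # MAM (RFC 4125): class type ct may not exceed its own constraint BC_ct,
    # and the sum of all reservations may not exceed the link capacity
    if reserved[ct] + bw > bc[ct]:
        return False
    return sum(reserved) + bw <= capacity

def rdm_admit(reserved, bc, ct, bw):
    # RDM (RFC 4127): for every level b <= ct, the reservations of class
    # types b..max together may not exceed BC_b (nested "Russian dolls")
    for b in range(ct + 1):
        if sum(reserved[b:]) + bw > bc[b]:
            return False
    return True

# one link with three class types (CT0..CT2)
assert mam_admit([20, 10, 10], [50, 30, 20], ct=1, bw=15, capacity=100)
assert not mam_admit([20, 10, 10], [50, 30, 20], ct=1, bw=25, capacity=100)
assert not rdm_admit([20, 10, 10], [100, 60, 30], ct=2, bw=25)  # breaks BC_2
assert rdm_admit([20, 10, 10], [100, 60, 30], ct=2, bw=5)
```

Switching the BAM at runtime, as the paper studies, amounts to swapping which admission function (and which BC vector) governs new reservations on the link.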
HONEYPOTLABSAC: A VIRTUAL HONEYPOT FRAMEWORK FOR ANDROIDIJCNCJournal
1. The document presents HoneypotLabsac, a virtual honeypot framework for Android that aims to learn more about targeted mobile device attacks.
2. The framework allows emulating services like telnet, HTTP, and SMS to collect log data from interactions without compromising the actual device.
3. HoneypotLabsac generates log files of all emulated service connections and SMS messages, stores them on the device, and sends them periodically to a log server for analysis.
ON THE OPTIMIZATION OF BITTORRENT-LIKE PROTOCOLS FOR INTERACTIVE ON-DEMAND ST...IJCNCJournal
This paper proposes two novel optimized BitTorrent-like protocols for interactive multimedia streaming: the Simple Interactive Streaming Protocol (SISP) and the Exclusive Interactive Streaming Protocol (EISP). The former chiefly seeks a trade-off between playback continuity and data diversity, while the latter is mostly focused on playback continuity. To assure a thorough and up-to-date approach, related work is carefully examined and important open issues, concerning the design of BitTorrent-like algorithms, are analyzed as well. Through simulations, in a variety of near-real file replication scenarios, the novel protocols are evaluated using distinct performance metrics. Among the major findings, the final results show that the two novel proposals are efficient and, besides, focusing on playback continuity ends up being the best design concept to achieve high quality of service. Lastly, avenues for further research are included at the end of this paper as well
A Secure Data Communication System Using Cryptography and SteganographyIJCNCJournal
Information security has become one of the most significant problems in data communication, and so it becomes an inseparable part of it. In order to address this problem, cryptography and steganography can be combined. This paper proposes a secure communication system that employs a cryptographic algorithm together with steganography. Joining these techniques provides a robust and strong communication system able to withstand attacks. In this paper, the filter bank cipher is used to encrypt the secret text message; it provides a high level of security, scalability and speed. After that, a discrete wavelet transform (DWT) based steganography is employed to hide the encrypted message in the cover image by modifying the wavelet coefficients. The performance of the proposed system is evaluated using peak signal-to-noise ratio (PSNR) and histogram analysis. The simulation results show that the proposed system provides a high level of security.
In network aggregation using efficient routing techniques for event driven se...IJCNCJournal
Sensors used in applications such as agriculture and weather monitoring, which measure physical parameters like soil moisture, temperature and humidity, have to sustain their battery power for long intervals of time. In order to accomplish this, parameters which assist in reducing power consumption from the battery need to be attended to. One of the factors affecting energy consumption is transmit and receive power; this energy consumption can be reduced by avoiding unnecessary transmission and reception. Efficient routing techniques, and incorporating aggregation whenever possible, can save a considerable amount of energy. Aggregation reduces repeated transmission of related values and also reduces a lot of computation at the base station. In this paper, the benefits of aggregation over direct transmission in saving energy are discussed, and routing techniques which assist aggregation are incorporated. Aspects like transmission of the average value of sensed data around an area of the network, the minimum value in the whole network, and triggering of an event when the battery is low are assimilated.
A child's diet and nutrition is a growing concern in Mexico: obesity rates have tripled in the last three decades in our country. Thus, a novel and attractive tool designed for youth is deployed, whose main objective is to reduce this problem in a friendly and intelligent way. It includes a planning system based on propositional logic named DLVK and a novel architecture that allows us to extend our solution to mobile devices. In this proposal we consider goal-based agents that use more advanced structured representations and are usually called planning agents. These agents are supposed to maximize their performance measure; achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it. Thus, we first present the well-known problem called the Wumpus world. Second, we present an application to assist in the solution of nutrition in Mexico: planning for nourishing using DLVK, implemented for mobile devices through the WAP protocol using the GPRS service. Both problems are viewed as logic programs (in DLVK) whose answer sets correspond to solutions; plans correspond to answer sets for these programs, in the spirit of answer set programming.
AN EFFICIENT SPECTRUM SHARING METHOD BASED ON GENETIC ALGORITHM IN HETEROGENE...IJCNCJournal
With advances in wireless communication technologies, users can access rich content not only via wired networks but also via wireless networks such as Cellular, WiFi, and WiMAX. On the other hand, the lack of spectrum resources is becoming an important problem for future wireless networks. To overcome this problem, dynamic spectrum access technology has received much attention. In this paper, we propose a novel spectrum sharing method based on a genetic algorithm, in which a WiFi system temporarily uses a spectrum band of a WiMAX system in WiFi/WiMAX integrated networks as a typical heterogeneous wireless network. Finally, we confirm the effectiveness of the proposed method by simulation experiments.
This document summarizes and compares the performance of three asymmetric cryptographic algorithms (RSA, ECC, and MQQ) on ARM processor-based embedded systems. It provides background on each algorithm, including how they work and their computational complexities. The document then describes simulations conducted using the SimpleScalar tool to analyze the processing time, memory usage, and processor usage of each algorithm. The results showed that the MQQ algorithm performed better than ECC and RSA on embedded systems in terms of these metrics.
Peak detection using wavelet transformIJCNCJournal
A new wavelet-transform-based work is designed to overcome one of the main drawbacks found in present new technologies. Orthogonal Frequency Division Multiplexing (OFDM) is proposed in the literature to enhance multimedia resolution; however, high peak-to-average power ratio (PAPR) values obstruct such achievements. Therefore, a new proposition is presented in this work, making use of wavelet transform methods, and it is divided into three main stages: a de-noising stage, a thresholding stage and then a replacement stage.
In order to check the validity of the system stages, a mathematical model has been built, and it is verified using a MATLAB simulation. The simulated bit error rate (BER) achievement is compared with our previously published work, where an enhancement from 8×10⁻¹ to 5×10⁻¹ is achieved. Moreover, these results are compared to the work found in the literature, where we have accomplished around 27% extra PAPR reduction.
As a result, the BER performance has been improved for the same bandwidth occupancy. Moreover, due to the de-noising stage, the verification rate has been improved to reach 81%, in addition to the noise immunity enhancement.
Comparative analyisis on some possible partnership schemes of global ip excha...IJCNCJournal
IPX (IP eXchange) is GSMA's proposal for an IP interconnection model which supports multiple services to offer end-to-end QoS, security, interoperability and SLAs through a dedicated connection. It provides a commercial and technical solution to manage IP traffic and follows GSMA's four key IP interworking principles: openness, quality, cascading payments, and efficient connectivity. In order to get global IPX reachability, it is possible for an IPX provider to build partnerships with other global IPX providers in business and network configuration. There are some possible partnership schemes between IPX providers, such as peering mode, semi-hosted mode, full-hosted mode, or a combination of these modes. The implementation of the schemes will be on a case-by-case basis, with considerations based on (but not limited to) the IPX provider's network assets and coverage, services and features offered, commercial offer, and customers. For an IPX provider to become competitive in the IPX business and become a global IPX hub, some added value should be considered, such as cost efficiency and great network performance. To achieve this, an IPX provider could implement strategies such as building network synergy with its partners to develop the IPX service as a single offering, and offering its customers bundled access networks and services. An IPX provider should also consider its existing customer base, which can benefit its bargaining position towards other potential IPX provider partners when determining the price and business scheme for a partnership.
AN ADAPTIVE PSEUDORANDOM STEGO-CRYPTO TECHNIQUE FOR DATA COMMUNICATIONIJCNCJournal
The document describes a proposed adaptive pseudorandom stego-crypto technique for data communication. The technique combines stream cipher cryptography with a modified pseudorandom LSB substitution technique. This provides an evenly distributed cipher text while also enhancing security through increased brute force search times and reduced time complexity by avoiding collisions during random pixel selection. The proposed method uses three parameters that are optimized through experimental analysis to minimize distortions, increase cipher text scattering, and reduce collisions and time complexity. Results demonstrate the technique maintains good perceptual quality while improving upon previous methods.
Performance comparison of mobile ad hoc network routing protocolsIJCNCJournal
Mobile Ad-hoc Network (MANET) is an infrastructure-less and decentralized network which needs a robust dynamic routing protocol. Many routing protocols for such networks have been proposed so far to find optimized routes from source to destination, and prominent among them are the Dynamic Source Routing (DSR), Ad-hoc On-Demand Distance Vector (AODV), and Destination-Sequenced Distance Vector (DSDV) routing protocols. The performance comparison of these protocols should be considered as the primary step towards the invention of a new routing protocol. This paper presents a performance comparison of the proactive and reactive routing protocols DSDV, AODV and DSR based on QoS metrics (packet delivery ratio, average end-to-end delay, throughput, jitter), normalized routing overhead and normalized MAC overhead by using the NS-2 simulator. The performance comparison is conducted by varying mobility speed, number of nodes and data rate. The comparison results show that AODV performs well overall, though it is not the best among all the studied protocols.
Comparative performance analysis of different modulation techniques for papr ...IJCNCJournal
One of the most important multi-carrier transmission techniques used in the latest wireless communication arena is Orthogonal Frequency Division Multiplexing (OFDM). It has several characteristics such as providing greater immunity to multipath fading and impulse noise, and eliminating Inter-Symbol Interference (ISI) and Inter-Carrier Interference (ICI) using a guard interval known as the Cyclic Prefix (CP). A regular difficulty of the OFDM signal is its high peak-to-average power ratio (PAPR), defined as the ratio of the peak power to the average power of the OFDM signal. An improved design of the amplitude clipping and filtering technique that we proposed previously reduced a significant amount of PAPR with a slight increase in bit error rate (BER) compared to an existing method for Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). This paper investigates a comparative performance analysis of different higher-order modulation techniques on that design.
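The PAPR definition above and the clipping step can be sketched as follows; this is a simplified illustration (64 QPSK subcarriers, one clipping ratio, no filtering stage), not the paper's full clipping-and-filtering design.

```python
import numpy as np

def papr_db(x):
    # PAPR = peak instantaneous power / mean power, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_amplitude(x, cr_db):
    # amplitude clipping at a threshold CR dB above the RMS level,
    # preserving the phase of each clipped sample
    a = np.abs(x)
    rms = np.sqrt((a ** 2).mean())
    t = rms * 10 ** (cr_db / 20)
    y = x.copy()
    mask = a > t
    y[mask] = t * x[mask] / a[mask]
    return y

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, (2, 64))
qpsk = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)  # QPSK symbols
x = np.fft.ifft(qpsk) * np.sqrt(64)          # 64-subcarrier OFDM time signal
assert papr_db(clip_amplitude(x, 3.0)) < papr_db(x)   # clipping lowers PAPR
```

Clipping caps the peaks (lowering PAPR) at the cost of in-band distortion, which is exactly the BER trade-off the abstract describes.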
Analysis of wifi and wimax and wireless network coexistenceIJCNCJournal
Wireless networks are very popular nowadays. Wireless Local Area Network (WLAN) that uses the IEEE 802.11 standard and WiMAX (Worldwide Interoperability for Microwave Access) that uses the IEEE802.16 standard are networks that we want to explore. WiMAX has been developed over 10 years, but it is still unknown by most people. However, compared with WLAN, it has many advantages in transmission speed and coverage area. This paper will introduce these two technologies and make comparisons between WiMAX and WiFi. In addition, wireless network coexistence of WLAN and WiMAX will be explored through simulation. Lastly we want to discuss the future of WiMAX in relation to WiFi.
Network coding combined with onion routing for anonymous and secure communica...IJCNCJournal
This paper presents a novel scheme that provides a high level of security and privacy in a Wireless Mesh Network (WMN). We combine a Network Coding approach with the multiple-layered encryption of onion routing for a WMN; this added feature provides a higher level of security and privacy. Sensitive network information is confined to the 1-hop neighborhood, which is available anyway in a wireless medium, with nodes using a bivariate polynomial. The only routing information divulged to a relay node is about the next hop. No plain text is ever transmitted, and all data can only be decrypted by its source and destination. Prior work finds it difficult to enforce encryption with network coding without divulging complete
routing information, hence losing privacy and anonymity. We compare our scheme with other existing approaches for several networks. The preliminary results show this work to provide superior security and anonymity at low overhead cost.
The document describes a pairwise key establishment scheme for ad hoc networks. It proposes using cellular automata rules to dynamically establish shared keys between two nodes. Each node sends either a cellular automata rule or the initialization parameters to the other node. The receiving node then uses the rule and parameters along with cellular automata computations to independently derive the shared key. This allows keys to be established dynamically without transmitting the actual keys or requiring an online server. The scheme aims to provide secure communication through pairwise key establishment while being computationally efficient and not relying on predistributed keys.
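The key-derivation idea can be illustrated with an elementary cellular automaton (Wolfram rule numbering; the paper's actual rules and parameters may differ): both nodes evolve the same rule from the same initialization, and therefore obtain the same key without it ever being transmitted.

```python
def ca_step(state, rule):
    # one synchronous update of an elementary cellular automaton with a
    # circular neighborhood; rule is the Wolfram rule number (0..255)
    n = len(state)
    return [(rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
            for i in range(n)]

def derive_key(rule, seed_bits, steps=64):
    # evolve the shared initial configuration and read out the final state
    s = list(seed_bits)
    for _ in range(steps):
        s = ca_step(s, rule)
    return int(''.join(map(str, s)), 2)

seed = [0] * 31 + [1] + [0] * 32      # 64-bit shared initial configuration
k1 = derive_key(30, seed)             # node A
k2 = derive_key(30, seed)             # node B: same rule + seed -> same key
assert k1 == k2
```

In the described scheme, one node sends the rule and the other the initialization parameters, so an eavesdropper who misses either message cannot reproduce the evolution.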
1) The document presents a mathematical model for optimizing the value of k in a k-fold multicast network under different traffic loads.
2) The model derives the stationary distribution of network states and develops expressions for throughput and blocking probability.
3) Results show that network throughput increases significantly as k increases up to a point, after which throughput levels off, and blocking probability levels off after a certain k value as well. An optimum k value minimizes blocking probability for a given traffic load.
EFFECTS OF FILTERS ON DVB-T RECEIVER PERFORMANCE UNDER AWGN, RAYLEIGH, AND RI...IJCNCJournal
This document discusses filters used in DVB-T receivers and their effect on performance under different channel conditions. It investigates DVB-T system performance under AWGN, Rayleigh, and Ricean fading channels. It also examines using Butterworth, Chebyshev, and elliptic filters in the receiver to improve performance. The simulation results show that carefully selecting the receiver filter based on the channel conditions can optimize DVB-T system performance.
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTIONijwmn
Multiple-input multiple-output (MIMO) systems are playing an important role in recent wireless communication. The complexity of the different system models challenges researchers to find a good complexity-to-performance balance. Lattice reduction techniques and the Lenstra-Lenstra-Lovász (LLL) algorithm bring more resources to investigate and can contribute to complexity reduction.
In this paper, we modify the LLL algorithm to reduce the computational operations by exploiting the structure of the upper triangular matrix, without "big" performance degradation. Basically, the first columns of the upper triangular matrix contain many zeroes, so the algorithm would perform several operations with very limited return. We present a performance and complexity study, and our proposal shows that we can gain in terms of complexity while the performance results remain almost the same.
This document discusses the performance of matching algorithms for signal approximation. It begins by introducing matching pursuit algorithms such as Orthogonal Matching Pursuit (OMP) and Stagewise Orthogonal Matching Pursuit (StOMP), which are greedy algorithms that approximate sparse signals. It then describes the Non-Negative Least Squares algorithm, which solves least-squares problems under non-negativity constraints. Finally, it discusses Extraneous Equivalent Detection (EED), a modification of OED that incorporates the non-negativity of representations by using a non-negative optimization technique instead of an orthogonal projection.
Performance of Matching Algorithms for Signal Approximation iosrjce
The document summarizes and compares several algorithms for signal approximation and sparse signal recovery, including Equivalent Detection (ED), Non-negative Equivalent Detection, Orthogonal Matching Pursuit (OMP), and Stagewise Orthogonal Matching Pursuit (StOMP). It discusses how each algorithm works, including how atoms are iteratively selected from a dictionary to build up a sparse representation of the signal: OMP selects one atom per iteration, while StOMP selects all atoms above a threshold. The document also discusses the computational complexities of the different algorithms.
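As an illustration of the greedy atom selection described above, here is a minimal OMP sketch in NumPy; the function name, dictionary sizes and test signal are assumptions for illustration, not the document's implementation:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: per iteration, pick the single atom
    most correlated with the residual, then re-fit all selected atoms
    by an orthogonal least-squares projection."""
    residual = y.astype(float)
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # orthogonal projection onto the span of the chosen atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# A 3-sparse vector measured through a random unit-norm dictionary
# is recovered from y alone.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.5, -2.0, 0.7]
x_hat = omp(D, D @ x_true, k=3)
```

With these sizes the 3-sparse signal is recovered exactly in the noiseless case; StOMP would differ only in selecting, per stage, every atom whose correlation exceeds a threshold.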
Iaetsd a novel scheduling algorithm for MIMO-based wireless networks Iaetsd
This document proposes new scheduling algorithms for MIMO wireless networks to improve system performance. It discusses designing practical user scheduling algorithms to maximize capacity in MIMO systems. Various MAC scheduling policies are implemented and modified to provide distributed traffic control, robustness against interference, and increased efficiency of resource utilization. Simulations using MATLAB compare the different policies and draw important results and conclusions. The paper suggests new priority scheduling and partially fair scheduling algorithms incorporating awareness of interference to improve system-level performance in MIMO wireless networks.
Fault Tolerant Matrix Pencil Method for Direction of Arrival Estimation sipij
Continuing to estimate the Direction of Arrival (DOA) of the signals impinging on an antenna array even when a few elements of the underlying Uniform Linear Array (ULA) fail to work is of practical interest in RADAR, SONAR, and wireless radio communication systems. This paper proposes a new technique to estimate the DOAs when a few elements are malfunctioning. The technique combines a Singular Value Thresholding (SVT) based Matrix Completion (MC) procedure with the Direct Data Domain (D3) based Matrix Pencil (MP) method. When element failure is observed, MC is first performed to recover the missing data from the failed elements, and then the MP method is used to estimate the DOAs. We also propose a very simple technique to detect the locations of the failed elements, which is required to perform the MC procedure. We provide simulation studies to demonstrate the performance and usefulness of the proposed technique. The results indicate better performance of the proposed DOA estimation scheme under different antenna failure scenarios.
Drobics, M. 2001: Data mining using synergies between self-organising maps and... ArchiLab 7
The document describes a three-stage approach to data mining that uses self-organizing maps, clustering, and fuzzy rule induction. In the first stage, a self-organizing map is used to reduce the data size while preserving topology. In the second stage, clustering identifies regions of interest. In the third stage, fuzzy rules are generated to describe the clusters. The approach was tested on image and real-world datasets and produced intuitive results.
Discrete wavelet transform-based RI adaptive algorithm for system identification IJECEIAES
In this paper, we propose a new adaptive filtering algorithm for system identification. The algorithm is based on the recursive inverse (RI) adaptive algorithm, which suffers from low convergence rates in some applications, i.e., when the eigenvalue spread of the autocorrelation matrix is relatively high. The proposed algorithm applies the discrete wavelet transform (DWT) to the input signal, which, in turn, helps to overcome the low convergence rate of the RI algorithm with relatively small step-size(s). Different scenarios have been investigated in different noise environments in a system identification setting. Experiments demonstrate the advantages of the proposed DWT recursive inverse (DWT-RI) filter in terms of convergence rate and mean-square error (MSE) compared to the RI, discrete cosine transform LMS (DCT-LMS), discrete wavelet transform LMS (DWT-LMS) and recursive least-squares (RLS) algorithms under the same conditions.
Ill-posedness formulation of the emission source localization in the radio- d...Ahmed Ammar Rebai PhD
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown the strong dependence of the solution of the radio-transient source localization problem (based on the radio-shower times of arrival on the antennas) on the numerical treatment; such solutions can be purely numerical artifacts. Based on a detailed analysis of some already published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches have been used: the degeneracy of the set of solutions and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical studies. Several properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
PERFORMANCE EVALUATION OF ADAPTIVE ARRAY ANTENNAS IN COGNITIVE RELAY NETWORK csijjournal
1) Adaptive array antennas in cognitive relay networks are proposed to improve system performance by reducing symbol error rate. Wiener solution and least mean square algorithms are explored to calculate optimal antenna weights.
2) Simulations show that placing adaptive array antennas at the source node provides better performance than at the relay or destination nodes. Additional enhancements include increasing the source gain, decreasing the relay gain, and using more antenna elements.
3) Under different channel conditions, adaptive array antennas perform best under additive white Gaussian noise and improve under Rayleigh fading by increasing the step size or number of iterations in the least mean square algorithm.
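To make the LMS weight adaptation mentioned in these summaries concrete, here is a hedged sketch of complex LMS for array weights; the signal model (a BPSK reference arriving at broadside of a 4-element array) and all names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def lms_beamformer(X, d, mu):
    """Complex LMS weight adaptation for an adaptive array.
    X: (n_snapshots, n_elements) snapshots; d: desired reference signal."""
    w = np.zeros(X.shape[1], dtype=complex)
    for x, d_k in zip(X, d):
        y = np.vdot(w, x)              # array output w^H x
        e = d_k - y                    # error vs. the reference
        w = w + mu * np.conj(e) * x    # steepest-descent weight update
    return w

# Toy scenario: BPSK reference at broadside of a 4-element ULA
# (steering vector of ones) plus a little element noise.
rng = np.random.default_rng(1)
a = np.ones(4, dtype=complex)
s = rng.choice([-1.0, 1.0], size=500)
X = s[:, None] * a + 0.01 * rng.standard_normal((500, 4))
w = lms_beamformer(X, s, mu=0.05)
```

After a few hundred snapshots the combined response w^H a approaches 1, so the array output tracks the reference; increasing the step size mu or the number of iterations speeds or refines this convergence, as the summary notes.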
The document discusses an algorithm called Adaptive Multichannel Component Analysis (AMMCA) for separating image sources from mixtures using adaptively learned dictionaries. It begins by reviewing image denoising using learned dictionaries, then extends this to image separation from single mixtures. The key contribution is applying this approach to separating sources from multichannel mixtures by learning local dictionaries for each source during the separation process. The algorithm is described and simulated results are shown separating two images from a noisy mixture using the learned dictionaries. In conclusion, AMMCA is able to separate sources without prior knowledge of their sparsity domains by fusing dictionary learning into the separation process.
The document describes the implementation of a wideband spectrum sensing algorithm using a software-defined radio. It discusses using an energy detection based approach to sense the local frequency spectrum and determine which portions are unused. The algorithm is first tested via simulations in MATLAB using known signal parameters. It is then tested using real data collected from a Universal Software Radio Peripheral (USRP) to analyze the actual wireless spectrum.
The document discusses denoising techniques for images captured by single-sensor digital cameras using a color filter array (CFA). It compares principal component analysis (PCA) and independent component analysis (ICA) based denoising of CFA images. PCA and ICA are linear adaptive transforms that can be used to represent image data in a way that better distinguishes signal from noise. The document outlines the PCA and ICA algorithms and discusses how K-means clustering can be used with them. It generates noise to add to a reference image and implements PCA and ICA based denoising in MATLAB. Performance is evaluated using metrics like PSNR, WPSNR, SSIM and correlation coefficient.
Energy Efficient Power Failure Diagnosis For Wireless Network Using Random G... idescitation
This paper presents an innovative design for efficient power management and power
failure diagnosis in the area of wireless sensor networks. A wireless network consists
of a web of networks where hundreds of nodes are connected to each other wirelessly.
A critical issue in wireless sensor networks at present is the limited availability of
energy within network nodes. Therefore, making good use of energy is necessary when
modeling a sensor network. In this paper we propose a new model of wireless sensor
networks in three-dimensional space using the percolation model, a kind of random
graph in which edges are formed between neighboring nodes. An algorithm is described
in which power failure diagnosis is performed and the failures are resolved. The paper
also covers the proper selection of ideal networks using concepts from random graph
theory, and includes the mathematics of complete fault diagnosis, from solving the
problem to continuing the process flow.
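The percolation-style construction described above, with edges formed between neighboring nodes, can be sketched as a random geometric graph in 3-D; this helper is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def geometric_graph_3d(points, r):
    """Percolation-style random geometric graph: an edge joins every
    pair of nodes whose 3-D Euclidean distance is below the radius r."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < r:
                edges.append((i, j))
    return edges

# Three sensor nodes: two close together, one far away.
pts = np.array([[0, 0, 0], [0, 0, 0.5], [5, 5, 5]], dtype=float)
edges = geometric_graph_3d(pts, 1.0)
```

Percolation theory then asks for which node density and radius r such a graph becomes (almost surely) connected, which is what makes it useful for reasoning about network-wide fault diagnosis.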
DCE: A NOVEL DELAY CORRELATION MEASUREMENT FOR TOMOGRAPHY WITH PASSIVE REAL... ijdpsjournal
Tomography is important for network design and routing optimization. Prior approaches require either
precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit
probing traffic, resulting in limited scalability. To address the first issue we propose a novel Delay
Correlation Estimation methodology, named DCE, with no need for synchronization or special cooperation.
For the second issue we develop a passive realization mechanism that merely uses regular data flow
without explicit bandwidth consumption. Extensive simulations in OMNeT++ are made to evaluate its
accuracy, and we show that the DCE measurement closely matches the true value. The test results also
show that the passive realization mechanism achieves both regular data transmission and the purpose of
tomography, with excellent robustness across different background traffic levels and packet sizes.
PERFORMANCE EVALUATION OF ADAPTIVE ARRAY ANTENNAS IN COGNITIVE RELAY NETWORK csijjournal
Adaptive Array Antennas (AAAs) are expected to play a key role in meeting the demands of future wireless communication systems. AAAs in a cognitive relay network are proposed to reduce the symbol error rate and improve system performance. Algorithms such as the Wiener solution and least mean squares (LMS) are explained to show how AAAs in a cognitive relay network achieve this objective. AAAs at different locations are investigated under AWGN and Rayleigh fading channels. Moreover, the system performance is enhanced by increasing the number of AAA elements at the relay node, increasing the source gain and decreasing the relay gain. In addition, increasing the rate adaptation and the number of iterations in the LMS algorithm yields a significant improvement in the system.
FPGA Design & Simulation Modeling of Baseband Data Transmission System IOSR Journals
This document describes the design, simulation, and modeling of a baseband data transmission system using FPGA and MATLAB. It presents the theoretical background of baseband signaling and matched filters. A system is implemented with an RC filter for pre-detection filtering to improve the bit error rate. The complete system is simulated in MATLAB and results are compared to theoretical predictions. The simulation agrees with theory and shows a 2.35 dB degradation when using an RC filter compared to an optimal matched filter. The combination of theoretical background, simulation, and experimental results helps strengthen student understanding.
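A toy sketch of the matched filtering discussed above, under assumed parameters (rectangular pulse, bipolar NRZ), not the paper's FPGA or MATLAB model:

```python
import numpy as np

# Bipolar NRZ over a rectangular pulse; the matched filter is the
# time-reversed pulse, and its output is sampled at the end of each
# symbol interval, where the correlation peaks.
pulse = np.ones(8)
bits = np.array([1, 0, 1, 1])
tx = np.concatenate([(2 * b - 1) * pulse for b in bits])
mf = pulse[::-1]                       # matched filter impulse response
y = np.convolve(tx, mf)
samples = y[len(pulse) - 1::len(pulse)][:len(bits)]
decisions = (samples > 0).astype(int)
```

Sampling at the correlation peak is what maximizes the SNR at the decision instant; an RC pre-detection filter, as in the paper, only approximates this peak, which is the source of the reported 2.35 dB degradation.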
This document provides contact information for VENSOFT Technologies and describes 25 MATLAB projects for the 2013-2014 academic year related to signal processing topics such as phase noise estimation in MIMO systems, distributed averaging algorithms, channel estimation, computation of the moment generating function for lognormal distributions, compressed sensing of EEG data, and compressed sensing for wireless monitoring of fetal ECG signals. The contact for projects is provided as VENSOFT Technologies, their website, and a phone number.
Similar to Exact network reconstruction from consensus signals and one eigen value (20)
Particle Swarm Optimization–Long Short-Term Memory based Channel Estimation w...IJCNCJournal
Paper Title
Particle Swarm Optimization–Long Short-Term Memory based Channel Estimation with Hybrid Beam Forming Power Transfer in WSN-IoT Applications
Authors
Reginald Jude Sixtus J and Tamilarasi Muthu, Puducherry Technological University, India
Abstract
Non-Orthogonal Multiple Access (NOMA) helps overcome various difficulties in future wireless communication technologies. When NOMA is utilized with millimeter-wave multiple-input multiple-output (MIMO) systems, channel estimation becomes extremely difficult. To reap the benefits of the NOMA and mm-Wave combination, effective channel estimation is required. In this paper, we propose an enhanced particle swarm optimization based long short-term memory estimator network (PSO-LSTMEstNet), a neural network model that can be employed to forecast the bandwidth required in the mm-Wave MIMO network. The prime advantage of the LSTM is its capability to adapt dynamically to the functioning pattern of a fluctuating channel state. The LSTM stage with adaptive coding and modulation enhances the BER. The PSO algorithm is employed to optimize the input weights of the LSTM network. The modified algorithm splits the power by the channel condition of every single user. Users are first sorted into distinct groups depending upon their respective channel conditions, using a hybrid beamforming approach. The network characteristics are fine-estimated using PSO-LSTMEstNet after a rough approximation of channel parameters derived from the received data.
Keywords
Signal to Noise Ratio (SNR), Bit Error Rate (BER), mm-Wave, MIMO, NOMA, deep learning, optimization.
Volume URL: https://airccse.org/journal/ijc2022.html
Abstract URL:https://aircconline.com/abstract/ijcnc/v14n5/14522cnc05.html
Pdf URL: https://aircconline.com/ijcnc/V14N5/14522cnc05.pdf
Here's where you can reach us : ijcnc@airccse.org or ijcnc@aircconline.com
June 2024 - Top 10 Read Articles in Computer Networks & CommunicationsIJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open-access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & Data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and on establishing new collaborations in these areas.
Enhanced Traffic Congestion Management with Fog Computing - A Simulation-Base...IJCNCJournal
Abstract: Accurate latency computation is essential for the Internet of Things (IoT) since the connected
devices generate a vast amount of data that is processed on cloud infrastructure. However, the cloud is not
an optimal solution. To overcome this issue, fog computing is used to enable processing at the edge while
still allowing communication with the cloud. Many applications rely on fog computing, including traffic
management. In this paper, an Intelligent Traffic Congestion Mitigation System (ITCMS) is proposed to
address traffic congestion in heavily populated smart cities. The proposed system is implemented using fog
computing and tested in the crowded city of Cairo. The results obtained indicate that the execution time of the
simulation is 4,538 seconds, and the delay in the application loop is 49.67 seconds. The paper addresses
various issues, including CPU usage, heap memory usage, throughput, and the total average delay, which
are essential for evaluating the performance of the ITCMS. Our system model is also compared with other
models to assess its performance. A comparison is made using two parameters, namely throughput and the
total average delay, between the ITCMS, IOV (Internet of Vehicle), and STL (Seasonal-Trend
Decomposition Procedure based on LOESS). Consequently, the results confirm that the proposed system
outperforms the others in terms of higher accuracy, lower latency, and improved traffic efficiency.
Call for Papers -International Journal of Computer Networks & Communications ...IJCNCJournal
International Journal of Computer Networks & Communications (IJCNC)
Citations, h-index, i10-index of IJCNC
---- Scopus, ERA Listed, WJCI Indexed ----
Scopus Cite Score 2022--1.8
https://airccse.org/journal/ijcnc.html
IJCNC is listed in ERA 2023 as per the Australian Research Council (ARC) Journal Ranking
Scope & Topics
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open-access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & Data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and on establishing new collaborations in these areas.
Authors are solicited to contribute to this journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the Computer Networks & Communications.
Topics of Interest
· Network Protocols & Wireless Networks
· Network Architectures
· High speed networks
· Routing, switching and addressing techniques
· Next Generation Internet
· Next Generation Web Architectures
· Network Operations & management
· Adhoc and sensor networks
· Internet and Web applications
· Ubiquitous networks
· Mobile networks & Wireless LAN
· Wireless Multimedia systems
· Wireless communications
· Heterogeneous wireless networks
· Measurement & Performance Analysis
· Peer to peer and overlay networks
· QoS and Resource Management
· Network Based applications
· Network Security
· Self-Organizing Networks and Networked Systems
· Optical Networking
· Mobile & Broadband Wireless Internet
· Recent trends & Developments in Computer Networks
Paper Submission
Authors are invited to submit papers for this journal through E-mail: ijcnc@airccse.org or through Submission System. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Important Dates
· Submission Deadline : June 22, 2024
· Notification : July 22, 2024
· Final Manuscript Due : July 29, 2024
· Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us: ijcnc@airccse.org or ijcnc@aircconline.com
For other details please visit - http://airccse.org/journal/ijcnc.html
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post...IJCNCJournal
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Blockchain Enforced Attribute based Access Control with ZKP for Healthcare Se...IJCNCJournal
The relationship between doctors and patients is reinforced through the expanded communication channels provided by remote healthcare services, resulting in heightened patient satisfaction and loyalty. Nonetheless, the growth of these services is hampered by the security and privacy challenges they confront. Additionally, patient electronic health record (EHR) information is dispersed across multiple hospitals in different formats, undermining data sovereignty: any service can assert authority over an EHR, effectively controlling its usage. This paper proposes blockchain-enforced attribute-based access control for healthcare services. To enhance privacy and data sovereignty, the proposed system employs attribute-based access control, zero-knowledge proofs (ZKP) and blockchain. The role of data within our system is pivotal in defining attributes, which in turn form the fundamental basis for the access control criteria. Blockchain is used to keep hospital information in a public chain but EHR-related data in a private chain. Furthermore, EHRs are protected by an attribute-based cryptosystem before they are stored in the blockchain. Analysis shows that the proposed system provides data sovereignty with privacy provision based on attribute-based access control.
EECRPSID: Energy-Efficient Cluster-Based Routing Protocol with a Secure Intru...IJCNCJournal
A revolutionary idea that has gained significance in technology for Internet of Things (IoT) networks backed by WSNs is the "Energy-Efficient Cluster-Based Routing Protocol with a Secure Intrusion Detection" (EECRPSID). The hardware foundation of a WSN-powered IoT infrastructure consists of devices with autonomous sensing capabilities. The significant features of the proposed technology are intelligent environment sensing, independent data collection, and information transfer to connected devices. However, hardware flaws and issues with energy consumption may be to blame for device failures in WSN-assisted IoT networks, which can potentially obstruct the transfer of data. A reliable route significantly reduces data retransmissions, which reduces traffic and conserves energy. The sensor hardware is often widely dispersed in IoT networks that employ WSNs, and data duplication could occur if numerous sensor devices are used to monitor a location. This issue is addressed by clustering, which lessens network traffic while retaining path dependability compared to the multipath technique. To relieve duplicate data in EECRPSID, we applied the clustering technique; the multipath strategy makes the proposed protocol more dependable. Using the EECRPSID algorithm reduces the overall energy consumption, minimizes the end-to-end delay to 0.14 s, achieves a 99.8% packet delivery ratio, and increases the network's lifespan. The NS2 simulator is used to run the whole set of simulations. The EECRPSID method has been implemented in NS2, and simulated results indicate that it improves the performance measures compared to the other three technologies.
Analysis and Evolution of SHA-1 Algorithm - Analytical TechniqueIJCNCJournal
A 160-bit (20-byte) hash value, sometimes called a message digest, is generated using the SHA-1 (Secure Hash Algorithm 1) hash function in cryptography. This value is commonly represented as 40 hexadecimal digits. It is a United States Federal Information Processing Standard and was developed by the National Security Agency. Although it has been cryptographically broken, the algorithm is still in widespread use. In this work, we conduct a detailed and practical analysis of the theoretical elements of the SHA-1 algorithm and show how they have been implemented through the use of several different hash configurations.
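The stated properties, a 160-bit digest written as 40 hexadecimal digits, can be verified directly against a standard library implementation; this snippet is only an illustration, not the paper's analysis code:

```python
import hashlib

# SHA-1 maps any input message to a 160-bit (20-byte) digest,
# conventionally written as 40 hexadecimal digits. The message
# "abc" is the classic FIPS 180 test vector.
digest = hashlib.sha1(b"abc").hexdigest()
```

The collision attacks that broke SHA-1 do not change these interface properties, which is partly why the function lingers in legacy protocols.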
Optimizing CNN-BiGRU Performance: Mish Activation and Comparative AnalysisIJCNCJournal
Deep learning is currently extensively employed across a range of research domains. The continuous advancements in deep learning techniques contribute to solving intricate challenges. Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data. By introducing non-linearities, AF empowers neural networks to model and adapt to the diverse and nuanced nature of real-world data, enhancing their ability to make accurate predictions across various tasks. In the context of intrusion detection, the Mish, a recent AF, was implemented in the CNN-BiGRU model, using three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. The comparison with Rectified Linear Unit (ReLU), a widely used AF, revealed that Mish outperforms ReLU, showcasing superior performance across the evaluated datasets. This study illuminates the effectiveness of AF in elevating the performance of intrusion detection systems.
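For reference, Mish is defined as mish(x) = x · tanh(softplus(x)); a minimal NumPy sketch (illustrative only, not the study's CNN-BiGRU implementation):

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)). Smooth and non-monotonic,
    unlike ReLU's hard zero cutoff for negative inputs."""
    softplus = np.logaddexp(0.0, x)    # numerically stable ln(1 + e^x)
    return x * np.tanh(softplus)
```

For large positive inputs Mish behaves like the identity (as ReLU does), while for negative inputs it decays smoothly to zero instead of clipping, which is the property the comparison with ReLU exercises.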
An Hybrid Framework OTFS-OFDM Based on Mobile Speed EstimationIJCNCJournal
Future wireless communication systems face the challenging task of simultaneously providing high quality of service (QoS) and broadband data transmission, while also minimizing power consumption, latency, and system complexity. Although Orthogonal Frequency Division Multiplexing (OFDM) has been widely adopted in 4G and 5G systems, it struggles to cope with significant delay and Doppler spread in high-mobility scenarios. To address these challenges, a novel waveform named Orthogonal Time Frequency Space (OTFS) has been proposed; its designers aim to outperform OFDM by closely aligning signals with the channel behaviour. In this paper, we propose a switching strategy that empowers operators to select the most appropriate waveform based on the estimated speed of the mobile user. This strategy enables the base station to dynamically choose the waveform that best suits the mobile user's speed. Additionally, we suggest retaining an Integrated Sensing and Communication (ISAC) radar approach for accurate Doppler estimation, which provides precise information to facilitate the waveform selection procedure. By leveraging the switching strategy and harnessing the Doppler estimation capabilities of an ISAC radar, our proposed approach aims to enhance the performance of wireless communication systems in high-mobility cases. Considering the complexity of waveform processing, we introduce an optimized hybrid system that combines OTFS and OFDM, resulting in reduced complexity while still retaining performance benefits. This hybrid system presents a promising solution for improving the performance of wireless communication systems at higher mobility. The simulation results validate the effectiveness of our approach, demonstrating its potential advantages for future wireless communication systems.
Enhanced Traffic Congestion Management with Fog Computing - A Simulation-Base...IJCNCJournal
Accurate latency computation is essential for the Internet of Things (IoT) since the connected devices generate a vast amount of data that is processed on cloud infrastructure. However, the cloud is not an optimal solution. To overcome this issue, fog computing is used to enable processing at the edge while still allowing communication with the cloud. Many applications rely on fog computing, including traffic management. In this paper, an Intelligent Traffic Congestion Mitigation System (ITCMS) is proposed to address traffic congestion in heavily populated smart cities. The proposed system is implemented using fog computing and tested in a crowdedCairo city. The results obtained indicate that the execution time of the simulation is 4,538 seconds, and the delay in the application loop is 49.67 seconds. The paper addresses various issues, including CPU usage, heap memory usage, throughput, and the total average delay, which are essential for evaluating the performance of the ITCMS. Our system model is also compared with other models to assess its performance. A comparison is made using two parameters, namely throughput and the total average delay, between the ITCMS, IOV (Internet of Vehicle), and STL (Seasonal-Trend Decomposition Procedure based on LOESS). Consequently, the results confirm that the proposed system outperforms the others in terms of higher accuracy, lower latency, and improved traffic efficiency.
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post... (IJCNCJournal)
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Vehicle Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance overall routing efficiency and reduce overhead. The analysis found that the current OLSR protocol suffers from problems such as redundant HELLO and TC message packets and failure to update routing information in time, so a WS-OLSR routing protocol based on an improved MPR selection algorithm is proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through those topology changes. Secondly, a new MPR selection algorithm is proposed that takes link stability and node properties into account. Finally, we evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MPR selection algorithm when compared to traditional approaches.
May 2024, Volume 16, Number 3 - The International Journal of Computer Network... (IJCNCJournal)
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of computer networks and communications. The journal focuses on all technical and practical aspects of computer networks and data communications. Its goal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ... (IJCNCJournal)
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type, and the fulfillment of application requirements such as reliability and timeliness. Nevertheless, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning in such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular-traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm... (IJCNCJournal)
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors with a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method is to formulate topology control as a nonlinear programming (NP) problem with the objective of optimizing two metrics: maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing the genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server User Authentication Scheme for Privacy Preservation with Fuzzy C... (IJCNCJournal)
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy-preserving access to resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, and claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial-of-service and privacy attacks, and reveals a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation Systems (IJCNCJournal)
In a Vehicle Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages between nearby vehicles at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent groups could use to monitor the vehicle over a long period. The acquisition of this shared data can facilitate the reconstruction of vehicle trajectories, posing a potential risk to the privacy of the driver. Addressing the critical challenge of developing effective and scalable privacy-preserving protocols for communication in vehicle networks is of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicates that APV delivers a clear improvement in privacy metrics, showing that it offers enhanced safety for vehicles during transportation in the smart city.
Exact network reconstruction from consensus signals and one eigenvalue
International Journal of Computer Networks & Communications (IJCNC) Vol.5, No.5, September 2013
DOI: 10.5121/ijcnc.2013.5507
EXACT NETWORK RECONSTRUCTION FROM CONSENSUS SIGNALS AND ONE EIGENVALUE
Enzo Fioriti, Stefano Chiesa, Fabio Fratichini
ENEA, CR Casaccia, UTTEI-ROB, Roma, Italy
vincenzo.fioriti@enea.it
ABSTRACT
The basic inverse problem in spectral graph theory consists in determining a graph given its eigenvalue spectrum. In this paper, we are interested in a network of technological agents whose graph is unknown, communicating by means of a consensus protocol. Recently, the use of artificial noise added to the consensus signals has been proposed to reconstruct the unknown graph, although errors are possible. On the other hand, some methodologies have been devised to estimate the eigenvalue spectrum, but noise could interfere with the elaborations. We combine these two techniques in order to simplify calculations and avoid topological reconstruction errors, using only one eigenvalue. Moreover, we use a high-frequency noise to reconstruct the network, so that it is easy to filter the control signals after the graph identification. Numerical simulations of several topologies show an exact and robust reconstruction of the graphs.
KEYWORDS
Networks, Graph, Topology, Eigenvalue spectrum
1. INTRODUCTION
Given a network of interacting agents, we describe a methodology able to reconstruct the graph exactly from time series and one eigenvalue of the graph spectrum (the inverse problem for technological networks), using recently developed signal analysis and algebraic graph theory techniques. There are many reasons to be aware of the graph topology. For example, if the graph is time-varying, knowledge of the topology is clearly of paramount relevance for network control, especially in the mobile sensor network case. The next generation of technological networks will be equipped with topology discovery methodologies.
Moreover, from a mathematical point of view, knowledge of the adjacency or Laplacian matrix of the network allows a relatively easy calculation (at least for small-to-medium-size graphs) of fundamental parameters. To name only a few: the second smallest Laplacian eigenvalue (the Fiedler or algebraic eigenvalue) and the maximum eigenvalue of the adjacency matrix are two well-known parameters relevant to connectivity, information exchange, and control; bottlenecks are easily detected, and the most influential nodes can be identified.
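These two spectral parameters are cheap to compute once the matrices are known. A minimal NumPy sketch, using a 4-node path graph as an arbitrary illustrative example (not a graph from the paper):

```python
import numpy as np

# Adjacency matrix of a 4-node path 0-1-2-3 (an arbitrary illustrative graph)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A

lap_eigs = np.linalg.eigvalsh(L)          # eigenvalues in ascending order
fiedler = lap_eigs[1]                     # second smallest: algebraic connectivity
lambda_max = np.linalg.eigvalsh(A)[-1]    # largest adjacency eigenvalue

print(fiedler, lambda_max)                # about 0.586 and 1.618 for this path
```

A strictly positive Fiedler value certifies that the graph is connected, which is the standing assumption behind the consensus protocol considered in this paper.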
However, a fundamental obstacle to the solution of this inverse problem is the absence of a one-to-one correspondence between a graph and its spectrum: different graphs can have the same spectrum [1]. Although Camellas [2] has recently obtained some useful results with combinatorial optimization algorithms for the main topological parameters (diameter, average distance, degree distribution, clustering), complete graph identification remains unsolved. As a theoretical
solution seems hard to find, we propose to take advantage of specific characteristics of technological networks to reduce the complexity of the inverse problem. In particular, if the graph variability is associated with agent mobility, a common control algorithm is the consensus protocol [4]. The consensus protocol is widely implemented because it is a distributed calculation, i.e., each agent is required to communicate only with its closest neighbours. In this paper, we consider the consensus as the main control tool of the network, though many different situations are equally possible, and, following [3], we use an artificially added noise for a basic estimation of the topology. Moreover, we take advantage of the (approximate) knowledge of one eigenvalue of the graph spectrum, say the maximum eigenvalue of the Laplacian matrix λN, to eliminate the errors. Some distributed procedures are already available [8, 10] to estimate the eigenvalue spectrum, as explained in the following paragraphs. Nevertheless, even complete knowledge of the spectrum does not suffice to recover the graph, so more information is needed. This information may be provided by the consensus time series in Ren's framework.
In some cases these procedures are not mandatory, as the spectrum may be partially known in advance; therefore, here we consider λN derived by [8, 10] as a parameter of the problem affected by error, and will not include it in the simulations explicitly.
Finally, it should be noted that the consensus protocol is often necessary to technological networks, so no further calculation burden is imposed on the control system. Since the spectrum may also be calculated locally, we have two distributed elaborations, while the final calculation is a centralized one, because it requires collecting data from all the nodes and sending the results back to every node.
2. METHODS
Recently, a method to recover the Laplacian matrix L of a network of dynamically coupled systems has been given by Ren [3]. Starting from the general form of the i-th differential system:

xi' = Fi(xi) ,  i = 1, ... N,

and adding couplings and noise, we have:

xi' = Fi(xi) − c Σj Lij H(xj) + ηi ,  i, j = 1, ... N,   (1)

where c is the coupling coefficient (here c = 1), H is the coupling function, x the state variables, η the white Gaussian noise with strength σ², and Lij are the entries of the Laplacian matrix L derived from the undirected graph of the systems. Vectors and matrices are in bold. The Laplacian matrix is defined as

L = D − A

where D is a diagonal matrix formed by the node degrees and A is the adjacency matrix (1 if a link i-j exists, 0 otherwise) of the graph. Therefore we are considering a graph whose nodes are technological agents, each represented by a differential equation coupled to other nodes through communication links (the edges of the graph) according to an unknown topology.
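As a concrete illustration of the definition L = D − A, the following NumPy sketch builds the Laplacian of a 3-node triangle graph (the graph is an arbitrary example, not one from the paper):

```python
import numpy as np

# Adjacency matrix of a 3-node triangle: every pair of nodes is linked
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

D = np.diag(A.sum(axis=1))  # degree matrix: node degrees on the diagonal
L = D - A                   # the graph Laplacian

print(L)
# every row of L sums to zero, a defining property of the Laplacian
```

Each diagonal entry of L is a node degree (here 2), and each off-diagonal entry is −1 exactly where a link exists and 0 elsewhere.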
The very interesting point in (1) is that the artificially added noise enables the solution of the inverse problem: given the time series, reconstruct the graph. We focus on the standard consensus form of (1):
xi' = Σj aij (xj − xi) + ρi ,  j = 1, ... N   (2)

where aij are the entries of the adjacency matrix A. Differently from [3], we consider a high-frequency (HF) noise ρ instead of the white noise η; moreover, the noise strength is decreased, because in a real environment transmitting a signal requires energy.
It is known that for a connected network the equilibrium point of (2) is globally exponentially stable and the consensus value is equal to the average of the initial values, so solutions do exist. In compact form, (2) is written:

x' = − Lx + ρ

Expression (2) and similar ones are utilized to coordinate the states of the agents on a common position/velocity agreement resilient to disturbances [6, 7, 9, 14, 15, 16].
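The compact form x' = −Lx + ρ can be sketched numerically. The 4-node ring topology, the forward-Euler step, and the fast sinusoid standing in for the HF noise are all illustrative assumptions, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Laplacian of a 4-node ring (an illustrative topology)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

N, dt, steps = 4, 0.01, 5000
sigma = 0.05                      # weak noise, as energy saving would require
x = rng.uniform(0, 1, N)          # random initial states
x0_mean = x.mean()

for t in range(steps):
    # a fast zero-mean sinusoid stands in for the high-frequency noise rho
    rho = sigma * np.sin(2 * np.pi * 50 * t * dt + 2 * np.pi * np.arange(N) / N)
    x = x + dt * (-L @ x + rho)   # forward-Euler step of x' = -L x + rho

# the states converge to consensus near the average of the initial values
print(x.round(4), round(x0_mean, 4))
```

Because every row and column of L sums to zero, the network average is preserved at each step, and the residual HF oscillation is tiny (amplitude roughly σ/ω), which is what makes this noise easy to filter out after the identification.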
Now we give the main points of Ren's mathematical procedure to derive the expression of the
dynamical correlation matrix that solves the inverse problem. Details can be found in [3, 13]. Let
us consider an undirected graph of the dynamical system (1); using small perturbations and
substituting the original variables

x°_i = x_i + ξ_i ,  i = 1, 2, ..., N

we obtain

ξ' = [ JF(x°) - L^ ⊗ JH(x°) ] ξ + ρ

where L^ is the Laplacian coupling matrix and JF, JH are the Jacobian matrices of the node
dynamics and of the coupling functions. The dynamical correlation among the consensus signals is

C^ = ⟨ξ ξ^T⟩

After some calculations we obtain

A^ C^ + C^ A^T = ⟨ρ ξ^T⟩ + ⟨ξ ρ^T⟩ = σ² I    (3)

with A^ = - JF(x°) + L^ ⊗ JH(x°). For the consensus dynamics, (3) simplifies to

L^ C^ + C^ L^T = σ² I

and therefore

C^ ≈ L⁺ (σ²/2)    (4)

where C^ is the dynamical correlation matrix between the time series of node i and node j, ⁺
denotes the Moore-Penrose pseudoinverse, and σ² is the noise power. Note that (3) requires the
knowledge of all the time series to compute C^, hence the reconstruction is centralized. The authors
of [3] find a one-to-one correspondence between the correlation matrix of the node time series
and the Laplacian matrix. This remarkable, counterintuitive finding allows one to set a threshold
on the entries recovered via (4): entries below the threshold are taken as -1 (a link is present),
entries above it as 0 (no link), so the Laplacian matrix, and consequently the adjacency matrix, is
reconstructed. The threshold procedure is not immediate to implement; nevertheless, a very good
success rate is claimed in [3]. Although no physical explanation of the phenomenon is offered,
simulations show good results; some errors, however, are reported to remain.
Note that Ren's algorithm is more accurate when the average degree is large, but the energy-saving
requirements of the signal transmission apparatus demand that the average degree (i.e. the number of
communication links) be kept as low as possible. As a consequence, in a real environment the
estimate of C^ may not be exact.

At the same time, the consensus signals are also needed to control the network, and in this respect
noise is a disturbance to be kept as small as possible. Bearing these considerations in mind, we
suggest that each node transmit the consensus signal with HF, low-power noise added, and low-pass
filter the noisy signal it receives.
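The correlation-based reconstruction can be sketched in numpy as follows (a simplified illustration of relation (4); the threshold value and the rcond cutoff are assumptions, and the exact procedure of [3] is more elaborate):

```python
import numpy as np

def reconstruct_adjacency(xi, sigma2, threshold=-0.5):
    """Estimate the adjacency matrix from the fluctuations xi
    (a T x N array of noisy consensus time series).

    By relation (4), C ~ (sigma2/2) L+, so the Laplacian is estimated
    as L_est = (sigma2/2) pinv(C); off-diagonal entries of a Laplacian
    are -1 (link) or 0 (no link), hence the threshold (assumed -0.5)."""
    C = np.cov(xi, rowvar=False)       # dynamical correlation matrix
    # rcond discards the near-zero eigenvalue along the consensus direction
    L_est = (sigma2 / 2.0) * np.linalg.pinv(C, rcond=1e-8)
    A_est = (L_est < threshold).astype(int)
    np.fill_diagonal(A_est, 0)
    return A_est
```

In practice xi would be measured; in a self-test one can draw synthetic fluctuations with the covariance predicted by (4) and check that the original graph is recovered.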
2.1 The Spectral Estimation
To reduce or eliminate the residual error in the graph reconstruction we need extra information.
To this end, the knowledge of at least some eigenvalues of the Laplacian spectrum is of relevant
help. In some cases the graph is fixed and no topological variations occur, so the desired spectrum
is known and only a periodic verification is required; usually, however, the graph changes
frequently and demands an on-line check.
The spectral reconstruction has been studied in [8, 10]. Franceschelli et al. [8] compute a
distributed Fast Fourier Transform (FFT) of signals generated by a suitable distributed protocol
at each node i:

x_i' = z_i + Σ_j (z_i - z_j)
z_i' = - x_i - Σ_j (x_i - x_j)

with j ∈ N_i (the neighbour nodes at one hop of distance from node i). The state trajectory is then
a linear combination of sinusoids oscillating only at frequencies that are functions of the
eigenvalues λ_j of the Laplacian matrix, and the amplitudes of the peaks in the spectrogram are
related to the eigenvalues:

|F(x_i(t))| = (1/2) Σ_j a_ij δ( f ± (1 + λ_j)/(2π) )
|F(z_i(t))| = (1/2) Σ_j b_ij δ( f ± (1 + λ_j)/(2π) )
This method has some drawbacks [11]: the multiplicities of the eigenvalues cannot be determined
and the FFT suffers from the presence of noise. Remember that, independently of Ren's
procedure, communications are always polluted by several sources of noise.
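The mechanism behind the protocol above can be illustrated with a small numpy sketch (a centralized stand-in for the distributed method of [8]; the graph, integration parameters and peak-detection rule are illustrative assumptions):

```python
import numpy as np

def laplacian_eigs_via_fft(L, dt=0.05, T=4096):
    """Recover candidate Laplacian eigenvalues from the oscillation
    frequencies of the protocol x' = (I+L)z, z' = -(I+L)x.

    With B = I + L, the exact solution is
    x(t) = cos(Bt) x0 + sin(Bt) z0, so |FFT(x_i)| has peaks at
    f = (1 + lambda_j) / (2*pi)."""
    N = L.shape[0]
    B = np.eye(N) + L
    w, V = np.linalg.eigh(B)                  # B is symmetric
    t = dt * np.arange(T)
    # equal excitation of every oscillation mode (illustrative choice)
    modes = np.cos(np.outer(w, t)) + np.sin(np.outer(w, t))
    x = V @ modes                             # N x T state trajectories
    spec = np.abs(np.fft.rfft(np.hanning(T) * x[0]))
    freqs = np.fft.rfftfreq(T, dt)
    # local maxima above 5% of the strongest peak
    peaks = [k for k in range(1, len(spec) - 1)
             if spec[k - 1] < spec[k] > spec[k + 1]
             and spec[k] > 0.05 * spec.max()]
    return 2.0 * np.pi * freqs[peaks] - 1.0   # candidate eigenvalues
```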
On the other hand, [10] provides an estimation of the Laplacian spectrum based on matrix power
iteration, but in this way only an approximate solution can be obtained. We therefore conclude that
the methodologies of [8, 10] and similar ones are not completely reliable.

Finally, it is worth noting that even if an exact spectrum reconstruction were available, it is not
clear today whether, even in theory, a one-to-one relation with the adjacency matrix can be found
[12]. Alternative combinatorial optimization techniques such as tabu search or simulated annealing
are not exact, and some of them would in any case require a long computation time.
2.2 Error Reduction
Now let us assume that the largest Laplacian eigenvalue λ_N is available by means of one of the
previously described methods. It is natural to use it as a simple cost function, instead of the
threshold procedure, to determine the non-null entries of the adjacency matrix recovered via (3).
In our methodology the matrix C^ is therefore calculated from the noisy consensus time series and
normalized. Then, starting from a convenient initial value, a candidate adjacency matrix A is
produced using (4), its largest Laplacian eigenvalue λ*_N is calculated and compared with the
supposed actual eigenvalue λ_N:

min g(λ) = | λ_N - λ*_N |

When the cost function reaches the minimum

g(λ_N) = 0    (5)

the actual matrix A is reconstructed (the best results have been obtained with the largest
eigenvalue, although other eigenvalues may be used). Figures 1a, b, c show how the zero
estimation error of the eigenvalue is reached jointly with the complete reconstruction of the
adjacency matrix. The ordinate reports the reconstruction error of the eigenvalue and of the
adjacency matrix: when the two vertical lines coincide, the reconstruction is complete.
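This selection rule can be sketched as follows (a hypothetical numpy helper: the sweep over candidate thresholds applied to an estimated Laplacian L_est is an illustrative way of generating candidate adjacency matrices):

```python
import numpy as np

def best_adjacency(L_est, lam_N, taus=np.linspace(-1.0, 0.0, 101)):
    """Sweep a threshold tau over the entries of the estimated
    Laplacian L_est; entries below tau are taken as links.

    The candidate whose largest Laplacian eigenvalue best matches the
    known lam_N is returned, i.e. the cost g = |lam_N - lam*_N| of
    equation (5) is minimized."""
    best, best_cost = None, np.inf
    for tau in taus:
        A = (L_est < tau).astype(float)
        np.fill_diagonal(A, 0.0)
        A = np.maximum(A, A.T)             # keep the graph undirected
        Lc = np.diag(A.sum(axis=1)) - A
        cost = abs(lam_N - np.linalg.eigvalsh(Lc)[-1])
        if cost < best_cost:
            best, best_cost = A, cost
    return best, best_cost
```

With a noisy Laplacian estimate, the threshold that reproduces the known eigenvalue also reproduces the correct adjacency matrix, which is the behaviour reported in Figure 1a.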
Figure 1a: Small-World graph, 100 nodes (abscissa: time steps, ordinate: errors). Black dotted curve: actual
error percentage of the adjacency matrix entries (not including diagonal and symmetric elements);
continuous blue curve: the eigenvalue absolute error | λ_N - λ*_N |. Vertical red and green lines (the green
line is not visible because it coincides with the red one): exact reconstruction according to (4). The
minimum of the continuous blue curve indicates the correct topology reconstruction, i.e. zero errors.

Figure 1b: In this case two entries are wrong and the minimum indicated by the largest eigenvalue (green
dotted line) no longer coincides with the actual zero reconstruction error (dotted red line); see also
Table 2.
Figure 1c: Enlargement of the minimum area of Figure 1a.
If errors were present in the estimation of the largest Laplacian eigenvalue, the exact
reconstruction as in Figure 1a would still be possible for a low to moderate amount of error,
although the acceptable error in the eigenvalue estimation grows quickly with the network size
(see Table 2).
2.3 Noise Addition
For the methodology to work, noise must be added to the consensus protocol. As pointed out
before, in a real environment a background of natural or artificial noise is already present; the
noise level is then further increased. This does not undermine the methodology, provided the
strength of the added noise is large enough.

In order to save energy and allow the consensus signals to produce a proper control action, we
add a high frequency (HF), low amplitude, zero mean, unit variance Gaussian noise to (1). The
noise strength in the simulations is σ² = 0.01, one order of magnitude smaller than in [3].
Figure 2b shows the power spectral density (psd) of the HF noise and of the noisy signal
(frequencies are normalized). Figures 2a and 2c show a noisy consensus signal and the same
signal after the low-pass filtering performed at the receiving node. Apart from the delay due to
the low-pass filter, the original signal is recovered (Figure 2c).
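The transmit/receive scheme can be sketched as follows (the carrier frequency, the way the HF noise is generated and the filter length are illustrative assumptions; the paper does not specify them):

```python
import numpy as np

def add_hf_noise(signal, dt, f_carrier=40.0, sigma2=0.01, seed=0):
    """Add zero-mean Gaussian noise shifted to a high-frequency band
    by modulating it with a carrier at f_carrier (Hz)."""
    rng = np.random.default_rng(seed)
    t = dt * np.arange(len(signal))
    noise = rng.normal(0.0, np.sqrt(sigma2), len(signal))
    return signal + noise * np.cos(2 * np.pi * f_carrier * t)

def low_pass(signal, taps=21):
    """Moving-average FIR low-pass filter (introduces a small delay
    and edge effects at the two ends of the record)."""
    kernel = np.ones(taps) / taps
    return np.convolve(signal, kernel, mode="same")

dt = 0.01
t = dt * np.arange(1000)
clean = np.exp(-t)              # a consensus-like decaying signal
noisy = add_hf_noise(clean, dt)
recovered = low_pass(noisy)
# Away from the record edges, the filtered signal tracks the
# original closely while the HF noise is strongly attenuated.
```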
Figure 2a: A consensus signal with HF noise in the time domain.

Figure 2b: Power spectral density (psd) of the added artificial HF noise; left: psd of the original
consensus signal with the noise added.

Figure 2c: Noisy consensus solutions for a Small-World topology (N = 24, average degree 4) before the
low-pass filtering. Left: consensus time series after the low-pass filtering. The red dotted curve is the
original (noise-free) time series.
2.4. Numerical Simulations

Numerical simulations have been conducted to validate the methodology; the results are shown in
Table 1. The task is to recover exactly all of the (N² - N)/2 significant entries of the adjacency
matrix A of the graph.

Four types of topology have been considered: Erdos-Renyi (random, p = 0.01), Small-World
(average degree: 4, p = 0.1), pipeline (average degree: 4) and grid (average degree: 4), with N = 24;
see Figure 3. All time series have a length of 150 simulation time steps (1500 samples) for N = 24;
the first 30 samples have been discarded because the transient impairs the calculations. As the
size (in nodes) increases, longer time series are needed. As an example, when the node size of a
Small-World (SW) graph is 100, about 370 simulation steps are needed to recover the graph.
Note that a higher noise level reduces the number of steps but increases the energy dissipation; the
trade-off should be analysed on an ad hoc basis. Table 2 shows the results for a large and a
small SW graph in the presence of errors in the estimation of the largest Laplacian eigenvalue,
obtained by the methods of [10] or [8].
The acceptable error in the estimation of the largest eigenvalue (meaning that the number of
mistaken entries of A is still zero) increases as N increases. For example, for N = 100 a 3.22%
estimation error means that the real value λ_N = 4.0375 can be altered by as much as ±0.13 while
the reconstruction of the matrix A remains exact. In the simulations, the centralized elaborations
are represented by the computation of the inverse of the correlation matrix C^ among all the time
series received from the N nodes.
For each configuration a complete reconstruction (zero errors) has been achieved; see Table 1. In
particular, Small-World networks are very interesting, as pointed out in [5], because of their high
consensus speed and connectedness properties, although the noise addition slows down the
consensus computations.
Table 1: Simulation results

Graph topology | Error | Nodes | Links | Integration steps
---------------|-------|-------|-------|------------------
Erdos-Renyi    |   0   |  48   |  16   | ~150
Small-World    |   0   |  24   |  48   | ~150
Small-World    |   0   | 100   | 200   | ~370
Pipeline       |   0   |  24   |  43   | ~150
Grid           |   0   |  24   |  38   | ~150
Grid           |   0   | 100   | 180   | ~370
Since we have conducted the simulations with a high-level language and non-optimized code, the
actual computation time is not significant. In a real-world application, optimized C or Java code
or a Digital Signal Processor should be used, reducing the actual computation time.

The Small-World consensus scheme seems to be the fastest even for a low number of nodes. This
is interesting because, although it is known [5] that convergence is very fast when a Small-World
graph has N > 100 nodes, for N = 24, as in our case, there was no previous guarantee.
Figure 3a: Left to right, graph topologies: regular grid, pipeline. Each node is an agent; links are wireless
communication channels. The grid topology is the most regular; the SW is half-way between regularity and
randomness.

Figure 3b: Left to right, graph topologies: Small-World and random. Note the disconnected nodes of the
ER random topology.
Table 2: Stability of solutions for the SW topology

Graph topology | Mistaken entries | Nodes | Overall entries of A | Acceptable error in the λ_N estimation
---------------|------------------|-------|----------------------|---------------------------------------
SW             |        0         |  100  |         4950         | 3.22%
SW             |        2         |  100  |         4950         | 7%
SW             |        0         |   24  |          276         | 0.22%
SW             |        2         |   24  |          276         | 15.2%
2.5 Effects of Delays on the Reconstruction

Reference [13] has extended Ren's method to the quasi-uniform delay case. While the integration of
stochastic delayed differential equations is not simple, from a theoretical point of view the
procedure is similar to the zero-delay case. Therefore, it is conceivable that the framework
discussed here is also suited to the delayed case. It should be mentioned that threshold
effects [17] may also influence the delays and have to be considered carefully.

Apart from these theoretical considerations, our methodology finds application in a number of real-
world technologies, such as submarine robot swarms and mobile ad hoc networks [18], especially
when connectivity is a major concern [19].
3. CONCLUSIONS

One of the most important and still unsolved problems in graph theory is the reconstruction of the
topology of technological networks. In practice, position sensors are often inaccurate or unable to
work properly, and the graph may change suddenly. At the cost of a semi-centralized elaboration of
the consensus time series, we have shown how a complete topology reconstruction can be achieved.
The methodology envisages the reconstruction of the graph using the noisy signals of the consensus
protocol. When received, the signals are correlated and the resulting correlation matrix is
elaborated according to a simple relation to obtain the Laplacian matrix. Since the largest
eigenvalue of the Laplacian matrix can be estimated independently, although not exactly, it is
possible to compute a cost function of the reconstruction. This information allows one to
determine the adjacency matrix with zero or minimum error. The original consensus signals
necessary for the control are recovered by low-pass filtering, as the noise is allocated in a
relatively high frequency band.
ACKNOWLEDGEMENTS
This work has been supported by the HARNESS Project, funded by the Italian Institute of
Technology (IIT).
REFERENCES
[1] Botti, P. Merris, R., (1993) “Almost all trees share a complete set of immanantal polynomials”,
Journal of graph theory, Vol. 17, p. 468.
[2] Comellas, F., Diaz-Lopez, J., (2012) “Spectral reconstruction of networks using combinatorial
optimization algorithms”, http://upcommons.upc.edu/e_prints/bitstream/2117/1037/1/CoDi07-
LapSpRecons.pdf.
[3] Ren, J., Wang, W.-X., Li, B., and Lai, Y.-C., (2010) “Noise Bridges Dynamical Correlation and
Topology in Coupled Oscillator Networks”, Phys. Rev. Lett., Vol. 104, pp. 058701.
[4] Olfati-Saber, R., and Murray, R. M., (2004) “Consensus Problem in Networks of Agents with
Switching Topology and Time-Delays”, IEEE Trans. Aut. Contr., Vol. 49, No. 9, pp. 1520-1531.
[5] Olfati-Saber, R., (2005) “Ultrafast Consensus”, Proceedings of the American Control Conference,
IEEE, Los Alamitos, CA, pp. 2371-2378.
[6] Olfati-Saber, R., Fax, J., Murray, R. M., (2007) “Consensus and Cooperation in Networked Multi-
Agent Systems”, Proceedings of the IEEE, Vol. 95, No. 1.
[7] Dousse, O., Baccelli, F., and Thiran, P., (2005) “Impact of Interferences on Connectivity in Ad Hoc
Networks”, IEEE ACM Trans. Networking, Vol. 13, No. 2, pp. 425.
[8] Franceschelli, M., Gasparri, A., Giua, A., Seatzu, C., (2012) “Decentralized Estimation of Laplacian
Eigenvalues in Multi-Agent Systems”, http://arxiv.org/pdf/1206.4509.pdf.
[9] Cucker, F. and Smale, S., (2007) “On the mathematics of emergence”, Jpn. J. Math., Vol. 2, No. 1,
pp. 197-227.
[10] Yang, P., Freeman, R. A., Gordon, G., Lynch, K., Srinivasa, S., and Sukthankar, R, (2008)
“Decentralized estimation and control of graph connectivity in mobile sensor networks”, American
Control Conference.
[11] Van Dam, E., and Haemers, W., (2003) “Which graphs are determined by their spectrum?”, Linear
Algebra and its Applications, Vol. 373, pp.241–272.
[12] Kibangou, A., Commault, C., (2012) “Decentralized Laplacian Eigenvalues Estimation and
Collaborative Network Topology Identification”, 3rd IFAC Workshop on Distributed Estimation and
Control in Networked Systems, NecSys '12, Santa Barbara.
[13] Wang, W. X., Ren, J., Lai, Y. C., Li, B., (2012) “Reverse engineering of complex dynamic networks
in the presence of time-delayed interactions based on time series”, Chaos, Vol. 22, pp. 033131.
[14] Traverso, F., Trucco, A. , Vernazza, G., (2012) Simulation of non-White and non-Gaussian
Underwater Ambient Noise, OCEANS’12 MTS/IEEE Yeosu Conference, Yeosu, South Korea.
[15] Bullo, F., Cortes, J., and Martínez, S., (2009) “Distributed Control of Robotic Networks”, Applied
Mathematics Series, Princeton University Press.
[16] Xi, J., Shi, Z., Zhong, Y., (2012), “Consensus and consensualization of high-order swarm systems
with time delays and external disturbances”, J. Dyn. Sys. Meas. , Vol. 134, No. 4.
[17] Dell’Isola, F. and Romano, A., (1987) “A phenomenological approach to phase transition in classical
field theory”, International Journal of Engineering Science, Vol. 25, pp. 1469-1475.
[18] Meghanathan, N. and Terrel, M., (2012) “A simulation study on strong neighborhood stable
connected dominating set for mobile ad hoc networks”, International Journal of Computer Networks
& Communications, Vol. 4, No. 4, pp. 1-19.
[19] Mathews, E. and Mathew, C., (2012) “Deployment of mobile routers ensuring coverage and
connectivity ”, International Journal of Computer Networks & Communications, Vol.4, No.1, pp 175-
191.
AUTHORS
Enzo Fioriti graduated in Automatic Control Engineering at La Sapienza Rome University.
He has worked in the ICT private sector and as a researcher in Complex Systems, Neural
Networks, Critical Infrastructures and Graph Theory at the ENEA CR Casaccia. Currently he is
interested in underwater robot swarms.
Stefano Chiesa received his bachelor's and master's degrees in Computer Science from
Roma 3 University, Rome, in 2006 and 2010 respectively. He is currently working at ENEA
(Italian national agency for new technologies, energy and sustainable economic development)
as a research fellow. His research interests include swarm robotics, formation control, computer
vision and visual odometry.
Fabio Fratichini graduated in Electronic Engineering at La Sapienza Rome University. He
has worked as a researcher at the ENEA (National agency for new technologies, energy and
sustainable economic development) research centre, in robotics, sensor data fusion and
Kalman filtering. Currently he is interested in the localization of underwater robot swarms
within the HARNESS Project.