Several methods have been proposed to approximate the sum of correlated lognormal RVs. However, the accuracy of each method depends strongly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., the mean and variance. No single method provides the needed accuracy in all cases. This paper proposes a universal yet very simple approximation method for the sum of correlated lognormals based on a log skew normal approximation. The main contribution of this work is an analytical method for estimating the log skew normal parameters. The proposed method provides a highly accurate approximation to the sum of correlated lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB in all cases.
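As an illustrative sketch (not the paper's log skew normal fit), the setting above can be reproduced by Monte Carlo: draw correlated Gaussians, exponentiate, and sum. The parameter values below are arbitrary; one useful sanity check is that the mean of the sum is exact regardless of the correlation coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated lognormal RVs: X_i = exp(Y_i), with Y jointly Gaussian.
mu = np.array([0.0, 0.0])
sigma = np.array([1.0, 1.0])           # natural-log spreads (dB spread = sigma * 10/ln 10)
rho = 0.5                              # correlation coefficient of Y1, Y2
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

Y = rng.multivariate_normal(mu, cov, size=200_000)
S = np.exp(Y).sum(axis=1)              # samples of the sum of correlated lognormals

# E[S] = sum_i exp(mu_i + sigma_i^2 / 2), independent of rho.
mean_exact = np.exp(mu + sigma**2 / 2).sum()
```

The empirical distribution of `S` is what an approximation such as the log skew normal would be fitted to.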
A Novel Space-time Discontinuous Galerkin Method for Solving of One-dimension... (TELKOMNIKA JOURNAL)
In this paper we propose a high-order space-time discontinuous Galerkin (STDG) method for solving one-dimensional electromagnetic wave propagation in a homogeneous medium. The STDG method uses finite element discontinuous Galerkin discretizations in the spatial and temporal domains simultaneously, with high-order piecewise Jacobi polynomials as the basis functions. The algebraic equations are solved iteratively using block Gauss-Seidel in each time step. The STDG method is unconditionally stable, so the CFL number can be chosen arbitrarily. Numerical examples show that the proposed STDG method is exponentially accurate in time.
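The block Gauss-Seidel iteration named in the abstract can be sketched on a generic block system (not the STDG matrices themselves): each outer sweep solves one diagonal block exactly while treating the other blocks' unknowns as fixed. The test system below is an arbitrary diagonally dominant example.

```python
import numpy as np

def block_gauss_seidel(blocks, b, n_iter=200):
    """Solve A x = b, with A given as a 2D grid of square blocks.

    blocks[i][j] is the (i, j) block of A; b is a list of block vectors.
    A sketch of the iterative solver named in the abstract.
    """
    n = len(blocks)
    x = [np.zeros_like(bi) for bi in b]
    for _ in range(n_iter):
        for i in range(n):
            # residual of block row i, holding the other block unknowns fixed
            r = b[i] - sum(blocks[i][j] @ x[j] for j in range(n) if j != i)
            x[i] = np.linalg.solve(blocks[i][i], r)   # exact solve of the diagonal block
    return np.concatenate(x)

# Small diagonally dominant 2x2-block test system (illustrative values).
A11 = np.array([[4.0, 1.0], [1.0, 4.0]])
A22 = np.array([[5.0, 0.0], [0.0, 5.0]])
A12 = np.array([[1.0, 0.0], [0.0, 1.0]])
A21 = np.array([[0.0, 1.0], [1.0, 0.0]])
blocks = [[A11, A12], [A21, A22]]
b = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
x = block_gauss_seidel(blocks, b)
```

For diagonally dominant systems the sweep converges; in the STDG setting one sweep would be run per time step.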
Time of arrival based localization in wireless sensor networks: a non-linear ... (sipij)
In this paper, we aim to obtain the location information of a sensor node deployed in a Wireless Sensor Network (WSN). A Time of Arrival based localization technique is considered. We calculate the position of an unknown sensor node using non-linear techniques and compare their performance against the Cramer-Rao Lower Bound (CRLB). Non-linear Least Squares and Maximum Likelihood are the non-linear techniques used to estimate the position of the unknown sensor node. Each is implemented with iterative approaches, namely the Newton-Raphson, Gauss-Newton, and Steepest Descent estimates, for comparison. The approaches are compared on the basis of simulation results. The simulation study shows that localization based on the Maximum Likelihood approach achieves the highest localization accuracy.
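One of the iterative estimators mentioned above, Gauss-Newton, can be sketched for the noiseless TOA case as follows; the anchor layout and propagation speed here are made-up illustrative values, not the paper's setup.

```python
import numpy as np

def gauss_newton_toa(anchors, toa, c=1.0, n_iter=50):
    """Gauss-Newton estimate of a node position from time-of-arrival measurements.

    anchors: (k, 2) known positions; toa: arrival times, so range r_i = c * toa_i.
    An illustrative sketch of one of the estimators compared in the abstract.
    """
    r = c * np.asarray(toa)
    x = anchors.mean(axis=0)                    # start from the anchor centroid
    for _ in range(n_iter):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]          # Jacobian of the ranges w.r.t. x
        res = d - r                             # range residuals
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        x = x - step
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.0])
toa = np.linalg.norm(anchors - true_pos, axis=1)   # noiseless times with c = 1
est = gauss_newton_toa(anchors, toa)
```

With noisy TOA the same loop yields the non-linear least-squares estimate whose error is compared against the CRLB.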
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
The chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique used to process a tensor matrix and extract from it information that can be used directly in its visualization; this analysis forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of the principal component analysis can be visualized using simple color-mapping techniques. The next parts of the chapter explain how the same data can be visualized using tensor glyphs and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets and thus offers more extensive and easier-to-use options for fiber tracking.
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Ch... (sipij)
We consider an experimental design problem in which there are two Poisson sources with two possible and known rates, and one counter. Through a switch, the counter can observe the sources individually, or the counts can be combined so that the counter observes the sum of the two. The sensor scheduling problem is to determine the optimal proportion of the available time to allocate to individual and joint sensing, under a total time constraint. Two different metrics are used for optimization: the mutual information between the sources and the observed counts, and the probability of detection for the associated source detection problem. Our results, which are primarily computational, indicate similar but not identical behaviour under the two cost functions.
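The mutual-information metric above can be illustrated for the simplest case of a binary source observed through a Poisson channel; the rates and times below are made-up values, not the paper's. Longer observation time separates the two rate hypotheses better, so the information grows with the allocated time.

```python
import math

def mi_binary_poisson(lam0, lam1, t, y_max=100):
    """I(X; Y) in nats for X ~ Bernoulli(1/2) and Y | X = x ~ Poisson(lam_x * t)."""
    def pois(y, rate):
        # log-domain Poisson pmf to avoid factorial overflow
        return math.exp(-rate + y * math.log(rate) - math.lgamma(y + 1))
    mi = 0.0
    for y in range(y_max):
        p0, p1 = pois(y, lam0 * t), pois(y, lam1 * t)
        py = 0.5 * (p0 + p1)                       # marginal of the counts
        for p_cond in (p0, p1):
            if p_cond > 0.0 and py > 0.0:
                mi += 0.5 * p_cond * math.log(p_cond / py)
    return mi

i_short = mi_binary_poisson(lam0=1.0, lam1=2.0, t=1.0)
i_long = mi_binary_poisson(lam0=1.0, lam1=2.0, t=5.0)
```

The scheduling problem then trades such per-observation information against the total time budget.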
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
In this chapter the author discusses a number of popular visualization methods for vector datasets: vector glyphs, vector color-coding, displacement plots, stream objects, texture-based vector visualization, and the simplified representation of vector fields.
Section 6.5 presents stream objects, which use integral techniques to construct paths in vector fields. Section 6.7 discusses a number of strategies for the simplified representation of vector datasets. Section 6.8 presents a number of illustrative visualization techniques for vector fields, which offer an alternative mechanism for simplified representation to the techniques discussed in Section 6.7. The chapter also presents feature detection methods, an algorithm for computing separatrices of a field's topology, and top-down and bottom-up field decomposition methods.
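The integral technique behind stream objects can be sketched in a few lines: a streamline is traced by stepping along the field direction from a seed point. This is a minimal Euler-integration sketch over an analytic circular field, not the book's implementation.

```python
import numpy as np

def trace_streamline(v, seed, dt=0.01, n_steps=500):
    """Trace a streamline by Euler integration of dx/dt = v(x).

    v: callable returning the 2D vector at a point; seed: starting point.
    """
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        vel = v(pts[-1])
        speed = np.linalg.norm(vel)
        if speed < 1e-12:                      # stop at critical points
            break
        pts.append(pts[-1] + dt * vel / speed) # normalized: uniform arc-length steps
    return np.array(pts)

# Circular field v = (-y, x): streamlines are circles around the origin.
circle = trace_streamline(lambda p: np.array([-p[1], p[0]]), seed=[1.0, 0.0])
radii = np.linalg.norm(circle, axis=1)
```

Higher-order integrators (RK4) and adaptive step sizes reduce the slow outward drift that plain Euler exhibits on such closed orbits.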
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
We presented a number of fundamental methods for visualizing scalar data: color mapping, contouring, slicing, and height plots. Color mapping assigns a color as a function of the scalar value at each point of a given domain. Contouring displays all points within a given two- or three-dimensional domain that have a given scalar value. Height plots deform the scalar dataset domain in a given direction as a function of the scalar data. The main advantages of these techniques are that they produce intuitive results, easily understood by users, and they are simple to implement. However, such techniques also have a number of restrictions.
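The color-mapping idea described above can be sketched as a scalar-to-RGB function; this minimal version linearly interpolates between two key colors (real colormaps use many keys), so it is only an assumption-laden illustration.

```python
def two_color_map(value, vmin=0.0, vmax=1.0):
    """Map a scalar to an RGB triple by interpolating from blue to red.

    A minimal sketch of color mapping: normalize the scalar to [0, 1],
    clamp out-of-range values, then blend the two key colors.
    """
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))                  # clamp out-of-range scalars
    blue, red = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
    return tuple(b + t * (r - b) for b, r in zip(blue, red))
```

Applying `two_color_map` to every sample of a scalar dataset yields the familiar blue-to-red heat picture.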
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING (csandit)
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving a general non-rigid point matching problem. A point matching method using relaxation labeling exists in the literature; however, its compatibility coefficients always take a binary value, zero or one, depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values which measure the correlation between edges. We use a log-polar diagram to compute the correlations. Through simulations, we show that this topology preserving relaxation method improves the matching performance significantly compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
Identification of the Memory Process in the Irregularly Sampled Discrete Time... (idescitation)
In the present work, we consider the daily solar radio flux signal at 2800 MHz provided by the National Geophysical Data Center, USA, for the period from 29th October 1972 to 28th February 2013. We apply a Savitzky-Golay smoothing filter to denoise this discrete signal, and after denoising the Finite Variance Scaling Method is applied to investigate the memory pattern in the discrete time-variant signal. Our result indicates that the solar radio flux signal has short memory, which may in turn suggest multi-periodic and/or pseudo-periodic behaviour of the signal.
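The Savitzky-Golay denoising step can be sketched with SciPy on a synthetic stand-in for the flux series (the filter fits a low-order polynomial in a sliding least-squares window); the window length, polynomial order, and signal below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 3 * t)                  # synthetic smooth series
noisy = clean + rng.normal(0.0, 0.3, t.size)       # additive Gaussian noise

# Savitzky-Golay: least-squares cubic fit in a 51-sample sliding window.
denoised = savgol_filter(noisy, window_length=51, polyorder=3)
```

The denoised series would then be passed to the scaling analysis that probes the memory of the process.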
A Counterexample to the Forward Recursion in Fuzzy Critical Path Analysis Und... (ijfls)
Fuzzy logic is an alternative approach for quantifying uncertainty relating to activity duration. The fuzzy version of the backward recursion has been shown to produce results that incorrectly amplify the level of uncertainty. However, the fuzzy version of the forward recursion has been widely proposed as an approach for determining the fuzzy set of critical path lengths. In this paper, direct application of the extension principle leads to a proposition that must be satisfied in fuzzy critical path analysis. Using a counterexample, it is demonstrated that the fuzzy forward recursion, when discrete fuzzy sets are used to represent activity durations, produces results that are not consistent with this proposition. The problem is shown to be the application of the fuzzy maximum. Several methods presented in the literature are described and shown to provide results that are consistent with the extension principle.
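The extension-principle maximum that the analysis hinges on can be sketched for discrete fuzzy sets: the membership of each output value is the supremum, over all input pairs whose crisp maximum equals that value, of the minimum of the input memberships. The two fuzzy durations below are made-up examples.

```python
def fuzzy_max(a, b):
    """Max of two discrete fuzzy numbers via the extension principle.

    a, b: dicts mapping crisp values to membership grades.
    mu_Z(z) = sup over {(x, y) : max(x, y) = z} of min(mu_A(x), mu_B(y)).
    """
    z = {}
    for x, mx in a.items():
        for y, my in b.items():
            v = max(x, y)
            z[v] = max(z.get(v, 0.0), min(mx, my))
    return z

# Two illustrative discrete fuzzy activity durations.
A = {1: 0.5, 2: 1.0, 3: 0.5}
B = {2: 1.0, 3: 0.6}
Z = fuzzy_max(A, B)
```

Note that this is generally not the same as taking a pointwise maximum of the two membership functions, which is the discrepancy the counterexample in the paper exploits.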
A New Approach for Speech Enhancement Based On Eigenvalue Spectral Subtraction (CSCJournals)
In this paper, a phase space reconstruction-based method is proposed for speech enhancement. The method embeds the noisy signal into a high-dimensional reconstructed phase space and applies the spectral subtraction idea there. The advantages of the proposed method are fast performance, high SNR, and good MOS. To evaluate the proposed method, ten signals from the TIMIT database were mixed with additive white Gaussian noise and the method was applied. Its efficiency was evaluated using both qualitative and quantitative criteria.
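For background, the classical spectral subtraction idea the method builds on can be sketched as follows: estimate an average noise magnitude spectrum, subtract it from each noisy frame's magnitude, floor at zero, and resynthesize with the noisy phase. This is only the baseline technique on a synthetic signal, not the paper's phase-space variant.

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, frame=256):
    """Classical magnitude spectral subtraction, frame by frame (no overlap)."""
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract and floor
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

rng = np.random.default_rng(2)
n, frame = 4096, 256
clean = np.sin(2 * np.pi * 440 * np.arange(n) / 8000)     # 440 Hz tone at 8 kHz
noisy = clean + rng.normal(0.0, 0.3, n)

# Noise magnitude spectrum estimated from a noise-only recording.
noise_frames = rng.normal(0.0, 0.3, n).reshape(-1, frame)
noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

enhanced = spectral_subtract(noisy, noise_mag, frame)
```

Real implementations add overlapping windows and an over-subtraction factor to suppress musical noise.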
HEATED WIND PARTICLE’S BEHAVIOURAL STUDY BY THE CONTINUOUS WAVELET TRANSFORM ... (cscpconf)
The Continuous Wavelet Transform (CWT) and fractal analysis are widely used in signal and image processing applications. Our current work extends their field of application to the behavioural study of agitated wind particles. We show mathematically that a wind particle's movement exhibits uncorrelated characteristics during its convectional flow, and we also demonstrate this using the CWT and fractal analysis in MATLAB 7.12.
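A CWT of the kind used above can be sketched with plain NumPy as direct convolution against a Ricker (Mexican-hat) wavelet at several widths; the test signal (a Gaussian bump) and widths are illustrative, not the wind data.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width a, sampled over `points` samples."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt_ricker(signal, widths):
    """CWT by direct convolution: one output row per wavelet width."""
    rows = []
    for a in widths:
        w = ricker(10 * int(a) + 1, a)               # odd length keeps alignment
        rows.append(np.convolve(signal, w, mode="same"))
    return np.array(rows)

# A Gaussian bump: the transform should peak near the bump's centre at every scale.
n = 512
sig = np.exp(-((np.arange(n) - 200) ** 2) / (2 * 15.0**2))
coef = cwt_ricker(sig, widths=[5, 10, 20])
peak_times = np.abs(coef).argmax(axis=1)
```

The scale at which coefficients decay, plotted against width, is the kind of scaling behaviour a fractal analysis would then quantify.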
Satellite image compression reduces redundancy in data representation in order to save storage and transmission costs. Image compression compensates for the limited on-board resources, in terms of mass memory and downlink bandwidth, and thus provides a solution to the bandwidth vs. data volume dilemma of modern spacecraft. Compression is therefore a very important feature in the payload image processing units of many satellites. In this paper, an improvement of the quantization step of the input vectors is proposed. The k-nearest neighbour (KNN) algorithm is used on each axis. The three classifications, considered as three independent sources of information, are combined in the framework of evidence theory, and the best code vector is then selected. Afterwards, Huffman schemes are applied for encoding and decoding.
Peer-to-Peer streaming technology has become one of the major Internet applications, as it offers the opportunity of broadcasting high quality video content to a large number of peers at low cost. It is widely accepted that with efficient utilization of the peers' and server's upload capacities, peers can enjoy watching a high bit rate video with minimal end-to-end delay. In this paper, we present a practical scheduling algorithm that works in the challenging condition where no spare capacity is available, i.e., it maximally utilizes the resources and broadcasts the maximum streaming rate. Each peer contacts only a small number of neighbours in the overlay network and autonomously subscribes to sub-streams according to a budget model, in such a way that the number of peers forwarding exactly one sub-stream is maximized. The hop-count delay is also taken into account to construct short-depth trees. Finally, we show through simulation that peers dynamically converge to an efficient overlay structure with a short hop-count delay. Moreover, the proposed scheme performs well in the homogeneous case and outperforms SplitStream in all simulated scenarios.
PERFORMANCE ANALYSIS OF MULTI-PATH TCP NETWORK (IJCNCJournal)
MPTCP, proposed by an IETF working group, allows a single TCP stream to be split across multiple paths, with obvious benefits in performance and reliability. MPTCP has been implemented in Linux-based distributions and can be compiled and installed for both real and experimental scenarios. In this article, we provide performance analyses of MPTCP with a laptop connected to a WiFi access point and a 3G cellular network at the same time. We show experimentally that MPTCP outperforms regular TCP over the WiFi or 3G interfaces. We also compare four congestion control algorithms for MPTCP that are implemented in the Linux kernel. Results show that the Linked Increase congestion control algorithm outperforms the others under normal traffic load, while the Balanced Linked Adaptation algorithm outperforms the rest when the paths are shared with heavy traffic, which is not supported by MPTCP.
HIDING A MESSAGE IN MP3 USING LSB WITH 1, 2, 3 AND 4 BITS (IJCNCJournal)
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. This paper presents a new method which randomly selects positions in an MP3 file to hide a secret text message using the Least Significant Bit (LSB) technique. A unique signature or key marks the start and end locations of the secret text message. The methodology focuses on embedding one, two, three, or four bits of the secret message into the MP3 file using LSB techniques. The evaluation and performance methods are based on robustness (BER and correlation), imperceptibility (PSNR), and hiding capacity (the ratio between the sizes of the text message and the MP3 cover) indicators. The experimental results show that the new method is more secure. Moreover, the contribution of this paper is the provision of a robustness-based classification of LSB steganography models depending on their occurrence in the embedding position.
PERFORMANCE ANALYSIS OF THE LINK-ADAPTIVE COOPERATIVE AMPLIFY-AND-FORWARD REL... (IJCNCJournal)
This paper analyzes the performance of cooperative amplify-and-forward (CAF) relay networks that employ adaptive M-ary quadrature amplitude modulation (M-QAM)/M-ary phase shift keying (M-PSK) digital modulation techniques in Nakagami-m fading channels. In particular, we present and compare analyses of CAF relay networks with different cooperative diversity and opportunistic routing strategies, such as regular Maximal Ratio Combining (MRC), Selection Diversity Combining (SDC), Opportunistic Relay Selection with Maximal Ratio Combining (ORS-MRC), and Opportunistic Relay Selection with Selection Diversity Combining (ORS-SDC). We advocate a simple yet unified numerical approach based on the marginal moment generating function (MGF) of the total received SNR to compute the average symbol error rate (ASER), mean achievable spectral efficiency, and outage probability performance metrics.
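The MGF-based ASER computation can be sketched for the simplest single-link case, BPSK over Nakagami-m fading, using the standard alternative form P_e = (1/pi) * Integral_0^{pi/2} M_gamma(-1/sin^2 theta) d theta with the Nakagami-m MGF M_gamma(s) = (1 - s*SNR/m)^(-m). This is the general numerical approach named in the abstract, not the paper's relay-network analysis.

```python
import numpy as np

def aser_bpsk_nakagami(snr_avg, m, n_pts=4000):
    """Average BER of BPSK over Nakagami-m fading via the MGF approach."""
    theta = np.linspace(1e-9, np.pi / 2, n_pts)
    # Nakagami-m MGF evaluated at s = -1/sin^2(theta)
    mgf = (1.0 + snr_avg / (m * np.sin(theta) ** 2)) ** (-m)
    integral = np.sum((mgf[:-1] + mgf[1:]) * np.diff(theta)) / 2.0   # trapezoid rule
    return integral / np.pi

snr = 10.0
pe_mgf = aser_bpsk_nakagami(snr, m=1)
# m = 1 reduces to Rayleigh fading, which has the closed form below.
pe_rayleigh = 0.5 * (1.0 - np.sqrt(snr / (1.0 + snr)))
```

The same finite-integral-of-MGF pattern extends to combined-branch SNRs (MRC, SDC, etc.) by inserting the appropriate marginal MGF.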
APPROXIMATING NASH EQUILIBRIUM UNIQUENESS OF POWER CONTROL IN PRACTICAL WSNS (IJCNCJournal)
Transmission power has a major impact on link and communication reliability and on network lifetime in Wireless Sensor Networks. We study power control in a multi-hop Wireless Sensor Network where the nodes' communications interfere with each other. Our objective is to determine each node's transmission power level so as to reduce communication interference and keep energy consumption to a minimum. We propose a potential game approach to obtain the unique equilibrium of the network transmission power allocation. The unique equilibrium is located in a continuous domain; however, radio transceivers accept only discrete values for the transmission power level setting. We study the viability and performance of mapping the continuous solution from the potential game to the discrete domain required by the radio. We demonstrate the success of our approach through TOSSIM simulation when nodes use the Collection Tree Protocol for routing the data. We also show results of our method on the Indriya testbed, comparing it with the case where the motes use the Collection Tree Protocol with maximum transmission power.
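The continuous-equilibrium-then-discretize idea can be sketched on a toy two-node power control game (the utility, costs, and power levels below are illustrative assumptions, not the paper's model): best-response iteration converges to the unique continuous equilibrium, which is then snapped to the nearest power level the radio supports.

```python
def best_response_powers(c=0.5, alpha=0.2, noise=0.1, n_iter=100):
    """Best-response iteration for a toy two-node power control game.

    Illustrative utilities u_i = log(1 + p_i / I_i) - c * p_i with
    interference I_i = alpha * p_j + noise give the closed-form best
    response p_i = max(0, 1/c - I_i); the iteration converges to the
    game's unique fixed point.
    """
    p = [0.0, 0.0]
    for _ in range(n_iter):
        for i in range(2):
            interference = alpha * p[1 - i] + noise
            p[i] = max(0.0, 1.0 / c - interference)
    return p

def quantize(p, levels):
    """Map a continuous equilibrium power to the nearest supported radio level."""
    return min(levels, key=lambda lvl: abs(lvl - p))

p_star = best_response_powers()
discrete = [quantize(pi, levels=[0.0, 0.5, 1.0, 1.5, 2.0]) for pi in p_star]
```

The paper's question is precisely how much such rounding degrades the equilibrium properties in a real radio's discrete power domain.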
In this paper, an application-based QoS evaluation approach for heterogeneous networks is proposed.It is possible to expand the network capacity and coverage in a dynamic fashion by applying heterogeneous wireless network architecture. However, the Quality of Service (QoS) evaluation of this type of network architecture is very challenging due to the presence of different communication technologies. Different communication technologies have different characteristics and the applications that utilize them have unique QoS requirements. Although, the communication technologies have different performance measurement parameters, the applications using these radio access networks have the same QoS requirements. As a result, it would be easier to evaluate the QoS of the access networks and the overall network configuration based on the performance of applications running on them. Using such applicationbased QoS evaluation approach, the heterogeneous nature of the underlying networks and the diversity of their traffic can be adequately taken into account. Through simulation studies, we show that the application performance based assessment approach facilitates better QoS management and monitoring of heterogeneous network configurations.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE...IJCNCJournal
Transmitter range assignment in clustered wireless networks is the bottleneck of the balance between
energy conservation and the connectivity to deliver data to the sink or gateway node. The aim of this
research is to optimize the energy consumption through reducing the transmission ranges of the nodes,
while maintaining high probability to have end-to-end connectivity to the network’s data sink. We modified
the approach given in [1] to achieve more than 25% power saving through reducing cluster head (CH)
transmission range of the backbone nodes in a multihop wireless sensor network with ensuring at least
95% end-to-end connectivity probability.
Using spectral radius ratio for node degreeIJCNCJournal
In this paper, we show that the spectral radius ratio for node degree could be used to analyze the variation of node degree during the evolution of complex networks. We focus on three commonly studied models of complex networks: random networks, scale-free networks and small-world networks. The spectral radius ratio for node degree is defined as the ratio of the principal (largest) eigenvalue of the adjacency matrix of a network graph to that of the average node degree. During the evolution of each of the above three categories of networks (using the appropriate evolution model for each category), we observe the spectral radius ratio for node degree to exhibit a high to very high positive correlation (0.75 or above) to that of the
coefficient of variation of node degree (ratio of the standard deviation of node degree and average node degree). We show that the spectral radius ratio for node degree could be used as the basis to tune the operating parameters of the evolution models for each of the three categories of complex networks as well as analyze the impact of specific operating parameters for each model.
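The metric defined above is straightforward to compute; a minimal pure-Python sketch for small graphs, using power iteration on A + I (the shift is an implementation choice that avoids oscillation on bipartite graphs):

```python
def spectral_radius_ratio(adj):
    """Spectral radius ratio for node degree: the principal (largest)
    eigenvalue of the 0/1 adjacency matrix divided by the average node
    degree.  The eigenvalue is found by power iteration on A + I; the
    ratio is always >= 1, with equality for regular graphs."""
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(500):                       # power iteration on A + I
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                           # Perron vector is nonnegative
        v = [x / lam for x in w]
    avg_deg = sum(sum(row) for row in adj) / n
    return (lam - 1.0) / avg_deg               # undo the +I shift
```

For a 4-cycle (2-regular) the ratio is exactly 1; for a star on 4 nodes it is sqrt(3)/1.5, reflecting the larger degree variation.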
With the expansion of networks, the dissemination of information has become a significant research topic. In social networks, beyond social structure and people's influence on one another, applications include increasing sales profit, publishing news or rumors, and spreading or diffusing an idea. In social communities people affect one another, and when an individual joins a group, his or her friends may join as well. A piece of news, regardless of its nature, can be spread in different ways. Since information is not always accurate or benign, this article introduces immunization mechanisms against such information, where immunization means slowing the publication of that information in the network, or even stopping it. By comparing the presented immunization methods and introducing a rate-delay parameter, we evaluate the methods and identify the most effective one. Among the existing and the recommended immunization methods, the recommended methods also play an effective role in preventing the spread of malicious rumors.
Advancements in Complementary Metal Oxide Semiconductor (CMOS) technology have enabled Wireless Sensor Networks (WSNs) to gather, process and transport multimedia (MM) data as well, no longer being limited to ordinary scalar data. This new generation of WSN is called Wireless Multimedia Sensor Networks (WMSNs). Better and yet relatively cheaper sensors have sprung up: sensors that can sense both scalar and multimedia data, with more advanced functionality such as handling rather intense computations easily. In this paper, the applications, architectures, challenges and issues faced in the design of WMSNs are explored. Security and privacy issues, overall requirements, solutions proposed and implemented so far, some of the successful achievements and other related works in the field are also highlighted. Open research areas are pointed out, and a few solutions are suggested for the still persistent problems which, to the best of my knowledge, have not been explored so far.
Sensor nodes are highly mobile, which makes the applications running on them face network-related problems such as node failure, link failure, network-level disconnection, and scarcity of resources. Node failures and network faults need to be monitored continuously by supervising the network status, especially for critical applications like health monitoring systems. We propose a Node Monitoring Protocol (NMP) to monitor node health using agents and to ensure that each node gets the promised quality of service. These nodes sense the environment and communicate important data to the sink or base station. To establish the correct event time, the nodes need to be synchronized with a global clock; therefore, time synchronization is a very important parameter. We have built a simulation environment for validating NMP to assess the reliability of health monitoring systems. The formal Specification and Description Language (SDL) tool has been used to validate NMP at design time in order to increase the confidence and efficiency of the system.
PERFORMANCE EVALUATION OF WIRELESS SENSOR NETWORK UNDER HELLO FLOOD ATTACKIJCNCJournal
Wireless sensor networks (WSNs) are widely used in many fields. Such a network consists of tiny, lightweight sensor nodes and is largely used to scan, detect or monitor environments. Because these sensor nodes are tiny and lightweight, they impose limitations on resources such as power usage, processing capability and radio frequency range. These limitations leave the network vulnerable to many different types of attacks, such as the hello flood attack, black hole, Sybil attack, sinkhole, and many more. Among these, the hello flood is one of the most important attacks. In this paper, we have analyzed the performance of the hello flood attack and compared the network performance as the number of attackers increases. Network performance is evaluated by modifying the ad-hoc on demand distance vector (AODV) routing protocol using the NS2 simulator. It has been tested under different scenarios, such as no attacker, a single attacker, and multiple attackers, to see how the network performance changes. The simulation results show that as the number of attackers increases, performance in terms of throughput and delay changes accordingly.
A predictive framework for cyber security analytics using attack graphsIJCNCJournal
Security metrics serve as a powerful tool for organizations to understand the effectiveness of protecting computer networks. However, the majority of these measurement techniques do not adequately help corporations make informed risk management decisions. In this paper we present a stochastic security framework for obtaining quantitative measures of security by taking into account the dynamic attributes associated with vulnerabilities that can change over time. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect overall network security based on how the vulnerabilities are interconnected and leveraged to compromise the system. In order to have a more realistic representation of how the security state of the network varies over time, a nonhomogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
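A nonhomogeneous chain of this kind is evaluated by chaining the daily transition matrices. A minimal two-state sketch follows; the states (secure/compromised) and the matrix values are illustrative assumptions, not Frei's actual lifecycle model or the paper's state space.

```python
def state_distribution(p0, daily_matrices):
    """Propagate a security-state distribution through a nonhomogeneous
    Markov chain: p_t = p_0 * P_1 * P_2 * ... * P_t, where P_k is the
    transition-probability matrix estimated for day k (row-stochastic)."""
    p = list(p0)
    for P in daily_matrices:
        p = [sum(p[i] * P[i][j] for i in range(len(p)))
             for j in range(len(P[0]))]
    return p
```

With states (secure, compromised) and a daily exploit probability that rises as the vulnerability ages, the compromise mass accumulates day by day instead of following a single stationary matrix.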
ADAPTIVE HANDOVER HYSTERESIS AND CALL ADMISSION CONTROL FOR MOBILE RELAY NODESIJCNCJournal
The aim of equipping a wireless network with a mobile relay node is to support broadband wireless communications for vehicular users and their devices. The high mobility of vehicular users, possibly at a very high velocity in the area in which two cells overlap, could cause the network to suffer from a reduced handover success rate and, hence, increased radio link failure. The combined impact of these problems is service interruptions to vehicular users. Thus, the handover schemes are crucial in solving these problems. In this work, we first present the adaptive handover hysteresis scheme for the wireless network with mobile relay nodes in the high-speed train scenario. Specifically, our proposed adaptive hysteresis scheme is based on the velocity of the train. Second, the handover call dropping probability is reduced by introducing a modified call admission control scheme to support radio resource reservation for handover calls that prioritizes handover calls of mobile relay over the other calls. The proposed solution in which adaptive parameter is combined with call admission control is evaluated by system level simulation. Our simulation results illustrate an increased handover success rate and reduced radio link failures.
PERFORMANCE ASSESSMENT OF CHAOTIC SEQUENCE DERIVED FROM BIFURCATION DEPENDENT...IJCNCJournal
In CDMA systems, m-sequences and Gold codes are often utilized for spreading-despreading and scrambling-descrambling operations. In a previous work, a design framework was created for generating a large family of codes from the logistic map, with autocorrelation and cross-correlation comparable to m-sequences and Gold codes. The purpose of this work is to evaluate the performance of these chaotic codes in a CDMA environment. In the bit error rate (BER) simulation, matched filter, decorrelator and MMSE receivers have been utilized. The received signal was modelled for a synchronous CDMA uplink for simplicity of simulation. An additive white Gaussian noise channel model was assumed for the simulation.
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...IJCNCJournal
Along with limited processing and storage capacity, each sensor in a wireless sensor network (WSN) is equipped with a limited power source that is very difficult to replace in most application environments. Improving energy savings in WSN applications is therefore necessary. In this paper, we focus on saving energy in wireless sensor network applications with time-critical requirements. The paper includes an analysis of advanced energy-saving techniques that optimize energy efficiency together with data transmission. Moreover, we propose improvements (a LEACH enhancement) to increase energy savings in time-critical WSN applications. Simulation results show that our proposed protocol performs significantly better than LEACH in terms of cluster formation in each round, average power, number of nodes alive, and average total data received at the base station.
As Wireless Sensor Networks penetrate the industrial domain, many research opportunities are emerging. One such essential and challenging application is node localization. A feed-forward neural network based methodology is adopted in this paper, using the Received Signal Strength Indicator (RSSI) values of the anchor node beacons. The number of anchor nodes and their configurations have an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the one that gives the best result. The multi-layer perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the trained model was then implemented on an Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.
A NOVEL METHOD TO TEST DEPENDABLE COMPOSED SERVICE COMPONENTSIJCNCJournal
Assessing the performance and dependability of Web service systems is crucial for the development of today's applications. The performance and Fault Tolerance Mechanisms (FTMs) of composed service components are hard to measure at design time because service instability is often caused by the nature of the network conditions, and a real Internet environment for testing systems is difficult to set up and control. We have introduced a fault injection toolkit that emulates a WAN within a LAN environment between composed service components and offers full control over the emulated environment, in addition to the capability to inject network-related faults and application-specific faults. The toolkit also generates background workloads on the system under test so as to produce more realistic results. We describe an experiment that has been performed to examine the impact of fault tolerance protocols deployed at a service client by using our toolkit system.
Maximizing network interruption in wirelessIJCNCJournal
With the colossal growth of wireless sensor networks (WSNs) in applications ranging from home automation to military affairs, the pressure to ensure security in such networks is paramount. Considering the security challenges, developing a secured WSN system is a genuinely hard task. Moreover, as information technology becomes more popular, intruders keep devising new ways to break system security, to harm the network and to degrade system quality, with the aim of taking control of the network to corrupt it or to benefit from it in some way. Intruders corrupt the system only when the security breaking cost (SBC) is lower than the benefit attained or the harm caused to others. In this paper, the authors define the term "maximizing network interruption problem" and propose a technique, called the grid point approximation algorithm, to estimate the SBC of a multi-hop WSN so that it can be made tougher for an intruder to break the system security. It is assumed that the intruder has a complete picture of the entire network. The technique is designed from the intruder's point of view: completely jamming all the sensor nodes in the network by placing jammers or malicious nodes strategically, while keeping the number of jammer nodes at or near the minimum. To the best of the authors' knowledge, no work of the same kind has been proposed so far. Experimental results under changes of the different network parameters show that the proposed algorithm provides excellent performance in achieving these targets.
HIGHLY ACCURATE LOG SKEW NORMAL APPROXIMATION TO THE SUM OF CORRELATED LOGNOR...cscpconf
Several methods have been proposed to approximate the sum of correlated lognormal RVs. However, the accuracy of each method depends highly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance. No single method can provide the needed accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of correlated lognormals based on the log skew normal approximation. The main contribution of this work is an analytical method for estimating the log skew normal parameters. The proposed method provides a highly accurate approximation to the sum of correlated lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all cases.
FITTING THE LOG SKEW NORMAL TO THE SUM OF INDEPENDENT LOGNORMALS DISTRIBUTIONcscpconf
Sums of lognormal random variables (RVs) occur in many important problems in wireless communications, especially in interference calculations. Several methods have been proposed to approximate the lognormal sum distribution. Most of them require lengthy Monte Carlo simulations or advanced, slowly converging numerical integrations for curve fitting and parameter estimation. Recently, it has been shown that the log skew normal distribution can offer a tight approximation to lognormal sum distributed RVs. We propose a simple and accurate method for fitting the log skew normal distribution to the lognormal sum distribution. We use a moments and tail-slope matching technique to find the optimal log skew normal distribution parameters. We compare our method with those in the literature in terms of complexity and accuracy, and conclude that it achieves the same accuracy while being simpler. To further validate our approach, we provide an example of outage probability calculation in a lognormal shadowing environment based on the log skew normal approximation.
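To make the fitting step concrete, here is a minimal sketch of skew normal parameter estimation in the log domain using the textbook method of moments. This is a standard alternative stated here as an assumption, not the paper's exact moments plus tail-slope matching procedure.

```python
import math

def skewnormal_from_moments(mean, var, skew):
    """Method-of-moments estimates of the skew normal location xi, scale
    omega and shape alpha from the sample mean, variance and skewness of
    log-domain data.  Valid for |skew| below the skew normal maximum
    (about 0.9953)."""
    s23 = abs(skew) ** (2.0 / 3.0)
    # |delta| = sqrt(pi/2 * g^(2/3) / (g^(2/3) + ((4-pi)/2)^(2/3)))
    d = math.copysign(
        math.sqrt(math.pi / 2.0 * s23
                  / (s23 + ((4.0 - math.pi) / 2.0) ** (2.0 / 3.0))),
        skew)
    omega = math.sqrt(var / (1.0 - 2.0 * d * d / math.pi))
    xi = mean - omega * d * math.sqrt(2.0 / math.pi)
    alpha = d / math.sqrt(1.0 - d * d)
    return xi, omega, alpha
```

For zero skewness the fit degenerates to a plain normal (alpha = 0), and applying the skew normal skewness formula to the recovered alpha reproduces the input skewness exactly.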
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Ch...sipij
We consider an experimental design problem in which there are two Poisson sources with two possible and known rates, and one counter. Through a switch, the counter can observe the sources individually, or the counts can be combined so that the counter observes the sum of the two. The sensor scheduling problem is to determine an optimal proportion of the available time to be allocated toward individual and joint sensing, under a total time constraint. Two different metrics are used for optimization: mutual information between the sources and the observed counts, and probability of detection for the associated source detection problem. Our results, which are primarily computational, indicate similar but not identical behavior under the two cost functions.
In NLOS propagation conditions, the power of the direct component can be significantly attenuated. Detection of the direct component is therefore more difficult, which can degrade the accuracy of Time of Arrival mobile positioning. The goal of this paper is to explore the possibilities for improving the estimation of the direct component's time delay by reducing the detection threshold. Three different methods for calculating the threshold have been tested and compared in terms of positioning error.
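The thresholded first-path search that such methods tune can be sketched as follows; the power delay profile, sampling step and simple fixed-threshold rule are illustrative assumptions, not any of the paper's three threshold-calculation methods.

```python
def first_path_delay(power_profile, dt, threshold):
    """Estimate the direct-component time delay as the time of the first
    sample of the power delay profile that crosses the detection threshold.
    Lowering the threshold helps detect a weak NLOS-attenuated direct path,
    at the cost of more noise-induced false alarms."""
    for k, p in enumerate(power_profile):
        if p >= threshold:
            return k * dt
    return None  # no sample exceeded the threshold
```

With a profile whose strongest tap is a late multipath component, a lower threshold picks up the earlier, weaker direct path and thus shortens the estimated delay.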
Error bounds for wireless localization in NLOS environmentsIJECEIAES
An efficient and accurate method to evaluate the fundamental error bounds for wireless sensor localization is proposed. While there already exist efficient tools such as the Cramér-Rao lower bound (CRLB) and the position error bound (PEB) to estimate error limits, in their standard formulation they all need accurate knowledge of the statistics of the ranging error. This requirement is impossible to meet a priori under Non-Line-of-Sight (NLOS) environments. Therefore, it is shown that collecting a small number of samples from each link and applying them to a non-parametric estimator, such as the Gaussian kernel (GK), can lead to a quite accurate reconstruction of the error distribution. A proposed Edgeworth Expansion method is employed to reconstruct the error statistics much more efficiently than the GK. It is shown that with this method it is possible to obtain fundamental error bounds almost as accurate as in the theoretical case, i.e. when a priori knowledge of the error distribution is available. Therein, a technique to determine fundamental error limits (CRLB and PEB) on-site, without knowledge of the statistics of the ranging errors, is proposed.
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...IOSRJVSP
This research addresses the problem of inter-symbol interference (ISI) using equalization techniques for time-dispersive channels with additive white Gaussian noise (AWGN). The channel equalizer is modelled as a non-linear Multilayer Perceptron (MLP) structure. The Back Propagation (BP) algorithm is used to optimize the synaptic weights of the equalizer during the training mode. In the typical BP algorithm, the error signal is propagated from the output layer to the input layer while the learning rate parameter is held constant. In this study, the BP algorithm is modified to allow the learning rate to vary at each iteration, which achieves faster convergence. The proposed algorithm is used to train the MLP-based decision feedback equalizer (DFE) for time-dispersive ISI channels. The equalizer is tested on a random input sequence of BPSK signals and its performance is analysed in terms of bit error rate and speed of convergence. Simulation results show that the proposed algorithm improves the Bit Error Rate (BER) and the rate of convergence.
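A per-iteration variable learning rate can follow a bold-driver-style rule. This is an assumption for illustration (the abstract does not state the paper's exact update): grow the rate while the training error keeps falling, shrink it more sharply when the error rises.

```python
def adapt_learning_rate(lr, err, prev_err, up=1.05, down=0.7,
                        lr_min=1e-5, lr_max=1.0):
    """Per-iteration learning-rate update for back-propagation training:
    multiplicative increase when the epoch error decreased, a sharper
    multiplicative decrease otherwise, clamped to [lr_min, lr_max]."""
    lr = lr * up if err < prev_err else lr * down
    return min(max(lr, lr_min), lr_max)
```

The asymmetry (small increases, larger decreases) keeps training stable: one bad step quickly undoes several optimistic increases.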
Local Model Checking Algorithm Based on Mu-calculus with Partial OrdersTELKOMNIKA JOURNAL
Model checking for the propositional μ-calculus can be divided into two categories: global model checking algorithms and local model checking algorithms. Both aim at reducing time and space complexity effectively. This paper analyzes the computation of alternating nested fixpoints in detail and designs an efficient local model checking algorithm for the propositional μ-calculus based on a group of partial order relations. Its time complexity is O(d^2 (dn)^(d/2+2)) (where d is the depth of fixpoint nesting and n is the maximum number of nodes), and its space complexity is O(d (dn)^(d/2)). As far as we know, the best local model checking algorithms to date have a time complexity whose exponent is d; in this paper, the exponent is reduced from d to d/2, making the algorithm more efficient than those of previous research.
Exact network reconstruction from consensus signals and one eigen valueIJCNCJournal
The basic inverse problem in spectral graph theory consists in determining the graph given its eigenvalue
spectrum. In this paper, we are interested in a network of technological agents whose graph is unknown,
communicating by means of a consensus protocol. Recently, the use of artificial noise added to consensus
signals has been proposed to reconstruct the unknown graph, although errors are possible. On the other
hand, some methodologies have been devised to estimate the eigenvalue spectrum, but noise could interfere
with the elaborations. We combine these two techniques in order to simplify calculations and avoid
topological reconstruction errors, using only one eigenvalue. Moreover, we use high-frequency noise to reconstruct the network, so the control signals are easy to filter after graph identification. Numerical simulations of several topologies show an exact and robust reconstruction of the graphs.
In this paper, a new algorithm for high-resolution Direction Of Arrival (DOA) estimation of multiple wideband signals is proposed. The proposed method proceeds in two steps. In the first step, the received signal data are arranged in Toeplitz form using first-order statistics. In the second step, QR decomposition is applied to the constructed Toeplitz matrix. Compared with existing schemes, the proposed scheme provides several advantages. First, it requires computing only the triangular matrix R or the orthogonal matrix Q to find the DOA; these matrices can be computed with O(n^2) operations, whereas most existing schemes require an eigenvalue decomposition (EVD) of the covariance matrix or a singular value decomposition (SVD) of the data matrix, which costs O(n^3) operations. Second, the proposed scheme is more suitable for high-speed communication since it requires only first-order statistics and a single snapshot. Third, the proposed scheme can estimate correlated wideband signals without using spatial smoothing techniques, whereas existing schemes cannot. The accuracy of the proposed wideband DOA estimation method is evaluated through computer simulation in comparison with a conventional method.
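The first step, forming a Toeplitz matrix from a single snapshot's first-order statistics, can be sketched as follows. The Hermitian indexing convention is an assumption for illustration; exploiting this structure is what lets the Q and R factors be obtained in O(n^2) operations, as the abstract notes.

```python
def toeplitz_from_snapshot(x):
    """Hermitian Toeplitz matrix built from one snapshot vector x of
    complex array outputs: T[i][j] = x[i-j] for i >= j and conj(x[j-i])
    for i < j, so only first-order statistics (no covariance estimate
    over many snapshots) are needed."""
    n = len(x)
    return [[x[i - j] if i >= j else x[j - i].conjugate()
             for j in range(n)]
            for i in range(n)]
```

Each descending diagonal is constant and the matrix equals its own conjugate transpose, which is the structure the subsequent QR step exploits.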
Performance of cognitive radio networks with maximal ratio combining over cor...Polytechnique Montreal
In this paper, we apply the maximal ratio combining (MRC) technique to achieve higher detection probability in cognitive radio networks over correlated Rayleigh fading channels. We present a simple approach to derive the probability of detection in closed-form expression. The numerical results reveal that the detection performance is a monotonically increasing function with respect to the number of antennas. Moreover, we provide sets of complementary receiver operating characteristic (ROC) curves to illustrate the effect of antenna correlation on the sensing performance of cognitive radio networks employing MRC schemes in some respective scenarios.
Further results on the joint time delay and frequency estimation without eige...IJCNCJournal
The Joint Time Delay and Frequency Estimation (JTDFE) problem for complex sinusoidal signals received at two separated sensors is an attractive problem that has been considered for several engineering applications. In this paper, a high resolution null (noise) subspace method without eigenvalue decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method approximately halves the computational complexity of JTDFE compared with RRQR-based methods. The proposed method generates estimates of the unknown parameters based on the observation and/or covariance matrices, leading to a significant improvement in the computational load. Computer simulations are included to demonstrate the proposed method.
METHOD FOR THE DETECTION OF MIXED QPSK SIGNALS BASED ON THE CALCULATION OF FO...sipij
In this paper we propose a method for the detection of Carrier-in-Carrier signals using QPSK modulation, based on the calculation of fourth-order cumulants. In accordance with a methodology based on the Receiver Operating Characteristic (ROC) curve, a threshold value for the decision rule is established. It was found that the proposed method provides correct detection of the sum of QPSK signals for a wide range of signal-to-noise ratios and also for different bandwidths of the mixed signals. The obtained results indicate the high efficiency of the proposed detection method. The advantage of the proposed detection method over the “radiuses” method is also shown.
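The fourth-order-cumulant statistic can be sketched for the zero-mean complex baseband case. This shows only the C40 estimator; the actual decision threshold comes from the ROC procedure described in the abstract, not from this snippet.

```python
import cmath
import math

def c40(samples):
    """Sample estimate of the fourth-order cumulant
    C40 = E[x^4] - 3*(E[x^2])^2 for zero-mean complex samples.
    For ideal unit-power QPSK, C40 = -1, while circular Gaussian
    noise drives the estimate toward 0 - hence its use as a
    detection feature."""
    n = len(samples)
    m4 = sum(x ** 4 for x in samples) / n
    m2 = sum(x ** 2 for x in samples) / n
    return m4 - 3 * m2 * m2

# Ideal QPSK constellation points on the unit circle
qpsk = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
```

Averaging the statistic over a received block and comparing its magnitude against the ROC-derived threshold yields the present/absent decision.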
Fault tolerant routing scheme based onIJCNCJournal
Most routing protocols designed for wireless sensor networks use the unit disk graph (UDG) model to represent the physical layer. This model does not take into account fluctuations of the radio signal, so these protocols must be improved to suit a non-ideal environment. In this paper, we use the lognormal shadowing (LNS) model to represent a non-ideal environment; in this model, the probability of successful reception is calculated according to the link quality. We evaluated LEACH's performance under the LNS model to illustrate the effects of the radio signal, and our findings showed that fluctuations of the radio signal have a significant impact on protocol performance. We therefore propose a Fault-Tolerant LEACH-based Routing scheme (FTLR scheme) to improve the performance of LEACH in a non-ideal environment. Simulation results show that our contribution provides good performance relative to the ideal model in terms of packet loss rate and energy consumption.
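Under the LNS model, the per-link success probability used by such schemes reduces to a Q-function of the dB link margin, since the received power in dB is Gaussian around the path-loss mean. A minimal sketch follows; the path-loss exponent, reference loss and threshold values are illustrative assumptions.

```python
import math

def p_success(d, d0=1.0, pl0=55.0, eta=4.0, sigma=4.0,
              p_tx=0.0, thresh=-90.0):
    """Probability of successful reception at distance d (m) under the
    log-normal shadowing model: received power in dB is Gaussian with
    mean p_tx - PL(d) and std sigma, so
    P(success) = Q((thresh - mean_rx) / sigma).
    Parameter values (path-loss exponent eta, reference loss pl0 at d0,
    receiver threshold) are illustrative, not the paper's."""
    mean_rx = p_tx - (pl0 + 10.0 * eta * math.log10(d / d0))
    z = (thresh - mean_rx) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # Gaussian Q-function
```

At the distance where the mean received power equals the threshold, the link succeeds exactly half the time; closer links approach probability 1 and farther links decay toward 0, which is precisely the grey-zone behavior the UDG model cannot capture.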
Boosting CED Using Robust Orientation Estimationijma
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation obtained from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. Experiments show that the proposed scheme works much better in noisy environments than traditional Coherence Enhancement Diffusion.
Vehicular Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance overall routing efficiency and reduce overhead. Our analysis found that the current OLSR protocol suffers from problems such as redundant HELLO and TC message packets and failure to update routing information in time, so a WS-OLSR routing protocol based on an improved MPR selection algorithm is proposed. First, factors such as node mobility and link changes are considered comprehensively to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Second, a new MPR selection algorithm is proposed that takes into account link stability and node properties. Finally, we evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MPR selection algorithm compared to traditional approaches.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ... – IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type, and the fulfillment of application requirements such as reliability and timeliness. Nevertheless, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning in such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May 2024 – Top 10 Read Articles in Computer Networks & Communications.pdf – IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of computer networks and communications. The journal focuses on all technical and practical aspects of computer networks and data communications. Its goal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm... – IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors with a significant impact on the energy usage at the nodes and on the quality of transmission (QoT) in the network. In this paper we propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs). Our method formulates topology control as a nonlinear programming (NP) problem with the objective of optimizing two metrics: maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing a genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
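The GA step can be sketched generically. Below is a minimal real-coded genetic algorithm with tournament selection, uniform crossover, and Gaussian mutation; the fitness function is left to the caller, since the actual SDWSN objective combining communication range and desired degree is defined by the paper and not reproduced here (all hyperparameters are illustrative):

```python
import random

def ga_minimize(fitness, bounds, pop=20, gens=40, seed=1):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation clamped to `bounds` = [(lo, hi), ...]."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        def pick():  # binary tournament
            a, b = rng.sample(P, 2)
            return a if fitness(a) < fitness(b) else b
        Q = []
        for _ in range(pop):
            p1, p2 = pick(), pick()
            # Uniform crossover, gene by gene.
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            # Gaussian mutation, clamped to the variable bounds.
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1 * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]
            Q.append(child)
        P = Q
    return min(P, key=fitness)
```

A controller would call this with a fitness that penalizes, e.g., transmit energy plus deviation from the desired node degree.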
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C... – IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy-preserving services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial-of-service and privacy attacks, and reveals a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
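The fuzzy-commitment primitive itself can be sketched with a toy repetition code: the key is encoded as a codeword, XOR-masked by a noisy witness (e.g. a biometric reading), and bound by a hash; opening succeeds only if the presented witness is close enough for the code to correct the difference. The repetition factor, bit layout, and hash choice here are illustrative, not those of the proposed scheme:

```python
import hashlib

REP = 3  # repetition factor of the toy error-correcting code

def encode(bits):   # repetition-code encoder
    return [b for b in bits for _ in range(REP)]

def decode(bits):   # majority-vote decoder
    return [int(sum(bits[i:i + REP]) * 2 > REP)
            for i in range(0, len(bits), REP)]

def commit(key_bits, witness):
    """Fuzzy commitment: bind a key to a noisy witness."""
    c = encode(key_bits)
    delta = [w ^ b for w, b in zip(witness, c)]       # XOR mask
    tag = hashlib.sha256(bytes(key_bits)).hexdigest() # binding hash
    return tag, delta

def open_commit(tag, delta, witness2):
    """Recover the key iff witness2 is close enough to the original."""
    key = decode([w ^ d for w, d in zip(witness2, delta)])
    return key if hashlib.sha256(bytes(key)).hexdigest() == tag else None
```

With REP = 3, one flipped witness bit per code block is corrected; a substantially different witness yields a failed hash check and no key.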
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation Systems – IJCNCJournal
In a Vehicular Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages at regular intervals between vehicles that are near each other, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and those of their associated owners or drivers. They prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent groups could use to monitor the vehicle over a long period. The acquisition of this shared data can facilitate the reconstruction of vehicle trajectories, posing a risk to the privacy of the driver. Developing effective and scalable privacy-preserving protocols for communication in vehicular networks is therefore of the highest priority; such protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, changing a vehicle's pseudonym periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The results indicate that APV achieves a clear improvement in the privacy metrics, showing that APV offers enhanced safety for vehicles during transportation in the smart city.
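The silent-period mechanism can be sketched as a small per-vehicle state machine: use a pseudonym for some lifetime, and when it expires and enough neighbours are present to mix with, stop beaconing for a silent window and re-emerge under a fresh pseudonym. All names and parameters below (lifetime, silent-period length, neighbour threshold) are illustrative assumptions, not values from the paper:

```python
import secrets

class PseudonymManager:
    """Periodic pseudonym change with a silent period (illustrative sketch)."""
    def __init__(self, now=0.0, lifetime=60.0, silent=5.0, min_neighbors=3):
        self.lifetime = lifetime            # how long a pseudonym is used (s)
        self.silent = silent                # no-beacon window after a change (s)
        self.min_neighbors = min_neighbors  # mix context needed to change
        self.pseudonym = secrets.token_hex(8)
        self.issued_at = now
        self.silent_until = -1.0

    def on_beacon_due(self, now, n_neighbors):
        """Return the pseudonym to beacon with, or None while silent."""
        if now < self.silent_until:
            return None                     # in the silent period: stay quiet
        if now - self.issued_at >= self.lifetime and n_neighbors >= self.min_neighbors:
            # Enough neighbours to mix with: change pseudonym and go silent.
            self.silent_until = now + self.silent
            self.pseudonym = secrets.token_hex(8)
            self.issued_at = now
            return None
        return self.pseudonym
```

Waiting for sufficient neighbours before changing is what makes the change effective: a lone vehicle that swaps pseudonyms is trivially re-linked by an observer.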
April 2024 – Top 10 Read Articles in Computer Networks & Communications – IJCNCJournal
DEF: Deep Ensemble Neural Network Classifier for Android Malware Detection – IJCNCJournal
Malware is one of the threats to the security of computer networks and information systems. Since malware instances are abundantly available, there is increased interest among researchers in the usage of Artificial Intelligence (AI). Of late, AI-enabled methods such as machine learning (ML) and deep learning have paved the way for solving many real-world problems. As these are learning-based approaches, accumulated training samples help improve the quality of training and thus malware detection accuracy. Existing deep learning methods focus on learning-based malware detection systems; however, there is a need to improve the state of the art through an ensemble approach. Towards this end, in this paper we propose a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples: from a given malware instance a grayscale image is generated, and a separate process extracts the opcode sequences. Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) techniques are used to process the grayscale image and the opcode sequence, respectively. Afterwards, a stacking ensemble is employed to achieve efficient malware detection and classification. Malware samples collected from Internet sources and Microsoft are used for the empirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize the framework, and another algorithm named Pre-Process assists EL-AML by obtaining the intermediate features required by CNN and LSTM. The empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IoT Traffic – IJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic (Big Data) are challenging to store, handle, and analyse on a single computer. In this paper we developed a parallel implementation on a High Performance Computer (HPC) of the Non-Negative Matrix Factorization technique as the engine of an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets, on the order of millions of samples, are distributed evenly across all the computing cores for both storage and speedup purposes. The distribution of the computing tasks involved in the matrix factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-NMF-IDS give better results than traditional ML-based intrusion detection systems: we could train the model on a dataset of one million samples in only 31 seconds instead of 40 minutes on a single processor, a speed-up of 87 times. Moreover, we obtained an excellent detection accuracy of 98% on the KDD dataset.
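At the core of such a system is the NMF iteration itself. A single-node sketch using the classic Lee–Seung multiplicative updates follows; the paper's contribution is distributing this computation and its communication across HPC cores, which is not shown here, and the rank and iteration count are illustrative:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor a non-negative matrix V (m x n) as W @ H with W (m x r),
    H (r x n), using Lee-Seung multiplicative updates that monotonically
    decrease the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1   # positive init avoids zero-locking
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                      # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In an IDS setting, rows of V would be traffic-feature vectors; anomalies can then be flagged by their reconstruction error under the learned low-rank model.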
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered... – IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this work aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and to clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%; with the novel features, Random Forest's performance dropped only slightly (to 96%) while other models achieved significantly lower accuracy. As a whole, the work makes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
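Cooperative-game feature selection typically scores features by their Shapley value, treating features as players and model performance on a feature subset as the coalition payoff. A minimal exact (exponential-time) sketch is below; the coalition value function is a stand-in supplied by the caller, and practical systems approximate this by sampling rather than enumerating all coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a characteristic-function game.
    `value` maps a frozenset of players to a coalition payoff."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Weight of a coalition of size k in the Shapley average.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to coalition S.
                total += w * (value(S | {p}) - value(S))
        phi[p] = total
    return phi
```

Features with the highest Shapley scores are then retained, which is the kind of game-theoretic selection the third phase above describes.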
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C... – IJCNCJournal
IoT networking uses real-world objects as stationary or mobile nodes, and mobile nodes complicate networking. Internet of Things (IoT) networks carry a large volume of control overhead messages because devices are mobile: these messages are generated by the constant flow of control data such as device identity, geographical position, node mobility, and device configuration. Network clustering is a popular method for managing this overhead, and many cluster-based routing methods have been developed to address system constraints. Node clustering based on the IoT protocol can be used to group all network nodes according to predefined criteria, with each cluster headed by a Smart Designated Node (SDN), which makes cluster management efficient. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. Building on an existing method that constructs a sub-area clustered topology, the Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes, facilitating the secure and reliable exchange of healthcare data between professionals and patients. NCIoT groups IoT nodes by proximity and selects SDN routes for them. Predictive inquiry analyzes data to forecast and anticipate future events; the Predictive Inquiry Small Packets (PISP) mechanism improves the backup system and works with the SDN to build a routing information table for each intelligent node, resulting in higher routing performance.
Both primary and secondary routes are available for use. The simulation findings indicate that the NCIoT algorithms outperform CBR protocols, with a substantial 78% boost in network performance and a 12.5% drop in end-to-end latency, while the PISP methodology produces 5.9% more inquiry packets than alternative approaches. The algorithms are constructed and evaluated against academic ones.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have often heard that architecture is not important for the front-end. I have also seen, many times, developers implement front-end features just by following a framework's standard rules, assuming that this is enough to launch the project successfully, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of Information Retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3
On the approximation of the sum of lognormals by a log skew normal distribution
International Journal of Computer Networks & Communications (IJCNC) Vol.7, No.1, January 2015
DOI : 10.5121/ijcnc.2015.7110
ON THE APPROXIMATION OF THE SUM OF
LOGNORMALS BY A LOG SKEW NORMAL
DISTRIBUTION
Marwane Ben Hcine1
and Ridha Bouallegue2
¹ ²Innovation of Communicant and Cooperative Mobiles Laboratory, INNOV’COM
Sup’Com, Higher School of Communication
University of Carthage
Ariana, Tunisia
ABSTRACT
Several methods have been proposed to approximate the sum of lognormal RVs. However, the accuracy of
each method depends highly on the region of the resulting distribution being examined and on the individual
lognormal parameters, i.e., mean and variance. No single method provides the needed
accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of
lognormals based on the log skew normal approximation. The main contribution of this work is to propose an
analytical method for log skew normal parameter estimation. The proposed method provides a highly
accurate approximation to the sum of lognormal distributions over the whole range of dB spreads for any
correlation coefficient. Simulation results show that our method outperforms all previously proposed
methods and provides an accuracy within 0.01 dB for all cases.
KEYWORDS
Lognormal sum, log skew normal, moment matching, asymptotic approximation, outage
probability, shadowing environment.
1. INTRODUCTION
Multipath with lognormal statistics is important in many areas of communication systems. With
the emergence of new technologies (3G, LTE, WiMAX, Cognitive Radio), accurate interference
computation becomes more and more crucial for outage probability prediction, evaluation of
interference mitigation techniques and frequency reuse scheme selection. For a given practical
case, Signal-to-Interference-plus-Noise Ratio (SINR) prediction relies on the approximation of
the sum of correlated lognormal RVs. Several methods have been proposed in the literature to
approximate the sum of correlated lognormal RVs. Since numerical methods
require a time-consuming numerical integration, which is not adequate for practical cases, we
consider only analytical approximation methods. Ref. [1] gives an extension of the widely used
iterative method known as the Schwartz and Yeh (SY) method [2]. Other works use an
extended version of the Fenton-Wilkinson method [3-4]. These methods are based on the assumption
that the sum of dependent lognormal RVs can be approximated by another lognormal
distribution. The non-validity of this assumption at the distribution tails, as we will show later, is the
main reason why they fail to provide a consistent approximation to the sum of correlated lognormal
distributions over the whole range of dB spreads. Furthermore, the accuracy of each method
depends highly on the region of the resulting distribution being examined. For example, Schwartz
and Yeh (SY) based methods provide acceptable accuracy in the low-precision region of the
Cumulative Distribution Function (CDF) (i.e., 0.01–0.99), while the Fenton–Wilkinson (FW)
method offers high accuracy in the high-value region of the CDF (i.e., 0.9–0.9999). Both
methods break down for high values of the standard deviation. Ref. [5] proposes an alternative
method based on a Log Shifted Gamma (LSG) approximation to the sum of dependent lognormal
RVs. LSG parameter estimation is based on moments computed with the Schwartz and Yeh
method. Although LSG exhibits acceptable accuracy overall, it is less accurate in the
lower region.
In this paper, we propose a highly accurate yet simple method to approximate the sum of
lognormal RVs based on the Log Skew Normal (LSN) distribution. The LSN approximation was
proposed in [6] as a highly accurate approximation method for the sum of independent lognormal
distributions, and a modified LSN approximation method was proposed in [7]. However, in both cases
LSN parameter estimation relies on a time-consuming Monte Carlo simulation, and the proposed
approach is limited to the independent case. The main contribution of this work is to provide a
simple analytical method for LSN parameter estimation, without the need for a time-consuming
Monte Carlo simulation or curve fitting, and to extend previous approaches to the
correlated case. Our analytical fitting method is based on matching the moments and the tail slopes of
both distributions. This work can be seen as an extension to the correlated case of our work in
[8].
The rest of the paper is organized as follows: In section 2, a brief description of the lognormal
distribution and sum of correlated lognormal distributions is given. In section 3, we introduce the
Log Skew Normal distribution and its parameters. The validity of the lognormal assumption for the
sum of lognormal RVs at the distribution tails is discussed in section 4. In section 5, we use the
moments and tail-slope matching method to estimate the LSN distribution parameters. In section 6,
we provide comparisons with well-known approximation methods (i.e. Schwartz and Yeh,
Fenton–Wilkinson, LSG) based on simulation results. In section 7, we give an example of outage
probability calculation in a lognormal shadowing environment based on our method.
Concluding remarks are given in Section 8.
2. SUM OF CORRELATED LOGNORMAL RVS
Given X, a Gaussian RV with mean $\mu_X$ and variance $\sigma_X^2$, then $L = e^X$ is a lognormal RV with
Probability Density Function (PDF):

$$f_L(l;\mu_X,\sigma_X) = \begin{cases} \dfrac{1}{\sqrt{2\pi}\,\sigma_X\, l}\exp\!\left(-\dfrac{(\ln l - \mu_X)^2}{2\sigma_X^2}\right) & l > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

Usually X represents a power variation measured in dB. Considering $X_{dB}$ with mean $\mu_{dB}$ and
variance $\sigma_{dB}^2$, the corresponding lognormal RV $L = 10^{X_{dB}/10}$ has the following PDF:

$$f_L(l;\mu_{dB},\sigma_{dB}) = \begin{cases} \dfrac{1}{\sqrt{2\pi}\,\xi\,\sigma_{dB}\, l}\exp\!\left(-\dfrac{(10\log_{10} l - \mu_{dB})^2}{2\sigma_{dB}^2}\right) & l > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
where $\mu_X = \xi\,\mu_{dB}$, $\sigma_X = \xi\,\sigma_{dB}$ and $\xi = \ln(10)/10$.
The first two central moments of L may be written as:

$$m = e^{\mu_X}\,e^{\sigma_X^2/2}, \qquad D^2 = e^{2\mu_X}\,e^{\sigma_X^2}\left(e^{\sigma_X^2}-1\right) \qquad (3)$$
The correlated lognormal sum distribution corresponds to the sum of dependent lognormal RVs, i.e.

$$\Lambda = \sum_{i=1}^{N} L_i = \sum_{i=1}^{N} e^{X_i} \qquad (4)$$

We define $L = (L_1, L_2, \ldots, L_N)$ as a strictly positive random vector such that the vector $X = (X_1, X_2, \ldots, X_N)$
with $X_j = \log(L_j)$, $1 \le j \le N$, has an N-dimensional normal distribution with mean vector
$\mu = (\mu_1, \mu_2, \ldots, \mu_N)$ and covariance matrix $M$ with $M(i,j) = \mathrm{Cov}(X_i, X_j)$, $1 \le i \le N$, $1 \le j \le N$. $L$ is called
an N-dimensional lognormal vector with parameters $\mu$ and $M$.
$\mathrm{Cov}(L_i, L_j)$ may be expressed as [9, eq. 44.35]:

$$\mathrm{Cov}(L_i, L_j) = e^{\mu_i+\mu_j}\, e^{\frac{1}{2}(\sigma_i^2+\sigma_j^2)}\left(e^{M(i,j)}-1\right) \qquad (5)$$
The first two central moments of $\Lambda$ may be written as:

$$m_\Lambda = \sum_{i=1}^{N} m_i = \sum_{i=1}^{N} e^{\mu_i}\, e^{\sigma_i^2/2} \qquad (6)$$

$$D_\Lambda^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} \mathrm{Cov}(L_i, L_j) = \sum_{i=1}^{N}\sum_{j=1}^{N} e^{\mu_i+\mu_j}\, e^{\frac{1}{2}(\sigma_i^2+\sigma_j^2)}\left(e^{M(i,j)}-1\right) \qquad (7)$$
3. LOG SKEW NORMAL DISTRIBUTION
The standard skew normal distribution was first introduced in [10] and was independently
proposed and systematically investigated by Azzalini [11]. The random variable X is said to
have a scalar $SN(\lambda, \varepsilon, \omega)$ distribution if its density is given by:

$$f_X(x;\lambda,\varepsilon,\omega) = \frac{2}{\omega}\,\phi\!\left(\frac{x-\varepsilon}{\omega}\right)\Phi\!\left(\lambda\,\frac{x-\varepsilon}{\omega}\right) \qquad (8)$$

where $\phi(x) = \dfrac{e^{-x^2/2}}{\sqrt{2\pi}}$ and $\Phi(x) = \displaystyle\int_{-\infty}^{x}\phi(t)\,dt$.
with $\lambda$ the shape parameter which determines the skewness, $\varepsilon$ and $\omega$ the usual
location and scale parameters, and $\phi$, $\Phi$ denoting, respectively, the pdf and the cdf of a standard
Gaussian RV.
The CDF of the skew normal distribution can be easily derived as:

$$F_X(x;\lambda,\varepsilon,\omega) = \Phi\!\left(\frac{x-\varepsilon}{\omega}\right) - 2\,T\!\left(\frac{x-\varepsilon}{\omega},\lambda\right) \qquad (9)$$

where the function $T(x,\lambda)$ is Owen's T function, expressed as:

$$T(x,\lambda) = \frac{1}{2\pi}\int_0^{\lambda}\frac{\exp\!\left(-\frac{1}{2}x^2(1+t^2)\right)}{1+t^2}\,dt \qquad (10)$$
A fast and accurate calculation of Owen’s T function is provided in [12].
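The identity (9) is easy to check numerically: SciPy (assumed installed; not part of the paper) exposes both the skew normal law and Owen's T function, so, for the standard case, $F_X = \Phi - 2T$ becomes one line:

```python
import numpy as np
from scipy.stats import skewnorm, norm
from scipy.special import owens_t

lam = 3.0                                     # shape parameter
x = np.linspace(-4, 4, 101)                   # standardized argument

# Eq. (9) for the standard case (eps = 0, omega = 1)
cdf_via_owen = norm.cdf(x) - 2 * owens_t(x, lam)

# Reference implementation from SciPy
cdf_ref = skewnorm.cdf(x, lam)

print(np.max(np.abs(cdf_via_owen - cdf_ref)))   # agreement to numerical precision
```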
Similar to the relation between the normal and lognormal distributions, given a skew normal RV X,
then $L = 10^{X_{dB}/10}$ is a log skew normal RV. The pdf and cdf of L can be easily derived as:

$$f_L(l;\lambda,\varepsilon_{dB},\omega_{dB}) = \begin{cases} \dfrac{2}{\xi\,\omega_{dB}\, l}\,\phi\!\left(\dfrac{10\log_{10} l-\varepsilon_{dB}}{\omega_{dB}}\right)\Phi\!\left(\lambda\,\dfrac{10\log_{10} l-\varepsilon_{dB}}{\omega_{dB}}\right) & l > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (11)$$

$$F_L(l;\lambda,\varepsilon_{dB},\omega_{dB}) = \begin{cases} \Phi\!\left(\dfrac{10\log_{10} l-\varepsilon_{dB}}{\omega_{dB}}\right) - 2\,T\!\left(\dfrac{10\log_{10} l-\varepsilon_{dB}}{\omega_{dB}},\lambda\right) & l > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (12)$$

where $\varepsilon = \xi\,\varepsilon_{dB}$, $\omega = \xi\,\omega_{dB}$ and $\xi = \ln(10)/10$.
The Moment Generating Function (MGF) of the skew normal distribution may be written as [11]:

$$M_X(t) = E\!\left[e^{tX}\right] = 2\,e^{\varepsilon t + \omega^2 t^2/2}\,\Phi(\delta\,\omega\, t), \qquad \delta = \frac{\lambda}{\sqrt{1+\lambda^2}} \qquad (13)$$

Thus the first two central moments of L are:

$$m = 2\,e^{\varepsilon + \omega^2/2}\,\Phi(\delta\,\omega) \qquad (14)$$

$$D^2 = 2\,e^{2\varepsilon + \omega^2}\left(e^{\omega^2}\,\Phi(2\,\delta\,\omega) - 2\,\Phi^2(\delta\,\omega)\right) \qquad (15)$$
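The moments (14)-(15) can likewise be verified by simulation, drawing X from scipy.stats.skewnorm and exponentiating (a sketch with illustrative parameter values of our own choosing):

```python
import numpy as np
from scipy.stats import skewnorm, norm

rng = np.random.default_rng(2)
lam, eps, omega = 2.0, 0.1, 0.5               # shape, location, scale (natural-log units)
delta = lam / np.sqrt(1 + lam**2)

# Closed-form moments of L = e^X, X ~ SN(lam, eps, omega), eqs. (14)-(15)
mean_L = 2 * np.exp(eps + omega**2 / 2) * norm.cdf(delta * omega)
var_L = 2 * np.exp(2 * eps + omega**2) * (
    np.exp(omega**2) * norm.cdf(2 * delta * omega) - 2 * norm.cdf(delta * omega) ** 2
)

# Monte Carlo estimate
L = np.exp(skewnorm.rvs(lam, loc=eps, scale=omega, size=1_000_000, random_state=rng))
print(L.mean(), mean_L)
print(L.var(), var_L)
```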
4. VALIDITY OF LOGNORMAL ASSUMPTION FOR THE SUM OF LOGNORMAL
RVS AT DISTRIBUTION TAILS
Several approximation methods for the sum of correlated lognormal RVs are based on the fact that
this sum can be approximated, at least to first order, by another lognormal distribution. On the
one hand, Szyszkowicz and Yanikomeroglu [14] have published a limit theorem which states that
the distribution of a sum of identically distributed, equally and positively correlated lognormal
RVs converges, in distribution, to a lognormal distribution as N becomes large. This limit
theorem is extended in [15] to the sum of correlated lognormal RVs having a particular
correlation structure.
On the other hand, some recent results [16, Theorems 1 and 3] show that the sum of lognormal
RVs exhibits a different behaviour at the upper and lower tails, even in the case of identically
distributed lognormal RVs. Although the lognormal distribution has a symmetric behaviour in
both tails, this is not in contradiction with the results proven in [14-15], since convergence is proved
in distribution, i.e., convergence at every point x, not in the limit behaviour.
This explains why some lognormal-based methods provide good accuracy only in the lower tail
(e.g. Schwartz and Yeh), while some other methods provide acceptable accuracy in the upper
tail (e.g. Fenton-Wilkinson). This asymmetry in the behaviour of the sum of lognormal RVs at
the lower and upper tails motivates us to use the Log Skew Normal distribution, as it represents the
asymmetric version of the lognormal distribution. We therefore expect the LSN approximation to
provide the needed accuracy over the whole region, including both tails, of the distribution of the
sum of correlated lognormal RVs.
5. LOG SKEW NORMAL PARAMETERS DERIVATION
5.1. Tail properties of the sum of correlated lognormal RVs
Let L be an N-dimensional lognormal vector with parameters $\mu$ and $M$. Let $B = M^{-1}$ be the inverse
of the covariance matrix.
To study the tail behaviour of the sum of correlated lognormal RVs, it is convenient to work on
the lognormal probability scale [13], i.e., under the transformation G:

$$G: F(x) \;\mapsto\; \Phi^{-1}\!\left(F(e^{x})\right) \qquad (16)$$

We note that under this transformation, the lognormal distribution is mapped onto a straight line.
We define $B_i$ as the i-th row sum of B:

$$B_i = \sum_{k=1}^{N} B(i,k), \qquad 1 \le i \le N \qquad (17)$$
Let:

$$\bar{N} = \mathrm{Card}\{\, i \in \{1,\ldots,N\} : B_i > 0 \,\}$$

$$I = \{\, i \in \{1,\ldots,N\} : B_i > 0 \,\} = \{k(1), k(2), \ldots, k(\bar{N})\}$$

We define $\bar{\mu}$, $\bar{M}$, $\bar{B}$ and $\bar{B}_i$ such that:

$$\bar{\mu}(i) = \mu(k(i)), \qquad \bar{M}(i,j) = M(k(i),k(j)), \qquad \bar{B} = \bar{M}^{-1}, \qquad \bar{B}_i = \sum_{k=1}^{\bar{N}} \bar{B}(i,k)$$

Since the variables $L_i$ are exchangeable, we can assume for the covariance matrix B, with no loss of
generality, that $I = \{1,2,\ldots,\bar{N}\}$ with $\bar{N} \le N$.
Let $\bar{w} \in \mathbb{R}^{\bar{N}}$ such that:

$$\bar{w} = \frac{\bar{B}\,\mathbf{1}}{\mathbf{1}^{T}\bar{B}\,\mathbf{1}} \qquad (18)$$

so that we may write:

$$\bar{w}_i = \frac{\bar{B}_i}{\sum_{j=1}^{\bar{N}} \bar{B}_j}, \qquad i = 1,\ldots,\bar{N} \qquad (19)$$

Now, we define $w \in \mathbb{R}^{N}$ as

$$w_i = \begin{cases} \bar{w}_i & \text{if } i \le \bar{N} \\ 0 & \text{otherwise} \end{cases} \qquad (20)$$
Assuming that for every $i \in \{1,2,\ldots,N\} \setminus I$:

$$(e_i - w)^{T} B\, w \;\ge\; 0 \qquad (21)$$

where $e_i \in \mathbb{R}^{N}$ satisfies $e_i^{j} = 1$ if $i = j$ and $e_i^{j} = 0$ otherwise.
In [16, Theorem 3], Gulisashvili and Tankov proved that, when assumption (21) is valid, the slope of the right tail of the SCLN cdf
on the lognormal probability scale is equal to $1/\max_i \sqrt{B(i,i)}$:

$$\lim_{x\to\infty} \frac{G(F_{SCLN})(x)}{x} = \frac{1}{\max_i \sqrt{B(i,i)}} \qquad (22)$$
Considering the left tail, they proved that the slope of the left tail of the SCLN cdf on the
lognormal probability scale is equal to $\sqrt{\sum_{i=1}^{\bar{N}} \bar{B}_i}$ [16, Theorem 1]:

$$\lim_{x\to-\infty} \frac{G(F_{SCLN})(x)}{x} = \sqrt{\sum_{i=1}^{\bar{N}} \bar{B}_i} \qquad (23)$$
In general, we can assume that $B_i > 0$ for $1 \le i \le N$, so that $\bar{N} = N$ and the tail slopes can be
expressed as:

$$\lim_{x\to\infty} \frac{G(F_{SCLN})(x)}{x} = \frac{1}{\max_i \sqrt{B(i,i)}} \qquad (24)$$

$$\lim_{x\to-\infty} \frac{G(F_{SCLN})(x)}{x} = \sqrt{\sum_{i=1}^{N} B_i} \qquad (25)$$
5.2. Tail properties of the Log Skew Normal
In [17], it has been shown that the rate of decay of the right tail probability of a skew normal
distribution is equal to that of a normal variate, while the left tail probability decays to zero faster.
This result has been confirmed in [18]. Based on that, it is easy to show that the rate of decay of
the right tail probability of a log skew normal distribution is equal to that of a lognormal variate.
Under the transformation G, the log skew normal distribution has a linear asymptote in the upper limit
with slope:

$$\lim_{x\to\infty} \frac{G(F_{LSN})(x)}{x} = \frac{1}{\omega} \qquad (26)$$

In the lower limit, it has no linear asymptote, but does have a limiting slope:

$$\lim_{x\to-\infty} \frac{G(F_{LSN})(x)}{x} = \frac{\sqrt{1+\lambda^2}}{\omega} \qquad (27)$$
These results are proved in [8, Appendix A]. Therefore, it will be possible to match the tail slopes
of the LSN with those of the sum of correlated lognormal RVs distribution in order to find LSN
optimal parameters.
5.3. Moments and lower tail slope matching
In order to derive the optimal log skew normal parameters, we proceed by matching the first two central
moments of both distributions. Furthermore, we use a lower tail slope match. By simulation, we
point out that the upper tail slope match is valid only for the sum of a high number of lognormal RVs.
However, we still need it to find a good starting guess for the generated nonlinear
equation. Thus we define $\lambda_{opt}$ as the solution of the following nonlinear equation:
$$\frac{e^{\omega^2}\,\Phi(2\,\delta\,\omega) - 2\,\Phi^2(\delta\,\omega)}{2\,\Phi^2(\delta\,\omega)} \;=\; \frac{\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N} e^{\mu_i+\mu_j}\, e^{\frac{1}{2}(\sigma_i^2+\sigma_j^2)}\left(e^{M(i,j)}-1\right)}{\left(\displaystyle\sum_{i=1}^{N} e^{\mu_i}\, e^{\sigma_i^2/2}\right)^{2}}, \qquad \omega^2 = \frac{1+\lambda^2}{\sum_{i=1}^{N} B_i}, \quad \delta = \frac{\lambda}{\sqrt{1+\lambda^2}} \qquad (28)$$

This equates the ratio $D^2/m^2$ of the LSN distribution, from (14)-(15), to that of the sum of correlated lognormals, from (6)-(7), with $\omega$ eliminated through the lower tail slope match between (25) and (27).
Such a nonlinear equation can be solved with standard numerical routines (e.g. fsolve in Matlab).
Using the upper tail slope match, we derive a starting guess $\lambda_0$ for (28) in order to converge
rapidly (only a few iterations are needed):

$$\lambda_0 = \sqrt{\max_i\{B(i,i)\}\,\sum_{i=1}^{N} B_i \;-\; 1} \qquad (29)$$
The optimal location and scale parameters $\varepsilon_{opt}$, $\omega_{opt}$ are obtained from $\lambda_{opt}$ as:

$$\omega_{opt} = \sqrt{\frac{1+\lambda_{opt}^2}{\sum_{i=1}^{N} B_i}}, \qquad \varepsilon_{opt} = \ln\!\left(\sum_{i=1}^{N} e^{\mu_i}\, e^{\sigma_i^2/2}\right) - \ln\!\left(2\,\Phi(\delta_{opt}\,\omega_{opt})\right) - \frac{\omega_{opt}^2}{2} \qquad (30)$$
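Putting (28)-(30) together, the whole estimation procedure fits in a few lines. The sketch below follows our reconstruction of those equations (function and variable names are ours, and SciPy's brentq stands in for Matlab's fsolve): ω is eliminated through the lower tail slope match and (28) is solved by bracketing:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def fit_lsn(mu, M):
    """Fit SN(lam, eps, omega) to the log of a sum of correlated lognormals.

    mu : natural-log mean vector; M : covariance matrix of the Gaussian vector X.
    Returns (lam, eps, omega) in natural-log units.
    """
    s2 = np.diag(M)
    S = np.linalg.inv(M).sum()                     # sum of the row sums B_i

    # Target moments of the sum, eqs. (6)-(7)
    m = np.sum(np.exp(mu + s2 / 2))
    mi, mj = np.meshgrid(mu, mu, indexing="ij")
    si2, sj2 = np.meshgrid(s2, s2, indexing="ij")
    D2 = (np.exp(mi + mj + (si2 + sj2) / 2) * (np.exp(M) - 1)).sum()
    target = D2 / m**2

    # LSN variance/mean^2 ratio with omega tied to lam by the lower tail match
    def ratio(lam):
        omega2 = (1 + lam**2) / S
        d_om = lam / np.sqrt(S)                    # delta * omega
        num = np.exp(omega2) * norm.cdf(2 * d_om) - 2 * norm.cdf(d_om) ** 2
        return num / (2 * norm.cdf(d_om) ** 2)

    lam = brentq(lambda l: ratio(l) - target, 1e-9, 100.0)   # eq. (28)
    omega = np.sqrt((1 + lam**2) / S)
    delta = lam / np.sqrt(1 + lam**2)
    eps = np.log(m) - np.log(2 * norm.cdf(delta * omega)) - omega**2 / 2  # eq. (30)
    return lam, eps, omega

# Illustrative case: 6 equicorrelated lognormals, sigma = 3 dB, rho = 0.5, mu = 0 dB
xi = np.log(10) / 10
sig = xi * np.full(6, 3.0)
M_ex = 0.5 * np.outer(sig, sig)
np.fill_diagonal(M_ex, sig**2)
lam_o, eps_o, omega_o = fit_lsn(np.zeros(6), M_ex)
print(lam_o, eps_o, omega_o)
```

With these parameters the LSN mean (14) reproduces the exact sum mean (6) by construction.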
6. SIMULATION RESULTS
In this section, we propose to validate our approximation method and compare it with other
widely used approximation methods. Fig. 1 and Fig. 2 show the results for the sum of
20 independent lognormal RVs ($\rho = 0$) with mean 0 dB and standard deviation 3 dB and 6 dB. The
CDFs are plotted on the lognormal probability scale [13]. Simulation results show that the accuracy of
our approximation gets better as the number of lognormal distributions increases.
[Figure 1 plot: CDF of the sum of 20 lognormal RVs (μ = 0 dB, σ = 3 dB) on lognormal probability scale; curves: Simulation, Log Skew Normal, Fenton-Wilkinson, MPLN.]
Figure 1. CDF of a sum of 20 i.i.d. lognormal RVs with = 0 dB and = 3dB.
We can see that the LSN approximation offers accuracy over the entire body of the distribution in
both cases. In Fig. 3, we consider the sum of 12 independent lognormal RVs having the same
standard deviation of 6 dB but different means. It is clear that the LSN approximation catches the
entire body of the SLN distribution. In this case, both LSN and MPLN provide a tight approximation
to the SLN distribution. However, the LSN approximation outperforms the MPLN approximation in the lower
region. Since interference modelling belongs to this case (i.e. same standard deviation with
different means), it is important to point out that the log skew normal distribution outperforms the other
methods in this case.
Fig. 4 shows the case of the sum of 6 independent lognormal RVs having the same mean 0 dB but
different standard deviations. We can see that the Fenton-Wilkinson approximation can
only fit part of the sum distribution, while MPLN offers accuracy in the left part of
the SLN distribution. However, it is obvious that the LSN method provides a tight approximation to
the SLN distribution except for a small part of the right tail. It is worth noting that in all cases, the log
skew normal distribution provides a very tight approximation to the SLN distribution in the region of
CDF less than 0.999.
[Figure 2 plot: CDF of the sum of 20 lognormal RVs (μ = 0 dB, σ = 6 dB); curves: Simulation, Log Skew Normal, Fenton-Wilkinson, MPLN.]
Figure 2. CDF of a sum of 20 i.i.d. lognormal RVs with = 0 dB and = 6dB.
[Figure 3 plot: CDF of the sum of 12 lognormal RVs (μ = [-12 -10 ... 10 12] dB, σ = 6 dB); curves: Simulation, Log Skew Normal, Fenton-Wilkinson, MPLN.]
Figure 3. CDF of a sum of 12 lognormal RVs with μ = [-12 -10 -8 ... 8 10 12] dB and σ = 6 dB.
The comparison of the Complementary CDF (CCDF) of the sum of N
(N = 2, 8, 20) correlated lognormal RVs, $P(\Lambda > \gamma)$, obtained by Monte Carlo simulation with the LSN
approximation and the lognormal-based methods, for a low value of the standard deviation (σ = 3 dB) with
μ = 0 dB and ρ = 0.7, is shown in Fig. 5. Although these methods are known for their accuracy in the
lower range of standard deviations, it is obvious that the LSN approximation outperforms them. We note
that the fluctuation at the tail of the simulated distribution is due to the Monte Carlo simulation,
since we consider $10^7$ samples at every turn. We can see that the LSN approximation results are
identical to the Monte Carlo simulation results.
[Figure 4 plot: CDF of the sum of 6 lognormal RVs (μ = 0 dB, σ = [1 2 3 4 5 6] dB); curves: Simulation, Log Skew Normal, Fenton-Wilkinson, MPLN.]
Figure 4. CDF of a sum of 6 lognormal RVs with μ = 0 dB and σ = [1 2 3 4 5 6] dB.
[Figure 5 plot: Complementary CDF of the sum of N correlated lognormal RVs (μ = 0 dB, σ = 3 dB, ρ = 0.7), N = 2, 8, 20; curves: Wilkinson, Ext. Sch. and Yeh, Log Skew Normal, Simulation.]
Figure 5. Complementary CDF of the sum of N correlated lognormal RVs with μ = 0 dB, σ = 3 dB and ρ = 0.7
Fig. 6 and Fig. 7 show the complementary CDF of the sum of N correlated lognormal RVs for
higher values of the standard deviation (σ = 6, 9 dB) and two values of the correlation coefficient
(ρ = 0.9, 0.3). We consider the Log Shifted Gamma approximation for comparison purposes. We can see that the LSN
approximation highly outperforms the other methods, especially at the right tail of the CDF
($1 - \mathrm{CDF} < 10^{-2}$).
Furthermore, the LSN approximation gives exact Monte Carlo simulation results even in the low range
of the complementary CDF of the sum of correlated lognormal RVs ($1 - \mathrm{CDF}$ down to $10^{-6}$).
To further verify the accuracy of our method in the left tail as well as the right tail of the CDF of the
sum of correlated lognormal RVs, Fig. 8 and Fig. 9 show, respectively, the CDF $P(\Lambda < \gamma)$ of the
sum of 6 correlated lognormal RVs with μ = 0 dB and ρ = 0.7,
[Figure 6 plot: Complementary CDF of the sum of N correlated lognormal RVs (μ = 0 dB, σ = 6 dB, ρ = 0.9), N = 8, 20; curves: Wilkinson, Extended Sch. and Yeh, Log Shifted Gamma, Log Skew Normal, Simulation.]
Figure 6. Complementary CDF of the sum of N correlated lognormal RVs with μ = 0 dB, σ = 6 dB and ρ = 0.9
[Figure 7 plot: Complementary CDF of the sum of N correlated lognormal RVs (μ = 0 dB, σ = 9 dB, ρ = 0.3), N = 10, 30, 60; curves: Ext. Sch. and Yeh, Log Shifted Gamma, Log Skew Normal, Simulation.]
Figure 7. Complementary CDF of the sum of N correlated lognormal RVs with μ = 0 dB, σ = 9 dB and ρ = 0.3
and the complementary CDF of the sum of 20 correlated lognormal RVs with μ = 0 dB and ρ = 0.3 for
different standard deviation values. It is obvious that the LSN approximation provides exact Monte
Carlo simulation results at the left tail as well as the right tail. We notice that the accuracy does not
depend on the standard deviation values, as LSN performs well for the highest values of the shadowing
standard deviation.
To point out the effect of the correlation coefficient value on the accuracy of the proposed
approximation, we consider the CDF of the sum of 12 correlated lognormal RVs for different
combinations of standard deviation and correlation coefficient values with μ = 0 dB (Fig. 10). One
can see that the efficiency of the LSN approximation does not depend on the correlation coefficient or
the standard deviation values; the LSN approximation provides the same results as the Monte Carlo
simulations in all cases.
[Figure 8 plot: CDF of the sum of 6 correlated lognormal RVs (μ = 0 dB, ρ = 0.7), σ = 3, 6, 9, 12, 15, 20 dB; curves: Log Skew Normal, Simulation.]
Figure 8. CDF of the sum of 6 correlated lognormal RVs with μ = 0 dB and ρ = 0.7
[Figure 9 plot: Complementary CDF of the sum of 20 correlated lognormal RVs (μ = 0 dB, ρ = 0.3), σ = 3, 6, 9, 12, 15, 20 dB; curves: Log Skew Normal, Simulation.]
Figure 9. Complementary CDF of the sum of 20 correlated lognormal RVs with μ = 0 dB and ρ = 0.3
7. APPLICATION: OUTAGE PROBABILITY IN LOGNORMAL SHADOWING
ENVIRONMENT
In this section, we provide an example of outage probability calculation in a lognormal shadowing
environment based on the log skew normal approximation.
We consider a homogeneous hexagonal network made of 18 rings around a central cell. Fig. 11
shows an example of such a network with the main parameters involved in the study: R, the cell
range (1 km), and Rc, the half-distance between BSs. We focus on a mobile station (MS) u and its
serving base station (BS), $BS_i$, surrounded by M interfering BSs.
To evaluate the outage probability, the noise is usually ignored for simplicity, given its negligible
contribution. Only inter-cell interference is considered. Assuming that all BSs have identical
transmit powers, the SINR at u can be written in the following way:
[Figure 10 plot: CDF of the sum of 12 correlated lognormal RVs (μ = 0 dB) for combinations of ρ = 0.1, 0.4, 0.8 and σ = 6, 10, 15 dB; curves: Log Skew Normal, Simulation.]
Figure 10. CDF of the sum of 12 correlated lognormal RVs with different correlation coefficients ρ and μ = 0 dB
$$\mathrm{SINR} = \frac{S}{I + N} = \frac{P\,K\, r_{i,u}^{-\eta}\, Y_{i,u}}{\displaystyle\sum_{j=0,\, j\ne i}^{M} P\,K\, r_{j,u}^{-\eta}\, Y_{j,u} + N} \qquad (31)$$
The path-loss model is characterized by parameters K and η > 2. The term $P\,K\,r_{l,u}^{-\eta}$ is the mean
value of the received power at distance $r_{l,u}$ from the transmitter $BS_l$. The shadowing effect is
represented by the lognormal random variable $Y_{l,u} = 10^{x_{l,u}/10}$, where $x_{l,u}$ is a normal RV with zero mean
and standard deviation σ, typically ranging from 3 to 12 dB.
Figure 11. Hexagonal network and main parameters
The outage probability is now defined as the probability for the SINR to be lower than a
threshold value $\gamma_{th}$:

$$P_{out}(\gamma_{th}) = P(\mathrm{SINR} \le \gamma_{th}) = P\!\left(\frac{r_{i,u}^{-\eta}\, Y_{i,u}}{\sum_{j=0,\, j\ne i}^{M} r_{j,u}^{-\eta}\, Y_{j,u}} \le \gamma_{th}\right) = P\!\left(P_{int,dB} - P_{ext,dB} \le 10\log_{10}\gamma_{th}\right) \qquad (32)$$

where:
$P_{int,dB} = 10\log_{10}\!\left(r_{i,u}^{-\eta}\, Y_{i,u}\right)$ is a normal RV with mean $m = 10\log_{10}(r_{i,u}^{-\eta})$ and standard deviation σ;
$P_{ext,dB} = 10\log_{10}\!\left(\sum_{j=0,\, j\ne i}^{M} r_{j,u}^{-\eta}\, Y_{j,u}\right)$ is approximated by a skew normal RV with distribution $SN(\lambda, \varepsilon, \omega)$, obtained with the proposed method.
15. International Journal of Computer Networks & Communications (IJCNC) Vol.7, No.1, January 2015
149
It is easy to show that the difference $P_{int,dB} - P_{ext,dB}$ has a skew normal distribution $SN(\lambda_1, \varepsilon_1, \omega_1)$,
where:

$$\varepsilon_1 = m - \varepsilon, \qquad \omega_1^2 = \sigma^2 + \omega^2, \qquad \lambda_1 = \frac{-\lambda}{\sqrt{(1+\lambda^2)\left(\dfrac{\sigma}{\omega}\right)^{2} + 1}} \qquad (33)$$
Fig. 12 and Fig. 13 show the outage probability at the cell edge (r = Rc) and inside the cell (r = Rc/2),
respectively, for σ = 3 dB and 6 dB, assuming η = 3. The difference between the analytical and simulation
results is less than a few tenths of a dB. This accuracy confirms that the LSN approximation considered in this
work is efficient for the interference calculation process.
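A minimal numerical sketch of this computation (our own illustration, not the paper's code; it uses eq. (33) as reconstructed above, assumes SciPy, and all parameter values are hypothetical):

```python
import numpy as np
from scipy.stats import skewnorm

def outage_probability(gamma_dB, m, sigma, lam, eps, omega):
    """P(SINR <= gamma) when P_int,dB ~ N(m, sigma^2) and P_ext,dB ~ SN(lam, eps, omega).

    The difference P_int,dB - P_ext,dB is skew normal with parameters from eq. (33),
    so the outage probability is a single skew normal cdf evaluation.
    """
    eps1 = m - eps
    omega1 = np.sqrt(sigma**2 + omega**2)
    lam1 = -lam / np.sqrt((1 + lam**2) * (sigma / omega) ** 2 + 1)
    return skewnorm.cdf(gamma_dB, lam1, loc=eps1, scale=omega1)

# Hypothetical dB-domain parameters for illustration
p = outage_probability(gamma_dB=0.0, m=10.0, sigma=6.0, lam=2.0, eps=8.0, omega=5.0)
print(p)
```

Evaluating the closed-form cdf this way replaces the Monte Carlo step in the outage computation.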
[Figure 12 plot: outage probability versus threshold for a mobile located at r = Rc (a) and r = Rc/2 (b), σ = 3 dB; curves: Simulation, LSN Approximation.]
Figure 12. Outage probability for a mobile located at r = Rc (a), r = Rc/2 (b), σ = 3 dB, η = 3
[Figure 13 plot: outage probability versus threshold for a mobile located at r = Rc (a) and r = Rc/2 (b), σ = 6 dB; curves: Simulation, LSN Approximation.]
Figure 13. Outage probability for a mobile located at r = Rc (a), r = Rc/2 (b), σ = 6 dB, η = 3
8. CONCLUSIONS
In this paper, we proposed to use the Log Skew Normal distribution to approximate the distribution of the
sum of correlated lognormal RVs. Our fitting method uses a moments and tail-slope
matching technique to derive the LSN distribution parameters. The LSN approximation provides results
identical to Monte Carlo simulations and thus outperforms the other methods in all cases. Using an
example of outage probability calculation in a lognormal shadowing environment, we showed that the
LSN approximation considered in this work is efficient for the interference calculation process.
REFERENCES
[1] A. Safak, “Statistical analysis of the power sum of multiple correlated log-normal components”, IEEE
Trans. Veh. Tech., vol. 42, pp. 58–61, Feb. 1993.
[2] S. Schwartz and Y. S. Yeh, “On the distribution function and moments of power sums with log-normal
components”, Bell System Tech. J., vol. 61, pp. 1441–1462, Sept. 1982.
[3] M. Pratesi , F. Santucci , F. Graziosi and M. Ruggieri "Outage analysis in mobile radio systems with
generically correlated lognormal interferers", IEEE Trans. Commun., vol. 48, no. 3, pp.381 -385
2000.
[4] A. Safak and M. Safak "Moments of the sum of correlated log-normal random variables", Proc. IEEE
44th Vehicular Technology Conf., vol. 1, pp.140 -144 1994.
[5] C. L. Joshua Lam and Tho Le-Ngoc, “Outage Probability with Correlated Lognormal Interferers using
Log Shifted Gamma Approximation”, Wireless Personal Communications, vol. 41, no. 2,
pp. 179–192, April 2007.
[6] Z. Wu, X. Li, R. Husnay, V. Chakravarthy, B. Wang, and Z. Wu. A novel highly accurate log skew
normal approximation method to lognormal sum distributions. In Proc. of IEEE WCNC 2009.
[7] X. Li, Z. Wu, V. D. Chakravarthy, and Z. Wu, "A low complexity approximation to lognormal sum
distributions via transformed log skew normal distribution," IEEE Transactions on Vehicular
Technology, vol. 60, pp. 4040-4045, Oct. 2011.
[8] M. Benhcine, R. Bouallegue, “Fitting the Log Skew Normal to the Sum of Independent Lognormals
Distribution”, accepted in the sixth International Conference on Networks and Communications
(NetCom-2014) .
[9] S. Kotz, N. Balakrishnan and N. L. Johnson, "Continuous Multivariate Distributions", Volume
1, Second Edition, John Wiley & Sons, April 2004.
[10] A. O’Hagan and T. Leonard, “Bayes estimation subject to uncertainty about parameter constraints”,
Biometrika, 63, 201–202, 1976.
[11] A. Azzalini, “A class of distributions which includes the normal ones”, Scand. J. Statist., 12, 171–178,
1985.
[12] M. Patefield, “Fast and accurate calculation of Owen’s T function,” J. Statist. Softw., vol. 5, no. 5,
pp. 1–25, 2000.
[13] N. C. Beaulieu and Q. Xie, “Minimax approximation to lognormal sum distributions,” in Proc. IEEE
VTC Spring, vol. 2, pp. 1061–1065, April 2003.
[14] S. S. Szyszkowicz and H. Yanikomeroglu, “Limit theorem on the sum of identically distributed
equally and positively correlated joint lognormals,” IEEE Trans. Commun., vol. 57, no. 12,
pp. 3538–3542, 2009.
[15] N. C. Beaulieu, “An extended limit theorem for correlated lognormal sums,” IEEE Trans. Commun.,
vol. 60, no. 1, pp. 23–26, Jan. 2012.
[16] A. Gulisashvili and P. Tankov, “Tail behavior of sums and differences of log-normal random
variables,” arXiv preprint, Sept. 2013.
[17] A. Capitanio, “On the approximation of the tail probability of the scalar skew-normal
distribution,” Metron, 2010.
[18] W. Hürlimann, “Tail approximation of the skew-normal by the skew-normal Laplace: Application
to Owen’s T function and the bivariate normal distribution,” Journal of Statistical and Econometric
Methods, vol. 2, no. 1, pp. 1–12, Scienpress Ltd, 2013.
International Journal of Computer Networks & Communications (IJCNC) Vol. 7, No. 1, January 2015
Authors
Marwane Ben Hcine was born in Kébili, Tunisia, on January 02, 1985. He graduated in
Telecommunications Engineering from the Tunisian Polytechnic School (TPS) in July 2008.
In June 2010, he received the master’s degree of research in communication systems from the
Higher School of Communication of Tunisia (Sup’Com), where he is currently a Ph.D. student.
His research interests include network design and dimensioning for LTE and beyond
technologies.
Prof. Ridha Bouallegue was born in Tunis, Tunisia. He received the M.S. degree in
telecommunications in 1990, the Ph.D. degree in telecommunications in 1994, and the
HDR degree in telecommunications in 2003, all from the National School of Engineering of
Tunis (ENIT), Tunisia. He is the founder and was director of the National Engineering School of
Sousse in 2005, and director of the School of Technology and Computer Science in 2010. Currently,
Prof. Ridha Bouallegue is the director of the Innovation of COMmunicant and COoperative Mobiles
Laboratory, INNOV’COM, Sup’Com, Higher School of Communication. His current research interests
include mobile and cooperative communications, access techniques, intelligent signal processing,
CDMA, MIMO, OFDM, and UWB systems.