This paper describes a novel audio coding algorithm that is a building block in the recently standardized 3GPP EVS codec. The presented scheme operates in the Modified Discrete Cosine Transform (MDCT) domain and deploys a Split-PVQ pulse coding quantizer, a noise-fill, and a gain control optimized for the quantizer’s properties. A complexity analysis in terms of WMOPS is presented to illustrate that the proposed Split-PVQ concept and dynamic range optimized MPVQ-indexing are suitable for real-time audio coding.
Improving The Performance of Viterbi Decoder using Window System IJECEIAES
An efficient Viterbi decoder, called the Viterbi decoder with window system, is introduced in this paper. Simulations over Gaussian channels are performed for rates 1/2, 1/3, and 2/3 combined with TCM encoders of memory order 2 and 3. The results show that the proposed scheme outperforms the classical Viterbi decoder by a gain of 1 dB. We also propose a function called RSCPOLY2TRELLIS for recursive systematic convolutional (RSC) encoders, which creates the trellis structure of an RSC encoder from the matrix "H". Moreover, we present a comparison between decoding algorithms for the TCM encoder, such as soft- and hard-decision Viterbi, and the variants of the MAP decoder known as the BCJR or forward-backward algorithm, which performs very well in decoding TCM but depends on the code size, the memory, and the CPU requirements of the application.
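As context for the comparison above, a minimal hard-decision Viterbi decoder for a textbook rate-1/2, memory-2 convolutional code (octal generators 7 and 5) can be sketched as follows; this is the classical baseline, not the paper's window-system variant:

```python
# Hard-decision Viterbi decoding of the rate-1/2, memory-2 convolutional
# code with octal generators (7, 5).  Classical baseline only -- the
# paper's window system is not modeled here.

G = [0b111, 0b101]  # generator taps; register holds [newest .. oldest] bits

def conv_encode(bits):
    state = 0                               # two previous input bits
    out = []
    for b in bits:
        reg = (b << 2) | state              # 3-bit window of inputs
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                    # drop the oldest bit
    return out

def viterbi_decode(received, n_bits):
    INF = float("inf")
    metrics = [0, INF, INF, INF]            # Hamming path metric per state
    paths = [[], [], [], []]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_m = [INF] * 4
        new_p = [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + sum(e != x for e, x in zip(exp, r))
                ns = reg >> 1
                if m < new_m[ns]:           # keep the survivor path
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best]
```

With no channel errors the decoder returns the input exactly, and with a single flipped channel bit early in a long enough message it still recovers the input, consistent with the code's free distance of 5.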
Fixed Point Realization of Iterative LR-Aided Soft MIMO Decoding AlgorithmCSCJournals
Multiple-input multiple-output (MIMO) systems have been widely adopted to provide high data rates. Recently, lattice reduction (LR) aided detectors have been proposed to achieve near maximum likelihood (ML) performance with low complexity. In this paper, we develop the fixed-point design of an iterative soft-decision-based LR-aided K-best decoder, which reduces the complexity of the existing sphere decoder. A simulation-based word-length optimization is presented for physical implementation of the K-best decoder. Simulations show that a fixed-point implementation with 16-bit precision can keep the bit error rate (BER) degradation within 0.3 dB for 8×8 MIMO systems with different modulation schemes.
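The word-length optimization described above amounts to rounding and saturating values to a fixed-point format and measuring the resulting error. A toy sketch of 16-bit quantization (the Q-format and sample values here are illustrative, not the paper's):

```python
# Toy fixed-point quantization in the spirit of a word-length study:
# values are rounded and saturated to a signed Q3.12 format (16 bits,
# 12 fractional) and the quantization error is measured.

def to_fixed(x, total_bits=16, frac_bits=12):
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))           # most negative code
    hi = (1 << (total_bits - 1)) - 1        # most positive code
    q = max(lo, min(hi, round(x * scale)))  # round, then saturate
    return q / scale                        # back to float for comparison

vals = [0.7071, -0.5, 1.25, -1.9999]
quantized = [to_fixed(v) for v in vals]
max_err = max(abs(v - q) for v, q in zip(vals, quantized))
```

For in-range values the error is bounded by half an LSB (2^-13 here); out-of-range values clip to the saturation limits, which is the effect a word-length sweep is designed to expose.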
Efficient pu mode decision and motion estimation for h.264 avc to hevc transc...sipij
H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes a fast prediction unit (PU) decision and fast motion estimation. Given the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of motion vectors (MVs) extracted from H.264/AVC can be reused to predict the current encoding PU of HEVC. Furthermore, the MVs from H.264/AVC are used to decide the PU search range during motion estimation. Simulation results show that the proposed algorithm can save up to 53% of the encoding time while maintaining the rate-distortion (R-D) performance for HEVC.
Iterative Soft Decision Based Complex K-best MIMO DecoderCSCJournals
This paper presents an iterative soft-decision-based complex multiple-input multiple-output (MIMO) decoding algorithm, which reduces the complexity of the maximum likelihood (ML) detector. We develop a novel iterative complex K-best decoder exploiting lattice reduction techniques for 8×8 MIMO. Besides the list size, a new adjustable variable has been introduced to control the on-demand child expansion. Following this method, we obtain 6.9 to 8.0 dB improvement over the real-domain K-best decoder and 1.4 to 2.5 dB better performance compared to the iterative conventional complex decoder for the 4th iteration and the 64-QAM modulation scheme. We also demonstrate the significance of the new parameter on the bit error rate. The proposed decoder not only improves performance but also reduces the computational complexity to a certain level.
Projected Barzilai-Borwein Methods Applied to Distributed Compressive Spectru...Polytechnique Montreal
Cognitive radio allows unlicensed (cognitive) users to use licensed frequency bands by exploiting spectrum sensing techniques to detect whether or not the licensed (primary) users are present. In this paper, we present a compressed sensing approach applied to spectrum-occupancy detection in wide-band applications. The analog signals collected from each cognitive radio (CR) receiver at a fusion center are transformed into discrete-time signals using an analog-to-information converter (AIC) and then used to calculate the autocorrelation. For signal reconstruction, we exploit a novel approach to solve the optimization problem of minimizing both a quadratic (l2) error term and an l1-regularization term. Specifically, we propose the basic gradient projection (GP) and projected Barzilai-Borwein (PBB) algorithms, which offer better performance in terms of the mean squared error of the power spectral density estimate and the detection probability of licensed signal occupancy.
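The GP and PBB solvers themselves are not reproduced here, but the objective they minimize, 0.5*||Ax - b||^2 + lam*||x||_1, can be illustrated with the simplest proximal-gradient method (ISTA); the matrix and data below are a made-up toy problem, not the paper's spectrum-sensing setup:

```python
# Minimal ISTA (proximal-gradient) solver for the l1-regularized
# least-squares objective: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
# A generic sketch, not the paper's GP/PBB algorithms.

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*|.|"""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, b, lam, step, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b, then gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the l1 prox (soft-threshold)
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Toy problem: A = I, so the solution is b soft-thresholded by lam.
x_hat = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05], lam=0.1, step=0.5)
```

With A as the identity, the minimizer is the soft-thresholded data, here approximately [0.9, 0.0]: the small coefficient is driven exactly to zero, which is the sparsity-promoting behavior the l1 term provides.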
In this paper, low-complexity architectures for finding the first two maximum or minimum values are presented; these are of paramount importance in several applications, including iterative decoders. A property of the min-sum processing step is that it produces only two distinct output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architectures use the minimum number of comparators by exploiting the concept of survivors in the search, resulting in a reduced number of comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed, and power consumption of digital designs; by optimizing the multiplier we can minimize parameters such as latency, complexity, and power consumption. The decoding algorithms we propose generalize and unify the decoding schemes originally presented for product codes and those of low-density parity-check codes.
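The survivor idea mentioned above can be illustrated in software: a tournament finds the overall minimum with n-1 comparisons, and the second minimum must be among the few values that lost directly to the winner. A sketch of the search (not the paper's micro-architecture):

```python
# Finding the first two minima with a tournament: the runner-up can
# only be a value that lost a comparison directly to the winner --
# the "survivors" exploited by comparator-minimal min1/min2 units.

def two_smallest(values):
    # each node: (value, list of values that lost directly to it)
    nodes = [(v, []) for v in values]
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes) - 1, 2):
            (a, la), (b, lb) = nodes[i], nodes[i + 1]
            if a <= b:
                nxt.append((a, la + [b]))   # b lost to a
            else:
                nxt.append((b, lb + [a]))   # a lost to b
        if len(nodes) % 2:                  # odd element gets a bye
            nxt.append(nodes[-1])
        nodes = nxt
    winner, losers = nodes[0]
    return winner, min(losers)              # runner-up among survivors
```

Only ceil(log2 n) values ever lose directly to the winner, so the second minimum costs ceil(log2 n) - 1 extra comparisons instead of another full n-2 pass.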
A new efficient way based on special stabilizer multiplier permutations to at...IJECEIAES
BCH codes represent an important class of cyclic error-correcting codes; their minimum distances are known only in some cases, and determining them remains an open NP-hard problem in coding theory, especially for large lengths. This paper presents an efficient scheme, ZSSMP (Zimmermann Special Stabilizer Multiplier Permutation), to find the true value of the minimum distance for many large BCH codes. The proposed method consists in searching for a codeword of minimum weight, using the Zimmermann algorithm, in the subcodes fixed by special stabilizer multiplier permutations. These few subcodes have very small dimensions compared to the dimension of the considered code itself, so the search for a codeword of globally minimum weight is simplified in terms of run-time complexity. ZSSMP is validated on all BCH codes of length 255, for which it gives the exact value of the minimum distance. For BCH codes of length 511, the proposed technique considerably outperforms the well-known scheme of Canteaut and Chabaud used to attack code-based public-key cryptosystems. ZSSMP is very fast and finds the smallest-weight codewords in a few seconds. By exploiting the efficiency and speed of ZSSMP, the true minimum distances, and consequently the error-correcting capability, of the whole set of 165 BCH codes of length up to 1023 are determined, except for the two cases of the BCH(511,148) and BCH(511,259) codes. The comparison of ZSSMP with other powerful methods demonstrates its quality for attacking the hardness of the minimum-weight search problem, at least for the codes studied in this paper.
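For intuition on why restricted searches such as ZSSMP are needed: an exhaustive minimum-weight search enumerates all 2^k - 1 nonzero codewords, which is only feasible for tiny codes. A sketch on the (7,4) cyclic Hamming code with generator x^3 + x + 1 (a small stand-in example, not one of the paper's BCH codes):

```python
# Brute-force minimum-weight search over all nonzero codewords of the
# (7,4) binary cyclic Hamming code, generator g(x) = x^3 + x + 1.
# Feasible only because k = 4; for a BCH(511,259) code this loop would
# have ~2^259 iterations -- hence restricted searches like ZSSMP.

from itertools import product

def poly_mul_mod2(a, b):
    """Multiply two GF(2) polynomials given as bit lists (LSB first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

g = [1, 1, 0, 1]                       # x^3 + x + 1, LSB first
min_wt = min(
    sum(poly_mul_mod2(list(msg), g))   # codeword weight
    for msg in product((0, 1), repeat=4)
    if any(msg)
)
```

For this code the search returns minimum weight 3, the known minimum distance of the (7,4) Hamming code.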
Design and Performance Analysis of Convolutional Encoder and Viterbi Decoder ...IJERA Editor
In digital communication, forward error correction methods have great practical importance when the channel is noisy. Convolutional error-correcting codes can correct both types of errors, random and burst. Convolutional encoding has been used in digital communication systems including deep-space and wireless communication. The error-correction capability of a convolutional code depends on the code rate and constraint length. A low code rate and a high constraint length give more error-correction capability but also introduce large overhead. This paper introduces convolutional encoders for various constraint lengths; by increasing the constraint length, the error-correction capability can be increased. The performance and error correction also depend on the selection of the generator polynomial. This paper also introduces a good generator polynomial with high performance and error-correction capability.
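A feed-forward convolutional encoder parameterized by its generator polynomials, so that different constraint lengths can be tried, might look like the sketch below (the generators shown are common textbook choices, not necessarily the paper's):

```python
# Generic feed-forward convolutional encoder parameterized by octal
# generator polynomials.  The constraint length K is implied by the
# widest generator; the newest input bit sits in the register's LSB.

def conv_encoder(generators_octal):
    gens = [int(g, 8) for g in generators_octal]
    K = max(g.bit_length() for g in gens)        # constraint length
    def encode(bits):
        reg = 0
        out = []
        for b in bits:
            reg = ((reg << 1) | b) & ((1 << K) - 1)
            # one output bit per generator: parity of the tapped bits
            out += [bin(reg & g).count("1") & 1 for g in gens]
        return out
    return encode

rate_half_k3 = conv_encoder(["7", "5"])          # K = 3, rate 1/2
rate_third_k4 = conv_encoder(["13", "15", "17"]) # K = 4, rate 1/3
```

Raising the constraint length or lowering the rate adds redundancy, which is the overhead-versus-correction trade-off the abstract describes.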
Mimo radar detection in compound gaussian clutter using orthogonal discrete f...ijma
This paper proposes orthogonal discrete frequency coding space-time waveforms (DFCSTW) for multiple-input multiple-output (MIMO) radar detection in compound-Gaussian clutter. The proposed orthogonal waveforms are designed considering the position and angle of the transmitting antenna when viewed from the origin. These orthogonally optimized waveforms show good resolution in spikier clutter with a generalized likelihood ratio test (GLRT) detector. The simulation results show that this waveform provides better detection performance in spikier clutter.
An Energy-Efficient Lut-Log-Bcjr Architecture Using Constant Log Bcjr AlgorithmIJERA Editor
Error-correcting codes are used to recover data from a signal corrupted by noise and interference. Among the many error-correcting codes, turbo codes are considered among the best because their performance is very close to the Shannon theoretical limit. The MAP algorithm is commonly used in the turbo decoder. Among the different versions of the MAP algorithm, the constant-log BCJR algorithm has lower complexity and good error performance. The constant-log BCJR algorithm can be easily designed using a look-up table, which reduces memory consumption. The proposed constant-log BCJR decoder is designed to decode two blocks of data at a time, which increases the throughput. The complexity of the decoder is further reduced by the use of add-compare-select (ACS) units and registers. The proposed decoder is simulated using Xilinx ISE and synthesized for a Spartan-3 FPGA; the results show that the constant-log BCJR decoder uses less memory and power than the LUT-log BCJR decoder.
EE402B Radio Systems and Personal Communication Networks notesHaris Hassan
Programmes in which available: Master of Engineering - Electrical and Electronic Engineering; Master of Engineering - Electronic Engineering and Computer Science; Master of Science - Communication Systems and Wireless Networking; Master of Science - Smart Telecom and Sensing Networks; Master of Science - Photonic Integrated Circuits, Sensors and Networks.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
On The Fundamental Aspects of DemodulationCSCJournals
When the instantaneous amplitude, phase, and frequency of a carrier wave are modulated with the information signal for transmission, the receiver is known to work on the basis of the received signal and knowledge of the carrier frequency. The question is: if the receiver does not have a priori information about the carrier frequency, is it possible to carry out the demodulation process? This tutorial lecture answers this question by looking into the fundamental process by which the modulated wave is generated. It critically examines the energy separation algorithm for signal analysis and suggests a modification for distortionless demodulation of an FM signal and recovery of sub-carrier signals.
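At the core of the energy separation algorithm discussed above is the Teager-Kaiser energy operator, Psi[x](n) = x[n]^2 - x[n-1]*x[n+1]. For a pure tone A*cos(W*n + phi) it equals A^2*sin(W)^2 exactly, so the digital frequency W is recoverable when A is known. A minimal numerical check:

```python
# Teager-Kaiser energy operator on a sampled pure tone: the operator
# output is constant and equals A^2 * sin(W)^2, from which W follows.
import math

A, W, phi = 2.0, 0.3, 0.7
x = [A * math.cos(W * n + phi) for n in range(50)]

# Psi[x](n) = x[n]^2 - x[n-1]*x[n+1], valid for interior samples
psi = [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

W_est = math.asin(math.sqrt(psi[0]) / A)   # recover W, assuming A known
```

Full energy separation (e.g., the DESA family) also estimates A from the operator applied to differences of the signal; the sketch above only verifies the operator's defining identity.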
Frequency domain behavior of S-parameters piecewise-linear fitting in a digit...Piero Belforte
This paper describes PWLFIT+, an extension to the frequency domain of PWLFIT, a new paradigm in time-domain macromodeling for linear multiport systems, based on a piecewise-linear (PWL) behavioral representation of the S-parameters step response.
Area efficient parallel LFSR for cyclic redundancy check IJECEIAES
Cyclic redundancy check (CRC), a code for error detection, finds many applications in digital communication, data storage, control systems, and data compression. CRC encoding is carried out using a linear feedback shift register (LFSR). A serial implementation of CRC requires a number of clock cycles equal to the message length plus the generator polynomial degree, whereas a parallel implementation requires only one clock cycle if the whole message is applied at a time. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique named state-space transformation. This paper presents a searching algorithm, explained in detail, and a new technique to find the number of XOR gates required for different CRC algorithms. The comparison between the proposed and previous architectures shows that the number of XOR gates is reduced, which improves the hardware efficiency. The searching algorithm and all matrix computations have been performed using MATLAB simulations.
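The serial-versus-parallel cycle count mentioned above follows directly from the bit-serial LFSR structure: one shift per clock, hence message length plus generator degree cycles in total. A minimal bit-serial sketch using CRC-8 (polynomial x^8 + x^2 + x + 1, i.e. 0x07; this particular generator is an illustrative choice):

```python
# Bit-serial CRC through an LFSR: one shift per clock, so an m-bit
# message with a degree-r generator needs m + r cycles, as the
# abstract notes.  Generator: CRC-8, x^8 + x^2 + x + 1 (0x07).

def crc_serial(bits, poly=0x07, degree=8):
    reg = 0
    for b in bits + [0] * degree:          # r appended zeros = r extra cycles
        msb = (reg >> (degree - 1)) & 1    # feedback tap (highest bit)
        reg = ((reg << 1) | b) & ((1 << degree) - 1)
        if msb:
            reg ^= poly                    # reduce modulo the generator
    return reg
```

A quick sanity check of the structure: a single 1 bit yields x^8 mod g(x) = x^2 + x + 1 = 0x07, and appending the computed CRC to a message makes the remainder zero, which is exactly the receiver-side error check.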
Multi carrier equalization by restoration of redundanc y (merry) for adaptive...IJNSA Journal
This paper proposes a new blind adaptive channel-shortening approach for multi-carrier systems. The performance of the discrete Fourier transform DMT (DFT-DMT) system is compared with that of the proposed DST-DMT system over the standard carrier serving area (CSA) loop 1. Simulations of the DST-DMT system demonstrate enhanced bit rates and lower complexity.
Low Complexity Multi-User MIMO Detection for Uplink SCMA System Using Expecta...TELKOMNIKA JOURNAL
Sparse code multiple access (SCMA), which combines the advantages of low density signature (LDS) and code-division multiple access (CDMA), is regarded as one of the promising modulation technique candidates for the next generation of wireless systems. Conventionally, the message passing algorithm (MPA) is used for data detection at the receiver side. However, MPA-SCMA cannot be implemented in next-generation wireless systems because of its unacceptable complexity cost: the complexity of MPA-SCMA grows exponentially with the number of antennas. Considering the use of high-dimensional systems in the next generation of wireless systems, such as massive multi-user MIMO systems, the conventional MPA-SCMA is prohibitive. In this paper, we propose a low-complexity detection algorithm for SCMA, the expectation propagation algorithm (EPA). EPA-SCMA solves the complexity problem of MPA-SCMA and enables the implementation of SCMA in massive MU-MIMO systems and, consequently, in next-generation wireless systems. We further show that the EPA can achieve optimal detection performance as the numbers of transmit and receive antennas grow. We also demonstrate that a rotation design in the SCMA codebook is unnecessary, which differs from the general assumption.
Ericsson Sustainability and Corporate Responsibility Report 2012Ericsson
http://www.ericsson.com/thecompany/sustainability_corporateresponsibility
"Many of the world's major challenges - such as urbanization, climate change, and poverty - could benefit from solutions offered by mobile broadband. Sustainability is a competitive differentiator and is high on our agenda, as well as that of our customers. Throughout the value chain and wherever we do business, we are creating value for ourselves and for our stakeholders by striving to be sustainable and responsible in all that we do." (Hans Vestberg, President and CEO)
Audio coding of harmonic signals is a challenging task for conventional MDCT coding schemes. In this paper we introduce a novel algorithm for improved transform coding of harmonic audio. The algorithm does not deploy the conventional scheme of splitting the input signal into a spectrum envelope and a residual, but models the spectral peak regions. Test results indicate that the presented algorithm outperforms the conventional coding concept.
There will be more change in the next 10 years than there has been in the previous 100. This paper describes these expected foundational shifts and explains how we can manage them to our advantage.
In their latest discussion presentation "Winning the Game", Geoff Hollingworth, Ericsson North America Evangelist, and Jason Hoffman, founder and CTO of Joyent, discuss what these changes will mean for devices, the cloud, and the network.
This interactive presentation is supported by 8 videos. It describes the foundational changes that will occur across industries and networks, and attempts to explain how we can manage them to our advantage. The target audience of this paper is those who are involved in planning, building and profitably operating digital networks.
Design and Performance Analysis of Convolutional Encoder and Viterbi Decoder ...IJERA Editor
In digital communication forward error correction methods have a great practical importance when channel is
noisy. Convolutional error correction code can correct both type of errors random and burst. Convolution
encoding has been used in digital communication systems including deep space communication and wireless
communication. The error correction capability of convolutional code depends on code rate and constraint
length. The low code rate and high constraint length has more error correction capabilities but that also
introduce large overhead. This paper introduces convolutional encoders for various constraint lengths. By
increasing the constraint length the error correction capability can be increased. The performance and error
correction also depends on the selection of generator polynomial. This paper also introduces a good generator
polynomial which has high performance and error correction capabilities.
Mimo radar detection in compound gaussian clutter using orthogonal discrete f...ijma
This paper proposes orthogonal Discrete Frequency Coding Space Time Waveforms (DFCSTW) for
Multiple Input and Multiple Output (MIMO) radar detection in compound Gaussian clutter. The proposed
orthogonal waveforms are designed considering the position and angle of the transmitting antenna when
viewed from origin. These orthogonally optimized show good resolution in spikier clutter with Generalized
Likelihood Ratio Test (GLRT) detector. The simulation results show that this waveform provides better
detection performance in spikier Clutter.
An Energy-Efficient Lut-Log-Bcjr Architecture Using Constant Log Bcjr AlgorithmIJERA Editor
Error correcting codes are used to correct the data from the corrupted signal due to noise and interference. There
are many error correcting codes. Among them turbo codes is considered to be the best because it is very close to
the Shannon theoretical limit. The MAP algorithm is commonly used in the turbo decoder. Among the different
versions of the MAP algorithm Constant log BCJR algorithm have less complexity and good error performance.
The Constant log BCJR algorithm can be easily designed using look up table which reduces the memory
consumption. The proposed Constant log BCJR decoder is designed to decode two blocks of data at a time, this
increases the throughput. The complexity of the decoder is further reduced by the use of the add compare select
(ACS) units and registers. The proposed decoder is simulated using Xilinx ISE and synthesized using Sparten3
FPGA and found out that Constant log BCJR decoder utilized less amount of memory and power than the LUT
log BCJR decoder.
EE402B Radio Systems and Personal Communication Networks notesHaris Hassan
Programmes in which available:
Masters of Engineering - Electrical and Electronic
Engineering. Masters of Engineering - Electronic
Engineering and Computer Science. Master of Science -
Communication Systems and Wireless Networking.
Master of Science - Smart Telecom and Sensing
Networks. Master of Science - Photonic Integrated
Circuits, Sensors and Networks.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
On The Fundamental Aspects of DemodulationCSCJournals
When the instantaneous amplitude, phase and frequency of a carrier wave are modulated with the information signal for transmission, it is known that the receiver works on the basis of the received signal and a knowledge of the carrier frequency. The question is: If the receiver does not have the a priori information about the carrier frequency, is it possible to carry out the demodulation process? This tutorial lecture answers this question by looking into the very fundamental process by which the modulated wave is generated. It critically looks into the energy separation algorithm for signal analysis and suggests modification for distortionless demodulation of an FM signal, and recovery of sub-carrier signals
Frequency domain behavior of S-parameters piecewise-linear fitting in a digit...Piero Belforte
This paper describes PWLFIT+, an extension to the frequency domain ofPWLFIT, a new paradigm in time-domain macromodel ing for linear multiportsystems, based on a piecewise-linea r (PWL) behavioral representation of the S-parameters step response.
Area efficient parallel LFSR for cyclic redundancy check IJECEIAES
Cyclic Redundancy Check (CRC) codes for error detection find many applications in digital communication, data storage, control systems and data compression. CRC encoding is carried out using a Linear Feedback Shift Register (LFSR). A serial CRC implementation requires a number of clock cycles equal to the message length plus the generator polynomial degree, whereas a parallel implementation requires only one clock cycle if the whole message is applied at once. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique named state-space transformation. This paper presents a searching algorithm and a new technique to find the number of XOR gates required for different CRC algorithms. A comparison between the proposed and previous architectures shows that the number of XOR gates is reduced, which improves hardware efficiency. The searching algorithm and all matrix computations have been performed using MATLAB simulations.
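The serial/parallel trade-off described above follows directly from how a CRC is computed. As a reference sketch (with an illustrative 3-bit generator polynomial, not one from the paper), the bit-serial behavior of an LFSR — one XOR-and-shift per clock, for message length plus polynomial degree cycles — is exactly polynomial long division over GF(2):

```python
def crc_remainder(message_bits, poly_bits):
    """CRC remainder by polynomial long division over GF(2).

    A serial LFSR performs this computation one bit per clock cycle:
    message length + polynomial degree cycles in total.
    poly_bits lists the generator coefficients MSB first,
    e.g. [1, 0, 1, 1] for x^3 + x + 1.
    """
    degree = len(poly_bits) - 1
    padded = list(message_bits) + [0] * degree   # append `degree` zero bits
    for i in range(len(message_bits)):
        if padded[i]:                            # leading bit set: XOR in the divisor
            for j, p in enumerate(poly_bits):
                padded[i + j] ^= p
    return padded[-degree:]                      # remainder = CRC check bits
```

A parallel implementation unrolls the loop over message bits into a single combinational XOR network, which is where the XOR-gate counting discussed in the abstract comes in.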
Multi carrier equalization by restoration of redundancy (MERRY) for adaptive...IJNSA Journal
This paper proposes a new blind adaptive channel shortening approach for multi-carrier systems. The performance of the discrete Fourier transform DMT (DFT-DMT) system is compared with that of the proposed DST-DMT system over the standard carrier serving area (CSA) loop 1. Simulations of the DST-DMT system demonstrate enhanced bit rates at lower complexity.
Low Complexity Multi-User MIMO Detection for Uplink SCMA System Using Expecta...TELKOMNIKA JOURNAL
Sparse code multiple access (SCMA), which combines the advantages of low density signature (LDS) and code-division multiple access (CDMA), is regarded as one of the promising modulation candidates for the next generation of wireless systems. Conventionally, the message passing algorithm (MPA) is used for data detection at the receiver side. However, MPA-SCMA cannot be implemented in next-generation wireless systems because of its unacceptable complexity cost: the complexity of MPA-SCMA grows exponentially with the number of antennas. Considering the high-dimensional systems expected in the next generation of wireless systems, such as massive multi-user MIMO, the conventional MPA-SCMA is prohibitive. In this paper, we propose a low-complexity detection algorithm for SCMA, the expectation propagation algorithm (EPA). EPA-SCMA solves the complexity problem of MPA-SCMA and enables the implementation of SCMA in massive MU-MIMO systems and, by extension, in next-generation wireless systems. We further show that the EPA can achieve optimal detection performance as the numbers of transmit and receive antennas grow. We also demonstrate that a rotation design in the SCMA codebook is unnecessary, which differs from the common assumption.
Ericsson Sustainability and Corporate Responsibility Report 2012Ericsson
http://www.ericsson.com/thecompany/sustainability_corporateresponsibility
"Many of the world's major challenges - such as urbanization, climate change, and poverty - could benefit from solutions offered by mobile broadband. Sustainability is a competitive differentiator and is high on our agenda, as well as that of our customers. Throughout the value chain and wherever we do business, we are creating value for ourselves and for our stakeholders by striving to be sustainable and responsible in all that we do."Hans Vestberg, President and CEO.
Audio coding of harmonic signals is a challenging task for conventional MDCT coding schemes. In this paper we introduce a novel algorithm for improved transform coding of harmonic audio. The algorithm does not deploy the conventional scheme of splitting the input signal into a spectrum envelope and a residual, but models the spectral peak regions. Test results indicate that the presented algorithm outperforms the conventional coding concept.
There will be more change in the next 10 years than there has been in the previous 100. This paper describes these expected foundational shifts and explains how we can manage them to our advantage.
In their latest discussion presentation "Winning the Game", Geoff Hollingworth, Ericsson North America Evangelist, and Jason Hoffman, founder and CTO of Joyent, discuss what these changes will mean for devices, the cloud and the network.
This interactive presentation is supported by 8 videos. It describes the foundational changes that will occur across industries and networks, and attempts to explain how we can manage them to our advantage. The target audience of this paper is those who are involved in planning, building and profitably operating digital networks.
Ericsson ConsumerLab: Personal Information EconomyEricsson
In today’s society, companies and organizations have unprecedented possibilities to collect and use people’s personal information. Using this information in the right way enables new revenue streams and increased profit.
But do consumers understand and perceive the value of their personal information? What sensitivities are involved in the increased use of personal information by enterprises, governments and consumers? The purpose of the Personal Information Economy report by ConsumerLab has been to describe consumers’ understanding, needs, behaviors and attitudes with respect to personal information as an asset.
For more research from the Ericsson ConsumerLab visit: http://www.ericsson.com/thinkingahead/consumerlab
Ericsson ConsumerLab: Connected lifestyles’ expectations identifiedEricsson
http://www.ericsson.com/thinkingahead/consumerlab
This Ericsson ConsumerLab report identifies the key success criteria for the services of tomorrow to satisfy the needs of a connected society.
Ericsson Review: HSPA evolution for future mobile-broadband needsEricsson
As HSPA evolution continues to address the needs of changing user behavior, new techniques develop and become standardized. This article covers some of the more interesting techniques and concepts under study that will provide network operators with the flexibility, capacity and coverage needed to carry voice and data into the future, ensuring HSPA evolution and good user experience.
The 20th century elevated consumption to be central and critical to the economic system, occurring as a consequence of the way production and labor were organized. But by the dawn of the new century, developed societies were grappling with the challenge of creating a more humane form of consumption, driven by three major forces: lower labor demands in technological industry, erosion of the social contract, and the environmental costs of reckless consumption on increasingly scarce natural resources.
http://www.ericsson.com/thinkingahead/networked_society/commerce_reports
Internet of Things from a Networked Society perspectiveEricsson
Like the pre-industrial and industrial worlds that preceded it, the Networked Society represents a fundamental paradigm shift for people, business and society. One in which new resources are continuously discovered, new forms of value are unleashed, and the most basic logics of life and business are transformed as a result. (Ericsson IoT Conference December 2014)
Computers and the internet are used to mimic the physical world, but firsthand experiences increasingly happen via screens. We now want the physical world to mimic the internet: the physical world should be interactive, like the internet. (Ericsson IoT Conference December 2014)
Ericsson Review: The benefits of self-organizing backhaul networksEricsson
The concept of self-organizing backhaul networks is not yet as widespread as the concept of SON in the context of radio-access networks. There are, however, ways in which backhaul networks can benefit from SON technology to delay investment in new architecture.
Ericsson Technology Review, issue #1, 2016Ericsson
Every morning, I get out of bed and go to work because I believe technology makes a difference. I believe that in the midst of global growth, numerous humanitarian crises, the increasing need for better resource management, and an evolving threat landscape, a new world is emerging. And I believe technology is playing a key role in making that world a better, safer, and healthier place for more people to enjoy. It feels good to be part of that.
Fundamentally, I believe the breakdown of traditional industry boundaries and increased cross-industry collaboration have enabled us to maximize the benefits of technology. Today, Ericsson works with partners in many different industries that all rely on connectivity embedded into their solutions, services, and products. Our early collaborations, which were with utilities and the automotive industry, have led to innovations like the Connected Vehicle Cloud and Smart Metering as a Service.
I am delighted that Harald Ludanek, Head of R&D at Scania (a leading manufacturer of heavy trucks, buses, coaches, and industrial and marine engines) agreed to contribute to this issue. His article on the significance of ICT – how digitalization and mobility will impact the automotive industry and bring about the intelligent transportation system (ITS) – illustrates the importance of new business relationships, ensuring that different sectors create innovative solutions together, and maximize the value they bring to people and society.
Technology is making it easier for people to protect their homes, families, and belongings. The standardization of antitheft systems in automobiles, for example, has led to a decline in car theft in most parts of the world. However, while technology offers improved security, somehow criminal countermeasures manage to keep up. In an article about end-to-end cryptography, a number of Ericsson experts highlight how car theft is no longer carried out with a slim jim and a screwdriver, but rather with highly sophisticated decryption algorithms, smartphones, and illegal access to software keys.
The protection of data – and the people who own it – as it travels across the network has always been a cornerstone of the telecoms industry. But in today’s world, no single organization can maintain end-to-end control over information as it is carried from source to destination, and so upholding the right to privacy is becoming an increasingly complex issue. And with quantum computing posing a threat to our current security systems, our experts point out that this will render certain existing methods of protection useless. Not only do protocols need a shake up, so does software — so it can work in lightweight mode for constrained or hardware-limited devices.
How to implement the Internet of Things successfully, creating the best experience and ensuring successful distribution.
(Ericsson IoT Conference December 2014)
Ericsson ConsumerLab: The one-click ideal - PresentationEricsson
In a digital world, telecom operators face challenging customer expectations. Learn more about the ideal one-click customer journey and Ericsson ConsumerLab insights.
Performance bounds for unequally punctured terminated convolutional codeseSAT Journals
Abstract
The main objective of this paper is to discuss the performance of punctured convolutional codes. Punctured convolutional codes are generally designed for protection purposes in CDMA. In this paper, a systematic method to construct a performance upper bound for a punctured convolutional code with an arbitrary puncturing pattern is proposed. Perfect channel estimation is assumed at the decoder, and a weight enumerator is defined to represent the relation between input information bits and the Hamming weights of their corresponding outputs. Finally, the simulation results agree very well with the predictions of the upper bounds.
Keywords: Convolutional code, upper bound, weight enumerator, rate matching.
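Puncturing itself is a simple operation: coded bits are deleted according to a cyclically applied pattern, raising the rate of the mother code. A minimal sketch (the pattern shown is illustrative, not taken from the paper):

```python
from itertools import cycle

def puncture(coded_bits, pattern):
    """Keep coded bits where the cyclically applied pattern is 1.

    With a rate-1/2 mother code, the (illustrative) pattern
    [1, 1, 0, 1, 1, 0] keeps 4 of every 6 coded bits, i.e.
    3 information bits per 4 transmitted bits: rate 3/4.
    """
    return [b for b, keep in zip(coded_bits, cycle(pattern)) if keep]

def depuncture(punctured_bits, pattern, n):
    """Re-insert erasures (None) at the n punctured positions so a
    standard decoder for the mother code can be applied."""
    it = iter(punctured_bits)
    return [next(it) if keep else None
            for _, keep in zip(range(n), cycle(pattern))]
```

The upper-bound construction in the paper accounts for exactly this pattern-dependent deletion when enumerating codeword weights.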
Analysis of LDPC Codes under Wi-Max IEEE 802.16eIJERD Editor
LDPC codes have been shown to be by far the best coding schemes for transmitting messages over noisy channels. The main aim of this paper is to study the behaviour of LDPC codes under IEEE 802.16e guidelines. Rate-1/2 LDPC codes have been implemented on an AWGN channel, and the results show that they can be used on such channels with low BER. The BER can be further reduced by increasing the block length.
AREA OPTIMIZED FPGA IMPLEMENTATION FOR GENERATION OF RADAR PULSE COM-PRESSION...VLSICS Design
Pulse compression techniques are widely used in radar and communications, and their implementation requires optimized, dedicated hardware. Real-time implementation places several constraints, such as area occupied and power consumption, and a good design optimizes these constraints. This paper concentrates on the design of an optimized model that reduces them. In the proposed architecture, a single chip generates pulse compression sequences such as BPSK, QPSK, 6-PSK and other polyphase codes. The VLSI architecture is implemented on a Field Programmable Gate Array (FPGA), as it provides the flexibility of reconfigurability and reprogrammability. The proposed architecture was found to generate the pulse compression sequences efficiently while improving parameters such as area, power consumption and delay compared to previous methods.
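For context, pulse compression amounts to matched filtering of the received samples with the transmitted code. The sketch below uses the standard length-13 Barker BPSK code as an example (the paper's generated sequences and hardware mapping are not reproduced here):

```python
import numpy as np

# Standard length-13 Barker code as BPSK symbols (+1/-1).
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(received, code):
    """Matched filter: correlate the received samples with the
    time-reversed transmit code (full linear convolution)."""
    return np.convolve(received, code[::-1])
```

Compressing the transmitted code against itself yields a mainlobe of 13 with all sidelobes at most 1 in magnitude, which is the compression gain a hardware sequence generator is built to exploit.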
Reduced Energy Min-Max Decoding Algorithm for Ldpc Code with Adder Correction...ijceronline
Architectures for finding the first two maximum or minimum values are of paramount importance in several applications, including iterative decoders. We propose an adder-based correction for min-sum LDPC decoding. The min-sum processing step produces only two different output magnitude values irrespective of the number of incoming bit-to-check messages. The new micro-architectures employ the minimum number of comparators by exploiting the concept of survivors in the search, resulting in fewer comparisons and consequently reduced energy use. Multipliers are complex units and play an important role in determining the overall area, speed and power consumption of digital designs; by optimizing the multiplier we can minimize latency, complexity and power consumption.
Spatialization Parameter Estimation in MDCT Domain for Stereo AudioCSCJournals
Parametric coding techniques are used in many audio coding standards to represent multi-channel audio at low bit rates. An MDCT-domain parametric stereo coding algorithm has been reported in the literature which represents the stereo channels as a linear combination of a 'sum' channel derived from the stereo channels and a reverberated channel generated from the 'sum' channel. This model is inefficient in capturing the stereo image, since only four parameters per sub-band are used as spatialization parameters. In this work we improve this MDCT-domain parametric coder with an augmented parameter extraction scheme using an additional reverberated channel. We further modify the scheme by using orthogonalized de-correlated channels for the analysis and synthesis of parametric stereo. A synthesis scheme with a perceptually scaled parameter set is also introduced. Finally, we present a subjective evaluation of the different parametric stereo schemes using the MUSHRA test; the increased perceptual audio quality of the synthesized signals is evident from these test results.
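To illustrate the kind of MDCT-domain parameter extraction involved (a generic sketch, not the paper's augmented scheme, which additionally uses reverberated channels): per sub-band, each stereo channel can be predicted from the 'sum' channel by a least-squares gain.

```python
import numpy as np

def stereo_params(L, R, bands):
    """Per-sub-band prediction gains of L and R from the 'sum' channel,
    computed on MDCT coefficients. Illustrative only.

    bands: list of (lo, hi) MDCT bin ranges.
    """
    S = 0.5 * (L + R)                      # 'sum' (mid) channel
    params = []
    for lo, hi in bands:
        s = S[lo:hi]
        denom = float(s @ s) + 1e-12       # guard against all-zero bands
        params.append((float(L[lo:hi] @ s) / denom,   # gain predicting L from S
                       float(R[lo:hi] @ s) / denom))  # gain predicting R from S
    return S, params
```

The decoder then reconstructs each channel band as its gain times the decoded 'sum' channel, plus the de-correlated (reverberated) components carrying the residual stereo image.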
An efficient reconfigurable code rate cooperative low-density parity check co...IJECEIAES
Digital communication is now pervasive. Authentication overheads and impairments such as signal attenuation and code-rate fluctuations during digital communication can be minimized and optimized by adopting parallel encoder and decoder operations. To overcome these drawbacks, a reconfigurable code-rate cooperative (RCRC) low-density parity check (LDPC) method is proposed. The proposed RCRC-LDPC scheme can operate at gigabit-per-second data rates and effectively performs linear encoding in dual-diagonal form, widens the range of code rates, and provides an optimal degree distribution of the LDPC mother code. The proposed method optimizes the transmission rate and can operate at a code rate of 0.98, the highest upper-bounded code rate compared to existing methods. The implementation has been carried out in MATLAB and, per the simulation results, the proposed method reaches a throughput efficiency greater than 8.2 (1.9) gigabits per second with a clock frequency of 160 MHz.
Performances Concatenated LDPC based STBC-OFDM System and MRC Receivers IJECEIAES
This paper presents the bit error rate performance of low density parity check (LDPC) coding concatenated with convolutional channel coding in an orthogonal frequency-division multiplexing (OFDM) system using space-time block coding (STBC). The OFDM wireless communication system incorporates a 3/4-rate convolutional encoder under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels. At the receiving section of the simulated system, maximum ratio combining (MRC) channel equalization has been implemented to extract the transmitted symbols without enhancing the noise power.
Decoding of the extended Golay code by the simplified successive-cancellation...TELKOMNIKA JOURNAL
This paper describes an adaptation of a polar code decoding technique to the extended Golay code. Based on the bridge provided by a permutation matrix between the codewords of these two classes of codes, the Golay code can be decoded by any polar code technique. In contrast to the successive-cancellation list technique, which estimates the bits serially, we propose in this work an adaptation of the simplified successive-cancellation list technique to polar codes equivalent to the Golay code. The simulations achieve the performance of maximum likelihood decoding with the low decoding complexity of polar codes, compared to one of the best-known universal decoders of linear codes in the literature.
Design and Implementation of Area Optimized, Low Complexity CMOS 32nm Technol...IJERA Editor
A numerically controlled oscillator (NCO) is a digital signal generator and a very important block in many digital communication systems, such as software defined radios, digital radios and modems, and down/up converters for cellular and PCS base stations. An NCO creates a synchronous, discrete-time, discrete-valued representation of a sinusoidal waveform. This paper presents the design and implementation of a CMOS look-up-table-based numerically controlled oscillator which improves performance and reduces the power and area requirements. The design is implemented in 32 nm CMOS technology with the Microwind 3.8 software tool. In addition, it can be combined with analog circuitry, enabling the integration of a complete system on chip. The NCO design offers reasonable speed, resolution and linearity with low power and low area. Pre-layout simulation has been performed using the 32 nm CMOS process technology.
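The core of a LUT-based NCO is a phase accumulator whose top bits index a sine table. The sizes below (24-bit accumulator, 256-entry table) are illustrative choices, not the paper's:

```python
import math

class NCO:
    """Phase-accumulator / look-up-table NCO sketch.

    Output frequency = freq_word * f_clk / 2**acc_bits, so frequency
    resolution is set by the accumulator width, amplitude resolution
    by the table size.
    """
    def __init__(self, acc_bits=24, lut_bits=8):
        self.acc_bits = acc_bits
        self.lut_bits = lut_bits
        self.phase = 0
        # One full sine period stored in a 2**lut_bits entry table.
        self.lut = [math.sin(2 * math.pi * i / 2**lut_bits)
                    for i in range(2**lut_bits)]

    def step(self, freq_word):
        # Truncate the accumulator: the top lut_bits bits index the LUT.
        sample = self.lut[self.phase >> (self.acc_bits - self.lut_bits)]
        self.phase = (self.phase + freq_word) & (2**self.acc_bits - 1)
        return sample
```

In hardware the table is typically compressed (e.g. storing only a quarter wave) to save the area this paper targets; the full-table version above keeps the addressing logic obvious.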
GF(q) LDPC encoder and decoder FPGA implementation using group shuffled beli...IJECEIAES
This paper presents field programmable gate array (FPGA) implementations of a GF(q) low-density parity-check (LDPC) encoder and decoder using the group shuffled belief propagation (GSBP) algorithm. For small block lengths, non-binary LDPC codes have been shown to have better error-correction performance than binary codes. The performance of non-binary LDPC codes over GF(16) (GF(q)-LDPC codes) on the additive white Gaussian noise (AWGN) channel has been demonstrated to be close to the Shannon limit while employing a short block length (N = 600 bits). A non-binary LDPC (NB-LDPC) code construction program is also provided. Furthermore, a simplified bubble-check check-node computation is implemented using first-in first-out (FIFO) buffers. The decoder architecture was described in very high speed integrated circuit (VHSIC) hardware description language (VHDL) and simulated in ModelSim 6.5, and the synthesized output on a Cyclone II FPGA is compared with the simulation output.
BER Performance for Convalutional Code with Soft & Hard Viterbi DecodingIJMER
Viterbi decoding has a fixed decoding time and is well suited to hardware decoders. Here we propose a Viterbi algorithm with decoding rate 1/3, which dynamically improves the performance over the channel.
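For reference, a minimal hard-decision Viterbi decoder for a rate-1/3 convolutional code. The standard constraint-length-3 generators (7, 7, 5 in octal) are assumed here for illustration; the paper's exact code may differ.

```python
G = (0b111, 0b111, 0b101)   # rate-1/3, K=3 generators (7, 7, 5 octal)

def conv_encode(bits):
    """Terminated rate-1/3 convolutional encoder (two zero tail bits)."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state                      # [current, prev, prev-prev]
        out.extend(bin(reg & g).count("1") & 1 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi over the 4-state trellis (Hamming metric)."""
    INF = float("inf")
    metrics, paths = [0, INF, INF, INF], [[] for _ in range(4)]
    for t in range(0, len(received), 3):
        chunk = received[t:t + 3]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + sum(e != r for e, r in zip(expect, chunk))
                ns = reg >> 1                       # next state
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[0][:-2]                # terminated: end in state 0, drop tail bits
```

The fixed decoding time mentioned above is visible here: the work per information bit is a constant sweep over all states and branches, independent of the channel noise.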
Investigative Compression Of Lossy Images By Enactment Of Lattice Vector Quan...IJERA Editor
In the digital era we live in, efficient representation of data generated by a discrete source and its reliable transmission are unquestionable needs. In this work we focus on source coding, taking an image as the source. Lattice Vector Quantization (LVQ) can be used for source coding as well as for channel coding. LVQ is implemented both with a generator matrix (GM) and with codebooks. For the codebook implementation, two codebooks are constructed: one with the 256 lattice points closest to (0,0,0,0) and another with the 256 lattice points closest to (1,0,0,0). The energy of both codes is calculated; comparing them, we find that the code centered at a non-lattice point has lower energy.
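The attraction of LVQ is that quantization needs no codebook search: the nearest lattice point is found by a rounding rule. As an illustration, assuming a 4-D lattice such as D4 (integer vectors with even coordinate sum, for which (1,0,0,0) is indeed a non-lattice point as above), the classic Conway–Sloane rule is:

```python
import numpy as np

def nearest_D4(x):
    """Nearest point of the D4 lattice (integer vectors with even
    coordinate sum) to x: round every coordinate, and if the sum
    comes out odd, re-round the coordinate that was farthest from
    an integer in the other direction."""
    f = np.rint(x)
    if int(f.sum()) % 2 != 0:
        k = int(np.argmax(np.abs(x - f)))
        f[k] += 1.0 if x[k] > f[k] else -1.0
    return f
```

Building a codebook then reduces to enumerating the 256 lattice points nearest a chosen center and comparing code energies, as the abstract describes.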
A novel and efficient mixed-signal compressed sensing for wide-band cognitive...Polytechnique Montreal
In cognitive radio (CR) networks, unlicensed (cognitive) users can exploit the licensed frequency bands by using spectrum sensing techniques to identify spectrum holes. This paper proposes a distributed compressive spectrum sensing scheme in which a modulated wideband converter applies compressed sensing (CS) directly to analog signals at a sub-Nyquist rate, and a central fusion node receives signals from multiple CRs and exploits the multiple-measurement-vector (MMV) subspace pursuit (M-SP) algorithm to jointly reconstruct the spectral support of the wideband signal. This support is then used to detect whether the licensed bands are occupied. Extensive simulation results show the advantages of the proposed scheme, and we also compare the performance of M-SP with the M-orthogonal matching pursuit (M-OMP) algorithm.
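The greedy reconstruction at the fusion center can be illustrated, in the single-measurement-vector case, by basic orthogonal matching pursuit; M-SP and M-OMP extend this idea to jointly sparse columns across multiple CRs. The tiny dictionary in the usage example is purely illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    with y ~= A x (single-measurement-vector sketch)."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit of y on the enlarged support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In spectrum sensing, the recovered support (the nonzero positions of x) directly indicates which sub-bands are occupied.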
Similar to MDCT audio coding with pulse vector quantizers (20)
Ericsson Technology Review: Versatile Video Coding explained – the future of ...Ericsson
Continuous innovation in 5G networks is creating new opportunities for video-enabled services for both consumers and industries, particularly in areas such as the Internet of Things and the automotive sector. These new services are expected to rely on continued video evolution toward 8K resolutions and beyond, and on new strict requirements such as low end-to-end latency for video delivery.
The latest Ericsson Technology Review article explores recent developments in video compression technology and introduces Versatile Video Coding (VVC) – a significant improvement on existing video codecs that we think deserves to be widely deployed in the market. VVC has the potential both to enhance the user experience for existing video services and offer an appropriate performance level for new media services over 5G networks.
BRIDGING THE GAP BETWEEN PHYSICAL AND DIGITAL REALITIES
The key role that connectivity plays in our personal and professional lives has never been more obvious than it is today. Thankfully, despite the sudden, dramatic changes in our behavior earlier this year, networks all around the world have proven to be highly resilient. At Ericsson, we’re committed to ensuring that the network platform continues to improve its ability to meet the full range of societal needs as well as supporting enterprises to stay competitive in the long term. We know that greater agility and speed will be essential.
This issue of our magazine includes several articles that explain Ericsson’s approach to future network development, including my annual technology trends article. The seven trends on this year’s list serve as a critical cornerstone in the development of a common Ericsson vision of what future networks will provide, and what sort of technology evolution will be required to get there.
ERIK EKUDDEN
Senior Vice President, Chief Technology Officer and Head of Group Function Technology
Ericsson Technology Review: Integrated access and backhaul – a new type of wi...Ericsson
Today millimeter wave (mmWave) spectrum is valued mainly because it can be used to achieve high speeds and capacities when combined with spectrum assets below 6GHz. But it can provide other benefits as well. For example, mmWave spectrum makes it possible to use a promising new wireless backhaul solution for 5G New Radio – integrated access and backhaul (IAB) – to densify networks with multi-band radio sites at street level.
This Ericsson Technology Review article explains the IAB concept at a high level, presenting its architecture and key characteristics, as well as examining its advantages and disadvantages compared with other backhaul technologies. It concludes with a presentation of the promising results of several simulations that tested IAB as a backhaul option for street sites in both urban and suburban areas.
Ericsson Technology Review: Critical IoT connectivity: Ideal for time-critica...Ericsson
Critical Internet of Things (IoT) connectivity is an emerging concept in IoT development that enables more efficient and innovative services across a wide range of industries by reliably meeting time-critical communication needs. Mobile network operators (MNOs) are in the perfect position to enable these types of time-critical services due to their ability to leverage advanced 5G networks in a systematic and cost-effective way.
This Ericsson Technology Review article explores the benefits of Critical IoT connectivity in areas such as industrial control, mobility automation, remote control and real-time media. It also provides an overview of key network technologies and architectures. It concludes with several case studies based on two deployment scenarios – wide area and local area – that illustrate how well suited 5G spectrum assets are for Critical IoT use cases.
5G New Radio has already evolved in important ways since the 3GPP standardized Release 15 in late 2018. The significant enhancements in Releases 16 and 17 are certain to play a critical role in expanding both the availability and the applicability of 5G NR in both industry and public services in the near future.
This Ericsson Technology Review article summarizes the most notable new developments in Releases 16 and 17, grouped into two categories: enhancements to existing features and features that address new verticals and deployment scenarios. This analysis and our insights about the future beyond Release 17 are an important component of our work to help mobile network operators and other stakeholders better understand and plan for the many new 5G NR opportunities that are on the horizon.
Ericsson Technology Review: The future of cloud computing: Highly distributed...Ericsson
The growing interest in cloud computing scenarios that incorporate both distributed computing capabilities and heterogeneous hardware presents a significant opportunity for network operators. With a vast distributed system (the telco network) already in place, the telecom industry has a significant advantage in the transition toward distributed cloud computing.
This Ericsson Technology Review article explores the future of cloud computing from the perspective of network operators, examining how they can best manage the complexity of future cloud deployments and overcome the technical challenges. Redefining cloud to expose and optimize the use of heterogeneous resources is not straightforward, but we are confident that our use cases and proof points validate our approach and will gain traction both in the telecommunications community and beyond.
Ericsson Technology Review: Optimizing UICC modules for IoT applicationsEricsson
Commonly referred to as SIM cards, the universal integrated circuit cards (UICCs) used in all cellular devices today are in fact complex and powerful minicomputers capable of much more than most Internet of Things (IoT) applications require. Until a simpler and less costly alternative becomes available, action must be taken to ensure that the relatively high price of UICC modules does not hamper IoT growth.
This Ericsson Technology Review article presents two mid-term approaches. The first is to make use of techniques that reduce the complexity of using UICCs in IoT applications, while the second is to use the UICCs’ excess capacity for additional value generation. Those who wish to exploit the potential of the UICCs to better support IoT applications have the opportunity to use them as cryptographic storage, to run higher-layer protocol stacks and/or as supervisory entities, for example.
Mobile data traffic volumes are expected to increase by a factor of four by 2025, and 45 percent of that traffic will be carried by 5G networks. To deliver on customer expectations in this rapidly changing environment, communication service providers must overcome challenges in three key areas: building sufficient capacity, resolving operational inefficiencies through automation and artificial intelligence, and improving service differentiation. This issue of ETR magazine provides insights about how to tackle all three.
Ericsson Technology Review: 5G BSS: Evolving BSS to fit the 5G economyEricsson
The 5G network evolution has opened up an abundance of new business opportunities for communication service providers (CSPs) in verticals such as industrial automation, security, health care and automotive. In order to successfully capitalize on them, CSPs must have business support systems (BSS) that are evolved to manage complex value chains and support new business models. Optimized information models and a high degree of automation are required to handle huge numbers of devices through open interfaces.
This Ericsson Technology Review article explains how 5G-evolved BSS can help CSPs transform themselves from traditional network developers to service enablers for 5G and the Internet of Things, and ultimately to service creators with the ability to collaborate beyond telecoms and establish lucrative digital value systems.
Ericsson Technology Review: 5G migration strategy from EPS to 5G systemEricsson
For many operators, the introduction of the 5G System (5GS) to provide wide-area services in existing Evolved Packet System (EPS) deployments is a necessary step toward creating a full-service, future-proof 5GS in the longer term. The creation of a combined 4G-5G network requires careful planning and a holistic strategy, as the introduction of 5GS has significant impacts across all network domains, including the RAN, packet core, user data and policies, and services, as well as affecting devices and backend systems.
This Ericsson Technology Review article provides an overview of all the aspects that operators need to consider when putting together a robust EPS-to-5GS migration strategy and provides guidance about how they can adapt the transition to address their particular needs per domain.
Ericsson Technology Review: Creating the next-generation edge-cloud ecosystemEricsson
The surge in data volume that will come from the massive number of devices enabled by 5G has made edge computing more important than ever before. Beyond its abilities to reduce network traffic and improve user experience, edge computing will also play a critical role in enabling use cases for ultra-reliable low-latency communication in industrial manufacturing and a variety of other sectors.
This Ericsson Technology Review article explores the topic of how to deliver distributed edge computing solutions that can host different kinds of platforms and applications and provide a high level of flexibility for application developers. Rather than building a new application ecosystem and platform, we strongly recommend reusing industrialized and proven capabilities, utilizing the momentum created with Cloud Native Computing Foundation, and ensuring backward compatibility.
The rise of the innovation platform
Society and industry are transforming at an unprecedented rate. At the same time, the network platform is emerging as an innovation platform with the potential to offer all the connectivity, processing, storage and security needed by current and future applications. In my 2019 trends article, featured in this issue of Ericsson Technology Review, I share my view of the future network platform in relation to six key technology trends.
This issue of the magazine also addresses critical topics such as trust enablement, the extension of computing resources all the way to the edge of the mobile network, the growing impact of the cloud in the telco domain, overcoming latency and battery consumption challenges, and the need for end-to-end connectivity. I hope it provides you with valuable insights about how to overcome the challenges ahead and take full advantage of new opportunities.
Ericsson Technology Review: Spotlight on the Internet of ThingsEricsson
The Internet of Things (IoT) has emerged as a fundamental cornerstone in the digitalization of both industry and society as a whole. It represents a huge opportunity not only in economic terms, but also from a global challenges perspective – making it easier for governments, non-governmental organizations and the private sector to address pressing food, energy, water and climate related issues.
5G and the IoT are closely intertwined. One of the biggest innovations within 5G is support for the IoT in all its forms, both by addressing mission criticality as well as making it possible to connect low-cost, long-battery-life sensors.
With this in mind, we decided to create a special issue of Ericsson Technology Review solely focused on IoT opportunities and challenges. I hope it provides you with valuable insights about the IoT-related opportunities available to your organization, along with ideas about how we can overcome the challenges ahead.
Ericsson Technology Review: Driving transformation in the automotive and road...Ericsson
A variety of automotive and transport services that require cellular connectivity are already in commercial operation today, and many more are yet to come. Among other things, these services will improve road safety and traffic efficiency, saving lives and helping to reduce the emissions that contribute to climate change. At Ericsson, we believe that the best way to address the growing connectivity needs of this industry sector is through a common network solution, as opposed to taking a single-segment silo approach.
The latest Ericsson Technology Review article explains how the ongoing rollout of 5G provides a cost-efficient and feature-rich foundation for a horizontal multiservice network that can meet the connectivity needs of the automotive and transport ecosystem. It also outlines the key challenges and presents potential solutions.
This presentation explains the importance of SD-WAN technology as part of the Enterprise digital transformation strategy. It goes over the first wave of SD-WAN in a single vendor deployment, with Do-it-yourself (DIY) as the preferred model. Then continues with the importance of orchestration in the second wave of SD-WAN deployments in a multi-vendor ecosystem, turning to SD-WAN Managed Services as the preferred model. It ends up with some examples of use cases and the Verizon customer case. More information on Ericsson Dynamic orchestration - http://m.eric.sn/6rsZ30psKLu
Ericsson Technology Review: 5G-TSN integration meets networking requirements ...Ericsson
Time-Sensitive Networking (TSN) is becoming the standard Ethernet-based technology for converged networks of Industry 4.0. Understanding the importance and relevance of TSN features, as well as the capabilities that allow 5G to achieve wireless deterministic and time-sensitive communication, is essential to industrial automation in the future.
The latest Ericsson Technology Review article explains how TSN is an enabler of Industry 4.0, and that together with 5G URLLC capabilities, the two key technologies can be combined and integrated to provide deterministic connectivity end to end. It also discusses TSN standards and the value of the TSN toolbox for next generation industrial automation networks.
Ericsson Technology Review: Meeting 5G latency requirements with inactive stateEricsson
Low latency communication and minimal battery consumption are key requirements of many 5G and IoT use cases, including smart transport and critical control of remote devices. Thanks to Ericsson’s 4G/5G research activities and lessons learned from legacy networks, we have identified solutions that address both of these requirements by reducing the amount of signaling required during state transitions, and shared our discoveries with the 3GPP.
This Ericsson Technology Review article explains the why and how behind the new Radio Resource Control (RRC) state model in the standalone version of the 5G New Radio standard, which features a new, Ericsson-developed state called inactive. On top of overcoming latency and battery consumption challenges, the new state also increases overall system capacity by decreasing the processing effort in the network.
Ericsson Technology Review: Cloud-native application design in the telecom do...Ericsson
Cloud-native application design is set to become standard practice in the telecom industry in the near future due to the major efficiency gains it can provide, particularly in terms of speeding up software upgrades and releases. At Ericsson, we have been actively exploring the potential of cloud-native computing in the telecom industry since we joined the Cloud Native Computing Foundation (CNCF) a few years ago.
This Ericsson Technology Review article explains the opportunities that CNCF technology has enabled, as well as unveiling key aspects of our application development framework, which is designed to help navigate the transition to a cloud-native approach. It also discusses the challenges that the large-scale reuse of open-source technology can raise, along with key strategies for how to mitigate them.
Ericsson Technology Review: Service exposure: a critical capability in a 5G w...Ericsson
To meet the requirements of use cases in areas such as the Internet of Things, AR/VR, Industry 4.0 and the automotive sector, operators need to be able to provide computing resources across the whole telco domain – all the way to the edge of the mobile network. Service exposure and APIs will play a key role in creating solutions that are both effective and cost efficient.
The latest Ericsson Technology Review article explores recent advances in the service exposure area that have resulted from the move toward 5G and the adoption of cloud-native principles, as well as the combination of Service-based Architecture, microservices and container technologies. It includes examples that illustrate how service exposure can be deployed in a multitude of locations, each with a different set of requirements that drive modularity and configurability needs.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Epistemic Interaction - tuning interfaces to provide information for AI support
MDCT audio coding with pulse vector quantizers
Jonas Svedberg, Volodya Grancharov, Sigurdur Sverrisson, Erik Norvell, Tomas Toftgård, Harald
Pobloth, and Stefan Bruhn
SMN, Ericsson Research, Ericsson AB
164 80, Stockholm, Sweden
ABSTRACT
This paper describes a novel audio coding algorithm that is a
building block in the recently standardized 3GPP EVS codec [1].
The presented scheme operates in the Modified Discrete Cosine
Transform (MDCT) domain and deploys a Split-PVQ pulse coding
quantizer, a noise-fill, and a gain control optimized for the
quantizer’s properties. A complexity analysis in terms of WMOPS
is presented to illustrate that the proposed Split-PVQ concept and
dynamic range optimized MPVQ-indexing are suitable for real-time
audio coding. Test results from formal MOS subjective
evaluations and objective performance figures are presented to
illustrate the competitiveness of the proposed algorithm.
Index Terms— Audio Coding, MDCT, PVQ, Noise-Fill, EVS
1. INTRODUCTION
In the art of high quality speech and audio coding there is a need to
efficiently quantize the residual signals after an initial dynamic
compression (or prediction) step. At high rates a scalar quantizer
(SQ) followed by entropy coding has been in widespread use [2],
while at medium rates trained codebooks [3] or lattice vector
quantizers (LVQs) [4] have been in use. The main disadvantage
with trained codebooks is that they do not scale well to higher rates
in terms of storage and search complexity. Scalar quantization with
entropy coding has the drawback that the entropy statistics need to
be stored and trained on representative material, and the output is
inherently variable rate (requiring a rate control loop). For low rate
speech coding fixed rate sparse, multi-pulse and phase structured
pulse vector quantizers have been in successful use since the early
90’s [5]. At low rates they have provided a good trade-off between
performance and implementation complexity in terms of both
storage and cycles; however, without an extendable structure, they
have been difficult to apply to higher bit rates. An alternative to
pulse vector quantizers at higher rates is lattice quantizers, such
as D8 [6] and rotated E8 [4]. Due to their regular structure they
have very good search properties. However, the handling of lattice
outliers and the task of indexing a point in an extended lattice and
reconstructing the point from the received index may be
cumbersome. Further, lattice quantizers require entropy coding to
provide good performance [7], and they have a dimensional
rigidity, using fixed vector lengths such as 8 or 24 [8].
The Pyramid Vector Quantizer (PVQ) is a structured pulse
vector quantizer introduced in the 80’s by Fischer [9] [10] (for
Laplacian sources). The pyramid structure enables an efficient
search, and by limiting the indexing problem to smaller controlled
size sub-sections in [11], [12] and [13], it has been shown that it
may give a good tradeoff between performance and complexity.
When employed in a transform domain codec, the energy
focusing property of a pulse based quantizer may at lower rates
result in uncovering perceptual artifacts, such as local energy
losses, which need to be addressed in a post-processing step. In
this paper we first give an overview of the MDCT based codec,
followed by a presentation of novel modules used to quantize and
post-process the band normalized transform coefficients using a
PVQ. Finally, to demonstrate the performance of the introduced
Split-PVQ concept and the pulse coding optimized noise-fill,
subjective and objective measurements are presented.
2. CODEC OVERVIEW
The proposed MDCT audio codec operates along the lines of
the ITU-T G.719 framework [6]. As a coding mode in the EVS
codec, it handles sample rates of 16/32/48 kHz (WB/SWB/FB) and
is used at bit rates of 24.4, 32 and 64 kbps. An overview of the
system is depicted in Figure 1. Similar to the G.719 scheme it
operates on overlapping input audio frames x(t), but here a
reduced algorithmic delay is achieved by using an alternative
asymmetric window [14] which has a smaller overlap compared to
the standard sinusoid window. The windowed, time-aliased input
x̃(t) is transformed to MDCT [15] coefficients y(k) using DCT-IV:

y(k) = Σ_{t=0}^{L-1} x̃(t) cos( (π/L)(t + 1/2)(k + 1/2) ),  k = 0, …, L-1.   (1)
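As an illustration, the transform in Eq. (1) can be sketched as a direct matrix product. This is a naive O(L²) reference, not the optimized implementation used in the codec, and `dct_iv` is a hypothetical helper name:

```python
import numpy as np

def dct_iv(x_windowed):
    """DCT-IV of a windowed, time-aliased frame, per Eq. (1):
    y(k) = sum_t x~(t) * cos((pi/L) * (t + 1/2) * (k + 1/2))."""
    L = len(x_windowed)
    t = np.arange(L)
    k = np.arange(L)[:, None]
    basis = np.cos(np.pi / L * (t + 0.5) * (k + 0.5))
    return basis @ x_windowed
```

A useful property of DCT-IV is that it is its own inverse up to a factor 2/L, so applying `dct_iv` twice and scaling by 2/L recovers the input frame.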
[Figure 1: Overview of the MDCT based codec system. Encoder: adaptive
windowing and transform, spectrum normalization, norm calculation and
coding, bit allocation, and PVQ gain & shape quantization (with noise
gain ĝ_err). Decoder: norm decoding, bit allocation, PVQ gain & shape
decoding, noise-fill, spectrum scaling, and inverse transform with
overlap-add.]
The MDCT spectrum is partitioned into bands, where each band b
is associated with a norm factor E(b), a quantized norm Ê(b) and
a norm index I(b). The set of I(b) are input to a bit distribution
algorithm to produce a bit allocation B(b). Each band is
normalized using the quantized norm, y_n(b) = y(b)/Ê(b), with
y(b) = [y(s_b), y(s_b+1), …, y(e_b)]^T, where s_b and e_b denote the
start and end indices of band b. The band structure for 48 kHz is
the same as in [6], but a complete list of band structures can be
found in [14]. The normalized bands are then encoded using the
PVQ described in section 3. Additional fine gain adjustments are
done for each band as outlined in section 4. The decoder may
reconstruct the encoded spectrum from the encoded parameters by
reversing the steps of the encoder. The bands with zero allocated
bits, B(b) = 0, are filled using methods described in section 5.
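A minimal sketch of the per-band normalization step, assuming band boundaries are given as inclusive (s_b, e_b) index pairs; `normalize_bands` is an illustrative helper name, not the codec's API:

```python
import numpy as np

def normalize_bands(y, band_edges, E_hat):
    """Return y_n(b) = y(b) / E_hat(b) for each band b.

    band_edges: list of inclusive (s_b, e_b) index pairs into y.
    E_hat: quantized norm per band (assumed nonzero).
    """
    return [np.asarray(y[s:e + 1], dtype=float) / E_hat[b]
            for b, (s, e) in enumerate(band_edges)]
```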
3. QUANTIZER
The normalized MDCT-coefficients in each band are quantized
using a Split-PVQ concept. Bands where the allocated bits per
band (excluding bits for fine gain coding) R(b) are less than a
threshold are directly quantized into an energy normalized PVQ
vector ŷ_n. Bands with a higher number of allocated bits are first
split into smaller target segments using an adaptive split
methodology and then each smaller segment is coded. In both
cases the resulting codewords are subsequently encoded by a range
encoder [16] supporting both uniform and tailor-made probability
density functions (PDFs).
3.1. Split-PVQ Methodology
Pulse position coding often results in large codeword indices,
especially for long input vectors, due to the rapidly increasing
number of combinations for large dimensions. For low complexity
implementations a process of distributing the available bits
between shorter sub-segments can be used. Earlier examples of
binary recursive splitting approaches can be found in [11] and in
the indexing of [12]. An issue with those approaches is that the
segmentation may result in a split vector with very different
segment sizes, which may lead to very inefficient positional coding.
To demonstrate the inefficiency of non-uniform segmentation,
consider a split of a 16-dimensional vector in two ways: A)
symmetric (8+8) and B) asymmetric (2+14). Assume there are 2
pulses to code in each segment; then, not considering overlap and
sign, the number of codewords in each segment is N!/(K!(N−K)!),
where N is the segment dimension and K is the number of unit
pulses. This results in A) 28+28 = 56 combinations, and B) 1+91 =
92 combinations, which motivates the usage of more symmetric
segmentations. In the presented scheme, an algorithm for band
splitting is activated when the bit budget for a particular
band R(b) exceeds a pre-determined threshold. The input vector
y_n(b) is split into uniform (or close to uniform) segments y_seg in a
non-recursive way. The number of segments is calculated as:

N_p = R(b) / (R_segmax + R_splitcost),   (2)

where R_segmax = 32 is the maximum allowed number of bits for a
segment and R_splitcost is an experimentally determined split
overhead cost. If the number of allocated bits for each segment is
close to R_segmax and the variance of the segment energies is high,
the number of splits is increased by one, allowing for higher
flexibility in the segment bit allocation.
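The codeword-count comparison and the segment count of Eq. (2) can be checked numerically. The value R_splitcost = 4 and the floor rounding are illustrative assumptions; the paper states only that the split cost is experimentally determined:

```python
from math import comb

def num_segments(R_b, R_segmax=32, R_splitcost=4):
    """Eq. (2): N_p = R(b) / (R_segmax + R_splitcost).

    R_splitcost = 4 and integer (floor) division are assumptions
    made for this sketch only.
    """
    return R_b // (R_segmax + R_splitcost)

# Positional codeword counts N!/(K!(N-K)!) for the 16-dim example
# (K = 2 pulses per segment, ignoring overlap and sign):
symmetric = comb(8, 2) + comb(8, 2)    # split A: 8 + 8
asymmetric = comb(2, 2) + comb(14, 2)  # split B: 2 + 14
```

The symmetric split needs 56 positional codewords versus 92 for the asymmetric one, matching the figures in the text.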
Each segment’s shape is individually quantized and energy
ratios describe the relation between the segment energies. A
difficulty in this approach comes from the fact that energy
variations between different segments might be large, which is
unfavorable for the coding of relative energies. This is managed by
calculating angles representing energy ratios between a left and a
right group of segments, with as equal a number of segments as
possible, using a recursive top-down approach. At level l of the
recursion the angle is determined as

α^l = arctan( √( E_R^l / E_L^l ) ),

where E_L^l and E_R^l are the energies of the left and right groups. In
case of even N_p, the first level angle will represent the energy
ratio between a left and right group with an equal number of
segments. At each step of the recursion additional angles within
each group are determined until the left and the right groups
consist of one segment each. In case of an odd number of splits,
the angle calculated in the first iteration will have a larger number
of segments in the right group than in the left.
The angles are additionally used to recursively distribute bits
to the already determined segments. At each level of recursion the
bits R_L^l and R_R^l allocated to the left and the right groups are
determined from the available number of bits R^l (after angle
coding), the lengths of the groups L_L^l and L_R^l (number of
coefficients), and the angle α^l by:

R_L^l = ( R^l − 2·log2(tan α^l)·L_R^l ) / ( 1 + L_R^l / L_L^l ),   R_R^l = R^l − R_L^l.   (3)
E.g., if the allocated bits for the PVQ, $R(b)$, require a split into
three segments, the angle calculation and bit distribution are done as
follows:
Step 0: The angle $\alpha^0$ is used to distribute the bits $R^0$, i.e.
$R(b)$ minus the bits used to code $\alpha^0$, to the left and the right
groups of segments ($R_L^0$ and $R_R^0$). The left group includes just one
segment while the right group includes two segments.
Step 1: The right group from Step 0 is split in two groups,
where the angle $\alpha_R^1$ is used to distribute the remaining bits of $R_R^0$
(after coding of $\alpha_R^1$) to $R_{RL}^1$ (the second segment) and $R_{RR}^1$ (the third
segment). The bits $R_L^0$ are directly used to code the first segment.
3.2. Normalization and Shape Search
For quantization of each split segment $\mathbf{y}_{seg}$ we employ a unit
hyper-sphere normalization approach similar to the concept in [8],
however now applied in the PVQ context (as in [12]). The
quantized split segment $\hat{\mathbf{y}}_{seg}(g_{seg}, N, K)$ is defined as:
$$\hat{\mathbf{y}}_{seg}(g_{seg}, N, K) = g_{seg}\,\frac{\mathbf{z}_{N,K}}{\sqrt{\mathbf{z}_{N,K}^T \mathbf{z}_{N,K}}}, \quad (4)$$
$$\mathbf{z}_{N,K}:\ \sum_{i=0}^{N-1} \left| z_i \right| = K, \quad N = 1, \ldots, 64, \quad K = 1, \ldots, 512, \quad (5)$$
where $\mathbf{z}_{N,K}$ is an integer point on the surface of an $N$-dimensional
hyper-pyramid such that the L1 norm of $\mathbf{z}_{N,K}$ is $K$, and $g_{seg}$ is
the segment gain ($\cos(\alpha)$ or $\sin(\alpha)$ at the lowest split level). The
goal of the PVQ search is to minimize the shape error between the
split segments segy and segyˆ . The search is performed by first
using a projection [9] to a lower sub-pyramid, followed by an
iterative approach (as described in [12]) of evaluating the addition
of a single unit pulse to each dimension until the target pyramid is
reached. Further, the search keeps track of the previous pyramid’s
maximum accumulated amplitude, to speed up the innermost loop
in a fixed point implementation.
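The greedy part of the search can be sketched as follows. This simplified floating-point version omits the sub-pyramid pre-projection and the accumulated-amplitude tracking, so all $K$ pulses are placed by the iterative loop; `pvq_search` is an illustrative name, not the EVS routine:

```python
def pvq_search(x, k):
    """Greedy PVQ shape search: place k unit pulses so that the integer
    vector z (with sum(|z_i|) == k) maximizes <|x|, z> / ||z||."""
    n = len(x)
    ax = [abs(v) for v in x]
    z = [0] * n
    corr, energy = 0.0, 0.0        # running <|x|, z> and ||z||^2
    for _ in range(k):
        best_i, best_num, best_den = -1, 0.0, 1.0
        for i in range(n):
            num = corr + ax[i]               # correlation if a pulse lands at i
            den = energy + 2.0 * z[i] + 1.0  # squared norm after that pulse
            # maximize num^2 / den via cross-multiplication (no division)
            if best_i < 0 or num * num * best_den > best_num * best_num * den:
                best_i, best_num, best_den = i, num, den
        z[best_i] += 1
        corr, energy = best_num, best_den
    # re-apply the signs of the input target
    return [zi if xi >= 0 else -zi for zi, xi in zip(z, x)]
```

The resulting $\mathbf{z}_{N,K}$ would then be normalized and scaled by $g_{seg}$ as in Eq. (4).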
3.3. Split-PVQ Codeword Encoding
For each split angle $\alpha$, angle bits are allocated based on the total
number of bits allocated for the two groups of segments and the
length $L_L + L_R$. The angles are uniformly quantized and then
encoded by the range encoder. Efficient coding is achieved by
modelling the angle statistics using an asymmetric triangular PDF,
where the asymmetry is given by the ratio $L_L / L_R$.
The $\mathbf{z}_{N,K}$ vector is then indexed using a method that has
similarities to both the product code enumeration in [11] and the
optimized Fischer recursion in [12]. Two shape codewords are
composed as follows: a first codeword representing the first sign
encountered in the vector, independent of its position; a second
codeword representing (in a recursive fashion) all the remaining
pulses in the remaining vector, which is now guaranteed to have a
leading positive pulse. The second codeword is enumerated using
the structure displayed in Table 1. The recursive structure defines
the $U(N,K)$ offset matrix and enables the recursion computations
to stay within the $B-1$ bit dynamics of a $B$-bit signed integer.
Lead value        | Section size        | Section definition
$K$               | $1$                 | The all-pulses-consumed case; zeros in the remaining dimensions.
$K-1, \ldots, 1$  | $2 \cdot U(N,K)$    | All initial pulse amplitude cases with a subsequent new leading sign (positive or negative).
$0$               | $N_{MPVQ}(N-1,K)$   | The no-initial-pulse-consumed cases; the current leading sign is kept for the next dimension.
Table 1: Modular-PVQ (MPVQ) indexing structure
From Table 1 we find that the total number of entries (with the
very first leading sign information removed) can be expressed as:
$$N_{MPVQ}(N,K) = 1 + 2 \cdot U(N,K) + N_{MPVQ}(N-1,K). \quad (6)$$
Combining (6) with Fischer's PVQ recursion in [9], we find that
the total number of entries can be expressed as:
$$N_{MPVQ}(N,K) = 1 + U(N,K) + U(N,K+1). \quad (7)$$
Runtime-computed values of the $U(N,K)$ matrix may now be used
as the basis for the MPVQ indexing, and the update of the
symmetric $U$ matrix from row $N-1$ to row $N$ can be performed as:
$$U(N,K+1) = 1 + U(N-1,K) + U(N-1,K+1) + U(N,K), \quad (8)$$
with initial conditions $U(N,0) = U(N,1) = U(0,K) = U(1,K) = 0$.
The two short codewords, of size 1 and size $B-1$, are finally sent to
the range encoder using a uniform PDF.
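The size recursions (6)–(8) can be cross-checked numerically. The sketch below (function names are assumptions of this sketch) builds $U$ through the recursion of Eq. (8) rewritten in $K$, and compares $N_{MPVQ}$ of Eq. (7) against Fischer's PVQ count:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def offset_u(n, k):
    """U(N,K) via Eq. (8) rewritten as a recursion in K, with the
    initial conditions U(N,0) = U(N,1) = U(0,K) = U(1,K) = 0."""
    if n <= 1 or k <= 1:
        return 0
    return 1 + offset_u(n - 1, k - 1) + offset_u(n - 1, k) + offset_u(n, k - 1)

def n_mpvq(n, k):
    """Total number of MPVQ entries, Eq. (7)."""
    return 1 + offset_u(n, k) + offset_u(n, k + 1)

@lru_cache(maxsize=None)
def n_pvq(n, k):
    """Fischer's PVQ recursion [9], for cross-checking:
    N(n,k) = N(n-1,k) + N(n,k-1) + N(n-1,k-1)."""
    if k == 0:
        return 1
    if n == 0:
        return 0
    return n_pvq(n - 1, k) + n_pvq(n, k - 1) + n_pvq(n - 1, k - 1)
```

Since removing the leading-sign information halves the enumeration range, $N_{PVQ}(N,K) = 2 \cdot N_{MPVQ}(N,K)$ for $K \geq 1$, which is what keeps the MPVQ index within $B-1$ bits.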
4. GAIN CONTROL
The Split-PVQ is a pure shape quantizer, and a fine gain is needed
to scale the quantized shape to the correct level. To determine the
fine gain, the allocated bits $B(b)$ are split into bits for the PVQ
quantizer, $R(b)$, and bits for a fine gain quantizer, $R_{FG}(b)$, according
to:
$$R_{samp}(b) = \max\left(B(b)/L_b,\, 7\right),$$
$$R_{FG}(b) = bits_{FG}\left(R_{samp}(b)\right),$$
$$R(b) = B(b) - R_{FG}(b), \quad (9)$$
where $bits_{FG}(R_{samp})$ is a lookup table for fine gain bits. After
quantizing the shape vector, a fine gain prediction is made using an
accuracy measure $a(b)$ based on the bandwidth $L_b$, the total
number of pulses used in the Split-PVQ encoding, $N_{pulses}(b)$, and
the highest identified integer pulse amplitude $P_{max}(b)$ in the set of
encoded shape vectors $\mathbf{z}_{N,K}$, forming $\hat{\mathbf{y}}_n(b)$, according to:
$$a(b) = N_{pulses}(b) \,/\, \left(L_b \cdot P_{max}(b)\right),$$
$$g_{pred}(b) = 1 - 0.05 / a(b). \quad (10)$$
For bands where $R_{FG}(b) > 0$, a fine gain prediction error is
computed as $g_{err}(b) = g_{fg}(b) / g_{pred}(b)$, where the fine gain is
$$g_{fg}(b) = \mathbf{y}_n^T(b)\,\hat{\mathbf{y}}_n(b) \cdot \left(\hat{\mathbf{y}}_n^T(b)\,\hat{\mathbf{y}}_n(b)\right)^{-1},$$
and $\hat{\mathbf{y}}_n(b)$ has been
normalized to unit root mean square (RMS). The gain prediction
error $g_{err}(b)$ is encoded with a non-uniform scalar quantizer in the
logarithmic domain using $R_{FG}(b)$ bits. The decoder reconstructs
the fine gain adjustment through:
$$\hat{g}_{fg}(b) = \hat{g}_{err}(b) \cdot g_{pred}(b), \quad (11)$$
with $\hat{g}_{err}(b) = 1$ when $R_{FG}(b) = 0$. The final scaling of the
reconstructed band is obtained as:
$$\hat{\mathbf{y}}(b) = \hat{E}(b) \cdot \hat{g}_{fg}(b) \cdot \hat{\mathbf{y}}_n(b). \quad (12)$$
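The gain-control flow of Eqs. (10)–(11), with $a(b) = N_{pulses}(b)/(L_b \cdot P_{max}(b))$ and $g_{pred}(b) = 1 - 0.05/a(b)$, can be sketched as below. These are illustrative helpers; the logarithmic-domain scalar quantization of the prediction error is omitted:

```python
def gain_prediction(n_pulses, l_b, p_max):
    """Eq. (10): accuracy a = N_pulses / (L_b * P_max), then the
    predicted fine gain g_pred = 1 - 0.05 / a."""
    a = n_pulses / (l_b * p_max)
    return 1.0 - 0.05 / a

def fine_gain(y, y_hat):
    """Minimum-mean-square-error fine gain
    g_fg = (y^T y_hat) / (y_hat^T y_hat)."""
    num = sum(a * b for a, b in zip(y, y_hat))
    den = sum(b * b for b in y_hat)
    return num / den

def decode_fine_gain(g_err_hat, g_pred):
    """Eq. (11): reconstructed fine gain adjustment; g_err_hat is 1
    when no fine gain bits were allocated."""
    return g_err_hat * g_pred
```

A dense shape (many pulses, low peak amplitude) gives an accuracy near one and a prediction near unity; a sparse, peaky shape lowers $a(b)$ and shrinks the predicted gain.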
5. NOISE-FILL
The use of a pulse-based VQ may give a peaky and sparse fine
structure, which is less suitable for noise-like signals. The
main spectral filling concept is to use the encoded fine structure to
fill the unencoded bands. This way a similarity to the surrounding
fine structure is preserved. In case of low spectral stability at lower
SWB rates, a two-stage densification procedure is used, as
illustrated in Figure 2.
Figure 2: Dual spectral fill codebook generation. [Diagram: the
quantized bands pass a compression and inclusion decision to form
codebook Y1; a densification stage then forms codebook Y2; the
non-quantized bands are filled from these codebooks.]
A compression of the coded residual vectors is performed as:
$$\hat{y}_{n,comp}(k) = \begin{cases} \mathrm{sign}\left(\hat{y}_n(k)\right), & \hat{y}_n(k) \neq 0, \\ 0, & \hat{y}_n(k) = 0, \end{cases} \quad (13)$$
where $\hat{y}_n(k)$ refers to elements in $\hat{\mathbf{y}}_n(b)$. The virtual codebook
which constitutes the spectral codebook is built from only the non-sparse
sub-vectors of $\hat{y}_{n,comp}(k)$, where each sub-vector has a length
of 8. Compressed sub-vectors that do not fulfill the sparsity criterion:
$$\sum_{k = k_b + 8 i_s}^{k_b + 8 i_s + 7} \left| \hat{y}_{n,comp}(k) \right| \geq 2, \quad i_s = 0, \ldots, L_b/8, \quad (14)$$
with $k_b$ denoting the first coefficient index of band $b$,
are rejected. The remaining compressed sub-vectors are
concatenated into codebook $Y_1(k)$, of length $L_Y$. For a second
codebook $Y_2(k)$, $k = 0, \ldots, L_Y$, a densification process is used:
$$Y_2(k) = \begin{cases} \mathrm{sign}\left(Y_1(k)\right) \times \left(Y_1(k) + Y_1(L_Y - k)\right), & Y_1(k) \neq 0, \\ Y_1(L_Y - k), & Y_1(k) = 0. \end{cases} \quad (15)$$
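The three noise-fill steps (13)–(15) can be sketched as below. These are illustrative helpers; the wrap-around at index 0 in the mirrored lookup is an assumption of this sketch:

```python
def compress(y_hat):
    """Eq. (13): keep only the signs of the nonzero coefficients."""
    return [0 if v == 0 else (1 if v > 0 else -1) for v in y_hat]

def is_non_sparse(sub):
    """Eq. (14): an 8-dim compressed sub-vector is kept only if it
    contains at least two pulses."""
    return sum(abs(v) for v in sub) >= 2

def densify(y1):
    """Eq. (15): fill zeros of Y1 from its mirror Y1(L_Y - k); where
    Y1(k) is nonzero, combine the two under the sign of Y1(k)."""
    L = len(y1)
    y2 = []
    for k in range(L):
        m = y1[(L - k) % L]                  # mirrored element Y1(L_Y - k)
        if y1[k] != 0:
            s = 1 if y1[k] > 0 else -1
            y2.append(s * (y1[k] + m))
        else:
            y2.append(m)
    return y2
```

Densification roughly doubles the pulse density of the codebook, which counteracts the sparseness of the low-rate coded fine structure.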
Figure 3 depicts the filling procedure, where $Y_1(k)$ is used below
the crossover frequency $f_{cb}$ = 4.8 kHz and $Y_2(k)$ is used above.
The spectral fill above the last encoded band is handled the same
way as in [6] for 64 kbps, while for 24.4 and 32 kbps there are
complementary spectral fill methods in use, as described in [14].
Figure 3: Spectral filling from two codebooks
6. PERFORMANCE
6.1. Objective Performance
Figure 4 displays an SNR comparison for mixed and music content
in relation to an RE8+D8-based lattice quantizer [6]. The evaluated
Split-PVQ includes both the shape and the fine gain, as the lattice
quantizer captures both shape and gain of the quantized vectors.
[Figure: segmental SNR (6–26 dB) versus bit rate (16–96 kbps) for
Split-PVQ and for the RE8+D8 lattice quantizer (G.719).]
Figure 4: Segmental SNR (per frame) vs. bit rate
In Figure 5, the complexity in terms of fixed point WMOPS [17]
for the described encoder side quantization algorithms is shown.
[Figure: relative complexity share of split-encoding, pulse-search,
MPVQ-indexing, range-encoding, PVQ-normalization, and gain-prediction
at 24.4, 32, 48, 64, and 96 kbps; total complexity 4.3, 5.6, 8.1,
10.0, and 12.6 WMOPS, respectively.]
Figure 5: Encoding WMOPS vs. bit rate
At the decoder side the MPVQ de-indexing complexity is on
average 25% heavier than the indexing, and the range decoder
complexity is roughly 150% of the range encoding complexity.
6.2. Subjective Evaluation
In addition to the EVS selection tests [18], numerous tests have
been performed by the proponent companies. The listening
environment, levels, etc. in the proponents' tests are described in
[19]. The performance of the technologies described in this paper
is exemplified by the performance of the EVS codec for WB mixed
and music content. Table 2 below gives average results for two
P.800 DCR [20] subjective tests of the EVS codec performed by
the proponents for WB mixed and music content with 20 listeners
in each test. In addition, for SWB mixed and music content, the
proponents observed results that are not worse than the original for
64 kbps. For 32 kbps, some tests revealed statistically significant
improvements compared to G.719@32kbps, while other tests only
concluded an increase of the average MOS score within the
confidence region. It should be noted that for SWB at rates 24.4
and 32 kbps a larger portion of the band is subject to spectral fill
using methods which are outside the scope of this paper. Hence,
the WB results are deemed more relevant for the techniques
described here.
Bit rate | Reference (MOS) | EVS (MOS)
32 kbps  | G.722.1 (4.18)  | 4.42
64 kbps  | G.722 (4.20)    | 4.54
Table 2: Subjective MOS scores (WB P.800 DCR)
7. CONCLUSIONS
The objective and subjective experiments confirm that the
proposed novel MDCT-codec design is superior to several legacy
coding schemes. The low complexity of the introduced Split-PVQ
concept, the dynamic-range-optimized MPVQ indexing, and the
efficient fine gain prediction allows their use in real-time
coding applications. The results from formal MOS subjective
evaluations indicate that the designed system including the novel
noise-fill strategy, tailored for pulse-coding schemes, provides a
significant perceptual improvement.
8. REFERENCES
[1] 3GPP TS 26.441, “EVS Codec General Overview”, 2014
[2] 3GPP TS 26.401, “Enhanced aacPlus general audio codec
General Description”, 2004
[3] K. Ikeda, T. Moriya, N. Iwakami, S. Miki, "A design of TwinVQ
audio codec for personal communication systems", IEEE
ICUPC 1995, pp. 803-807, 1995
[4] J. Makinen, B. Bessette, S. Bruhn, P. Ojala, R. Salami, A.
Taleb, “AMR-WB+: a new audio coding standard for 3rd
generation mobile audio services”, ICASSP 2005, pp.
ii/1109- ii/1112 , Vol. 2, 2005
[5] R. Salami, C. LaFlamme, J-P. Adoul, D. Massaloux, "A Toll
Quality 8 kb/s Speech Codec for the Personal
Communications System (PCS)", IEEE Trans. on Vehicular
Technology, Vol. 43, No. 3, August 1994
[6] ITU-T Rec. G.719, “Low-complexity full-band audio coding
for high-quality conversational applications”, 2008
[7] A. Gersho, R. Gray, "Vector Quantization and Signal
Compression", Kluwer Academic Publishers, 1992
[8] J.P. Adoul, C. Lamblin, A. Le Guyader, "Baseband speech
coding at 2400 bps using 'Spherical Vector Quantization'",
ICASSP 1984, pp. 45-48, Vol. 9, 1984
[9] T.R. Fischer, "A Vector Laplacian Quantizer", Proceedings
21st Annual Allerton Conf. on Comm., Control and
Computing, October 5-7, pp. 501-510, 1983
[10] T. Fischer, “A Pyramid Vector Quantizer,” IEEE Trans. Inf.
Theory, vol IT-32, NO. 4, 1986
[11] A.C. Hung, T.H-Y Meng, “Error Resilient Pyramid Vector
Quantization for Image Compression”, in Proc. IEEE Int.
Conf. Image Processing, 1994, pp 583-587, 1994
[12] J.-M. Valin, T. B. Terriberry, G. Maxwell, “A Full-
Bandwidth Audio Codec with Low Complexity and Very
Low Delay”, Proc. EUSIPCO, 2009
[13] J-M. Valin, K. Vos, T. Terriberry, "Definition of the Opus
Audio Codec", IETF RFC 6716, September 2012
[14] 3GPP TS 26.445, “EVS Codec Detailed Algorithmic
Description”, 2014
[15] H. S. Malvar, “Signal Processing with Lapped Transforms”,
Norwood, MA, Artech House, 1992
[16] G. N. N. Martin, "Range encoding: An algorithm for
removing redundancy from a digitized message", Video &
Data Recording Conf., 1979
[17] ITU-T Rec. G.191, “Software tools for speech and audio
coding standardization”, “BASOP: ITU-T Basic Operators”,
2009
[18] 3GPP Tdoc S4-141065, “Report of the Global Analysis Lab
for the EVS Selection Phase”, Dynastat Inc., 2014
[19] 3GPP Tdoc AHEVS-311, “EVS Permanent Document EVS-
8b: Test plans for selection phase including lab task
specification”, 2014
[20] ITU-T Rec. P.800, “Methods for Subjective Determination
of Transmission Quality”, 1996