H.264/AVCis currently the most widely adopted video coding standard due to its high compression capability and flexibility. However, compressed videos are highly vulnerable to channel errors which may result in severe quality degradation of a video. This paper presentsa concealment aware Unequal Error Protection (UEP) scheme for H.264 video compression using Reed Solomon (RS) codes. The proposed UEP technique assigns a code rate to each Macroblock (MB) based on the type of concealment and a Concealment Dependent Index (CDI). Two interleaving techniques, namely Frame Level Interleaving (FLI) and Group Level Interleaving (GLI) have also been employed. Finally, prioritised concealment is applied in cases where error correction is beyond the capability of the RS decoder. Simulation results have demonstrated that the proposed framework provides an average gain of 2.96 dB over a scheme that used Equal Error Protection (EEP).
Chaos Encryption and Coding for Image Transmission over Noisy Channelsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
FPGA Implementation of LDPC Encoder for Terrestrial TelevisionAI Publications
The increasing data rates in digital television networks increase the demands on data capacity of the current transmission channels. Through new standards, the capacity of existing channels is increased with new methods of error correction coding and modulation. In this work, Low Density Parity Check (LDPC) codes are implemented for their error correcting capability. LDPC is a linear error correcting code. These linear error correcting codes are used for transmitting a message over a noisy transmission channel. LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over noisy channels. These codes are capable of performing near to Shannon limit performance and have low decoding complexity. LDPC uses parity check matrix for its encoding and decoding purpose. The main advantage of the parity check matrix is that it helps in detecting and correcting errors which is a very important advantage against noisy channels. This work presents the design and implementation of a LDPC encoder for transmission of digital terrestrial television according to the Chinese DTMB standard. The system is written in Verilog and is implemented on FPGA. The whole work is then verified with the help of Matlab modelling.
Survey on Error Control Coding TechniquesIJTET Journal
This document discusses various error control coding techniques used to ensure correct data transmission over noisy channels. It describes automatic repeat request and forward error correction as the two main approaches. Specific coding schemes covered include parity codes, Hamming codes, BCH codes, Reed-Solomon codes, LDPC codes, convolutional codes, and turbo codes. Reed-Solomon codes can correct multiple burst errors with high code rates. LDPC codes provide performance close to the Shannon limit with lower complexity than turbo codes. The document provides an overview of the coding techniques and their encoding and decoding processes.
Multicasting Of Adaptively-Encoded MPEG4 Over Qos-Cognizant IP NetworksEditor IJMTER
we propose a novel architectural planning for multicasting of adaptively-encoded
layered MPEG4 over a QoS-aware IP network. We re-quire a QoS-aware IP network in this case to
(1) Support priority dropping of packets in time of congestion. (2) Provide congestion notification to
the multicast sender. For the first requirement, we use RED's extension for service differentiation. It
recognizes the priority of packets when they need to be dropped and drops lower priority packets
first. We couple RED with our proposal for the second requirement which is the adoption of
Backward Explicit Congestion Notification (BECN) for use with IP multicast. BECN will provide
early congestion notification at the IP layer level to the video sender. BECN detects upcoming
congestion based on size of the RED queue in the routers. The MPEG4 adaptive-encoder can change
the sending rate and also can divide the video packets into lower priority packets and high priority
packets. Based on BECN messages from the routers, a simple flow controller at the sender sets the
rate for the adaptive MPEG4 encoder and also sets the ratio between the high priority and low
priority packets within the video stream. We use a TES model for generating the MPEG4 traffic that
is based on real video traces. Simulation results show that combining priority dropping, MPEG4
adaptive encoding, and multicast BECN: (1) Improves bandwidth utilization (2) Reduces time to
react to congestion and hence improves the received video quality (3) Maintains graceful degradation
in quality with congestion and provides minimum quality even if congestion persists.
Hardware implementation of (63, 51) bch encoder and decoder for wban using lf...ijitjournal
Error Correcting Codes are required to have a reliable communication through a medium that has an
unacceptable bit error rate and low signal to noise ratio. In IEEE 802.15.6 2.4GHz Wireless Body Area
Network (WBAN), data gets corrupted during the transmission and reception due to noises and
interferences. Ultra low power operation is crucial to prolong the life of implantable devices. Hence simple
block codes like BCH (63, 51, 2) can be employed in the transceiver design of 802.15.6 Narrowband PHY.
In this paper, implementation of BCH (63, 51, t = 2) Encoder and Decoder using VHDL is discussed. The
incoming 51 bits are encoded into 63 bit code word using (63, 51) BCH encoder. It can detect and correct
up to 2 random errors. The design of an encoder is implemented using Linear Feed Back Shift Register
(LFSR) for polynomial division and the decoder design is based on syndrome calculator, inversion-less
Berlekamp-Massey algorithm (BMA) and Chien search algorithm. Synthesis and simulation were carried
out using Xilinx ISE 14.2 and ModelSim 10.1c. The codes are implemented over Virtex 4 FPGA device and
tested on DN8000K10PCIE Logic Emulation Board. To the best of our knowledge, it is the first time an
implementation of (63, 51) BCH encoder and decoder carried out.
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb...TELKOMNIKA JOURNAL
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm has been proposed for Block Turbo Codes (BTCs). The algorithm ingeniously adapts the RLL based decoding algorithm for the constituent block codes, which is a soft-input hard-output algorithm. The extrinsic information is calculated using the reliability of these hard-output decisions and is passed as soft-input to the iterative turbo decoding process. RLL based decoding of constituent codes estimate the optimal transmitted codeword through a directed minimal search. The proposed RLL based decoder for the constituent code replaces the Chase-2 based constituent decoder in the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear advantage of performance improvement over conventional Chase-2 based SISO decoding scheme with reduced decoding latency at lower noise levels.
This document summarizes a research paper that investigates the effects of different group of pictures (GOP) sizes on the quality of reconstructed multiview video content transmitted over error prone channels. It finds that while larger GOP sizes allow for greater compression, they can also propagate errors spatially and temporally. It proposes a multi-layer data partitioning technique to make the multiview video bitstream more robust to errors, and implements frame copy error concealment in the decoder to replace lost information. Simulation results show the multi-layer approach performs better than standard H.264 data partitioning at higher error rates.
This document summarizes forward error correction techniques. It discusses how FEC works by adding redundant data to transmitted messages to allow errors to be detected and corrected without retransmission. It then describes various types of FEC coding including block coding, convolutional coding, turbo codes, and low-density parity check coding. It also discusses how techniques like concatenating codes and interleaving can be used to further reduce errors.
Chaos Encryption and Coding for Image Transmission over Noisy Channelsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
FPGA Implementation of LDPC Encoder for Terrestrial TelevisionAI Publications
The increasing data rates in digital television networks increase the demands on data capacity of the current transmission channels. Through new standards, the capacity of existing channels is increased with new methods of error correction coding and modulation. In this work, Low Density Parity Check (LDPC) codes are implemented for their error correcting capability. LDPC is a linear error correcting code. These linear error correcting codes are used for transmitting a message over a noisy transmission channel. LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over noisy channels. These codes are capable of performing near to Shannon limit performance and have low decoding complexity. LDPC uses parity check matrix for its encoding and decoding purpose. The main advantage of the parity check matrix is that it helps in detecting and correcting errors which is a very important advantage against noisy channels. This work presents the design and implementation of a LDPC encoder for transmission of digital terrestrial television according to the Chinese DTMB standard. The system is written in Verilog and is implemented on FPGA. The whole work is then verified with the help of Matlab modelling.
Survey on Error Control Coding TechniquesIJTET Journal
This document discusses various error control coding techniques used to ensure correct data transmission over noisy channels. It describes automatic repeat request and forward error correction as the two main approaches. Specific coding schemes covered include parity codes, Hamming codes, BCH codes, Reed-Solomon codes, LDPC codes, convolutional codes, and turbo codes. Reed-Solomon codes can correct multiple burst errors with high code rates. LDPC codes provide performance close to the Shannon limit with lower complexity than turbo codes. The document provides an overview of the coding techniques and their encoding and decoding processes.
Multicasting Of Adaptively-Encoded MPEG4 Over Qos-Cognizant IP NetworksEditor IJMTER
we propose a novel architectural planning for multicasting of adaptively-encoded
layered MPEG4 over a QoS-aware IP network. We re-quire a QoS-aware IP network in this case to
(1) Support priority dropping of packets in time of congestion. (2) Provide congestion notification to
the multicast sender. For the first requirement, we use RED's extension for service differentiation. It
recognizes the priority of packets when they need to be dropped and drops lower priority packets
first. We couple RED with our proposal for the second requirement which is the adoption of
Backward Explicit Congestion Notification (BECN) for use with IP multicast. BECN will provide
early congestion notification at the IP layer level to the video sender. BECN detects upcoming
congestion based on size of the RED queue in the routers. The MPEG4 adaptive-encoder can change
the sending rate and also can divide the video packets into lower priority packets and high priority
packets. Based on BECN messages from the routers, a simple flow controller at the sender sets the
rate for the adaptive MPEG4 encoder and also sets the ratio between the high priority and low
priority packets within the video stream. We use a TES model for generating the MPEG4 traffic that
is based on real video traces. Simulation results show that combining priority dropping, MPEG4
adaptive encoding, and multicast BECN: (1) Improves bandwidth utilization (2) Reduces time to
react to congestion and hence improves the received video quality (3) Maintains graceful degradation
in quality with congestion and provides minimum quality even if congestion persists.
Hardware implementation of (63, 51) bch encoder and decoder for wban using lf...ijitjournal
Error Correcting Codes are required to have a reliable communication through a medium that has an
unacceptable bit error rate and low signal to noise ratio. In IEEE 802.15.6 2.4GHz Wireless Body Area
Network (WBAN), data gets corrupted during the transmission and reception due to noises and
interferences. Ultra low power operation is crucial to prolong the life of implantable devices. Hence simple
block codes like BCH (63, 51, 2) can be employed in the transceiver design of 802.15.6 Narrowband PHY.
In this paper, implementation of BCH (63, 51, t = 2) Encoder and Decoder using VHDL is discussed. The
incoming 51 bits are encoded into 63 bit code word using (63, 51) BCH encoder. It can detect and correct
up to 2 random errors. The design of an encoder is implemented using Linear Feed Back Shift Register
(LFSR) for polynomial division and the decoder design is based on syndrome calculator, inversion-less
Berlekamp-Massey algorithm (BMA) and Chien search algorithm. Synthesis and simulation were carried
out using Xilinx ISE 14.2 and ModelSim 10.1c. The codes are implemented over Virtex 4 FPGA device and
tested on DN8000K10PCIE Logic Emulation Board. To the best of our knowledge, it is the first time an
implementation of (63, 51) BCH encoder and decoder carried out.
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb...TELKOMNIKA JOURNAL
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm has been proposed for Block Turbo Codes (BTCs). The algorithm ingeniously adapts the RLL based decoding algorithm for the constituent block codes, which is a soft-input hard-output algorithm. The extrinsic information is calculated using the reliability of these hard-output decisions and is passed as soft-input to the iterative turbo decoding process. RLL based decoding of constituent codes estimate the optimal transmitted codeword through a directed minimal search. The proposed RLL based decoder for the constituent code replaces the Chase-2 based constituent decoder in the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear advantage of performance improvement over conventional Chase-2 based SISO decoding scheme with reduced decoding latency at lower noise levels.
This document summarizes a research paper that investigates the effects of different group of pictures (GOP) sizes on the quality of reconstructed multiview video content transmitted over error prone channels. It finds that while larger GOP sizes allow for greater compression, they can also propagate errors spatially and temporally. It proposes a multi-layer data partitioning technique to make the multiview video bitstream more robust to errors, and implements frame copy error concealment in the decoder to replace lost information. Simulation results show the multi-layer approach performs better than standard H.264 data partitioning at higher error rates.
This document summarizes forward error correction techniques. It discusses how FEC works by adding redundant data to transmitted messages to allow errors to be detected and corrected without retransmission. It then describes various types of FEC coding including block coding, convolutional coding, turbo codes, and low-density parity check coding. It also discusses how techniques like concatenating codes and interleaving can be used to further reduce errors.
A new channel coding technique to approach the channel capacityijwmn
After Shannon’s 1948 channel coding theorem, we have witnessed many channel coding techniques developed to achieve the Shannon limit. A wide range of channel codes is available with different complexity levels and error correction performance. Many powerful coding schemes have been deployed in the power-limited Additive White Gaussian Noise (AWGN) channel. However, it seems like we have arrived at an end of advancement path, for most of the existing channel codes. This article introduces a new coding technique that can either be used as the last coding stage of concatenated coding scheme or in parallel configuration with other powerful channel codes to achieve reliable error performance with moderately complex decoding. We will go through an example to understand the overall approach of the proposed coding technique, and finally we will look at some simulation results over an AWGN channel to demonstrate its potential.
1) The document describes a modification to the Huffman coding used in JPEG image compression. It proposes pairing each non-zero DCT coefficient with the run-length of subsequent (rather than preceding) zero coefficients.
2) This allows using separate optimized Huffman code tables for each DCT coefficient position, improving compression by 10-15% over standard JPEG coding.
3) The decoding procedure is not changed and no end-of-block marker is needed, providing advantages with no increase in complexity.
FEC-Forward Error Correction for Optics Professionals..www.mapyourtech.comMapYourTech
Forward error correction (FEC) adds redundancy to transmitted data to allow the detection and correction of errors without retransmission. FEC works by encoding data at the transmitter and decoding it at the receiver. It allows reliable data transmission over noisy communication channels and improves performance metrics like bit error rate. Common FEC codes include Reed-Solomon codes, which offer good error correction ability and are widely used in optical communication systems to improve transmission distance and efficiency.
The document summarizes a project report on comparing the MPEG-2 and H.264 video coding standards, with a focus on their main profiles. It finds that while MPEG-2 is widely used in digital broadcasting and DVD applications, H.264 provides better compression performance. However, MPEG-2 and H.264 are incompatible, but this can be addressed through transcoding. The report discusses the MPEG-2 and H.264 standards in detail and compares their encoding schemes, profiles and levels before analyzing different transcoding methods.
This document contains summaries of 8 units that cover topics in packet networks, TCP/IP, ATM networks, network management, security, QoS, resource allocation, VPNs, tunneling, overlay networks, compression of digital voice and video, VoIP, multimedia networking, mobile ad-hoc networks, and wireless sensor networks. It includes questions related to these topics and requests explanations of concepts like datagram and virtual circuits, packet switching delays, shortest path algorithms, TCP/IP headers, ATM cell structure, network management functions, encryption algorithms, and ad-hoc and sensor network routing protocols.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document discusses the implementation of a multi-channel HDLC transceiver using an FPGA. It describes the HDLC protocol and frame structure. The transceiver is used to transmit and receive HDLC frames. At the receiver, a single bit error can be detected using Hamming code and corrected. Implementing the multi-channel HDLC transceiver in FPGA provides flexibility, upgradability and customization.
The document contains a chapter with multiple choice, true/false, completion, and short answer questions about TCP/IP concepts including protocols, layers, addressing, and network models. It tests knowledge of the TCP/IP protocol suite including protocols like TCP, UDP, IP, ICMP, ARP, and RARP. It also covers TCP/IP layers and models like the OSI reference model and Cisco three-layer hierarchical model.
Forward Bit Error Correction - Wireless Communications Surya Chandra
As a part of my final project for wireless communications, I created a Matlab simulation for correcting the corrupted data bits (due to noise) at the receiver end using coding techniques, such as, redundancy check, hamming (7,4) code, interleaving and convolutional coding.
Simulation of Turbo Convolutional Codes for Deep Space MissionIJERA Editor
In satellite communication deep space mission are the most challenging mission, where system has to work at very low Eb/No. Concatenated codes are the ideal choice for such deep space mission. The paper describes simulation of Turbo codes in SIMULINK . The performance of Turbo code is depend upon various factor. In this paper ,we have consider impact of interleaver design in the performance of Turbo code. A details simulation is presented and compare the performance with different interleaver design .
1) The document discusses a Max Log MAP algorithm for decoding turbo codes used in LTE systems. Max Log MAP reduces complexity compared to the Log MAP algorithm while maintaining similar decoding performance.
2) Turbo codes achieve error correction close to the Shannon limit through iterative decoding between two soft-input soft-output decoders using extrinsic information.
3) The Max Log MAP algorithm approximates the logarithm calculation in turbo decoding to reduce complexity, maintaining performance close to but slightly worse than the standard Log MAP algorithm.
The Performance Evaluation of IEEE 802.16 Physical Layer in the Basis of Bit ...IJCI JOURNAL
Fixed Broadband Wireless Access is a promising technology which can offer high speed data rate from transmitting end to customer end which can offer high speed text, voice, and video data. IEEE 802.16 WirelessMAN is a standard that specifies medium access control layer and a set of PHY layer to fixed and mobile BWA in broad range of frequencies and it supports equipment manufacturers due to its robust performance in multipath environment. Consequently WiMAX forum has adopted this version to develop the network world wide. In this paper the performance of IEEE 802.16 OFDM PHY Layer has been investigated by using the simulation model in Matlab. The Stanford University Interim (SUI) channel models are selected for the performance evaluation of this standard. The Ideal Channel estimation is considered in this work and the performance evaluation is observed in the basis of BER.
This document contains a summary of topics covered by instructor Ho Vu Anh Tuan including OSI, network basics, IP addressing, routing, and wireless networking. Key points from each section are discussed such as the layers of the OSI model, components of a basic network including hubs, switches and routers, private and public IP addressing, routing protocols like RIP v1 and v2, wireless standards 802.11a/b/g, and wireless security methods like WEP, WPA, and WPA2. Practice questions are also provided to test knowledge of each subject area.
Complexity Analysis in Scalable Video CodingWaqas Tariq
The scalable video coding is the extension of H.264/AVC. The features in scalable video coding, are the standard features in H.264/AVC and some features which is supporting the scalability of the encoder. Those features bring some more complexity in SVC encoder. In this paper, the evaluation of scalability and encoding time (complexity) has been performed. The encoder shows the scalability of the system and quality of the optimized scheme is acceptable.
The document contains 25 questions about networking and telecommunications topics. The questions cover layers of the OSI model, encoding methods, technologies like ATM and SONET, protocols like IP and TCP, and other concepts such as VPNs and IPv6 addressing. The questions are multiple choice with one correct answer per question.
International Journal of Engineering Research and Development is an international premier peer reviewed open access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
An efficient transcoding algorithm for G.723.1 and G.729A ...Videoguy
This document proposes an efficient transcoding algorithm to translate between the G.723.1 and G.729A speech coding standards. The algorithm has four main steps: 1) converting line spectral pair parameters between the two codecs, 2) converting pitch intervals using a smoothing technique, 3) performing a fast adaptive codebook search, and 4) performing a fast fixed codebook search. Objective and subjective tests show the algorithm achieves comparable speech quality to tandem encoding/decoding with 26-38% lower complexity and shorter delay.
Jayachandra Vudumula is seeking a position that allows growth and mutual benefits. He has 3 years of experience as an RTL design engineer developing Ethernet MAC, L2 protocols, SRAM, SPI, UART and interfacing. He is proficient in VHDL, Verilog, C and tools like Quartus. Some of the IP blocks he developed include GFP, Ethernet MAC with DMA, E1 framer, and HDLC. He has expertise in FPGA design flow, protocols, testing tools and Linux. He holds an MSc in Electronics from Sri Krishna Devaraya University with 79%.
This document analyzes the RC4 encryption algorithm and examines how its performance is affected by changing parameters like encryption key length and file size. Experimental tests were conducted to measure encryption time for different key lengths and file types. The results show encryption time increases with longer keys and larger files, and are modeled mathematically. The document also provides background on encryption methods, how RC4 works, and compares stream and block ciphers.
This section provides background on VoIP applications and their components. It discusses how VoIP works by transmitting voice packets over IP networks, unlike traditional PSTN which uses circuit switching. The two main VoIP protocols are H.323 and SIP, which provide call control and setup. UDP is typically used instead of TCP for transmitting VoIP due to its lower overhead and lack of error checking, which is tolerable for voice transmission. IPv4 and IPv6 can both be used to transmit VoIP, with IPv6 having a larger header size.
The impact of jitter on the HEVC video streaming with Multiple CodingHakimSahour
This document discusses the impact of jitter on video quality when streaming HEVC encoded video over wireless networks. It presents a study evaluating the effects of quantization parameter (QP) values, video content, and jitter on quality of experience (QoE). The study finds that using higher QP values, which lowers bitrate and increases compression, degrades video quality as measured by PSNR. It also finds that different video content results in varying PSNR values for the same encoding settings. Additionally, the results show that adjusting the QP value can help recover from the negative effects of jitter on received video quality. The document proposes using multiple description coding (MDC) to further improve transmission over error-prone wireless channels.
The security of multimedia transmission is very important in today’s life. It is a challenge to transfer the
huge multimedia data mostly because of big file sizes and limited bit rates, which are still at a premium.
Hence, the data to be sent has to be compressed. H.264/AVC is the best and practical solution available
today to achieve this. If the data is made secure by good encryption after compression without changing
the size of the compressed file, it is even better. This paper makes an attempt to achieve exactly this. If the
data is encrypted after the compression, the software used for compression need not be changed. In this
technique, use of selective encryption is applied without altering the compression software. Here, the data
is first identified for I-frame which is selected for encryption. This selected data is then sliced and each
slice data without slice header is encrypted using AES algorithm. After decoding the file, the frame can be
seen to be distorted compared to the original one. When the video is decrypted with proper key, the video
obtained which is the same as decoded without encryption.
A new channel coding technique to approach the channel capacityijwmn
After Shannon’s 1948 channel coding theorem, we have witnessed many channel coding techniques developed to achieve the Shannon limit. A wide range of channel codes is available with different complexity levels and error correction performance. Many powerful coding schemes have been deployed in the power-limited Additive White Gaussian Noise (AWGN) channel. However, it seems like we have arrived at an end of advancement path, for most of the existing channel codes. This article introduces a new coding technique that can either be used as the last coding stage of concatenated coding scheme or in parallel configuration with other powerful channel codes to achieve reliable error performance with moderately complex decoding. We will go through an example to understand the overall approach of the proposed coding technique, and finally we will look at some simulation results over an AWGN channel to demonstrate its potential.
1) The document describes a modification to the Huffman coding used in JPEG image compression. It proposes pairing each non-zero DCT coefficient with the run-length of subsequent (rather than preceding) zero coefficients.
2) This allows using separate optimized Huffman code tables for each DCT coefficient position, improving compression by 10-15% over standard JPEG coding.
3) The decoding procedure is not changed and no end-of-block marker is needed, providing advantages with no increase in complexity.
FEC-Forward Error Correction for Optics Professionals..www.mapyourtech.comMapYourTech
Forward error correction (FEC) adds redundancy to transmitted data to allow the detection and correction of errors without retransmission. FEC works by encoding data at the transmitter and decoding it at the receiver. It allows reliable data transmission over noisy communication channels and improves performance metrics like bit error rate. Common FEC codes include Reed-Solomon codes, which offer good error correction ability and are widely used in optical communication systems to improve transmission distance and efficiency.
The document summarizes a project report on comparing the MPEG-2 and H.264 video coding standards, with a focus on their main profiles. It finds that while MPEG-2 is widely used in digital broadcasting and DVD applications, H.264 provides better compression performance. However, MPEG-2 and H.264 are incompatible, but this can be addressed through transcoding. The report discusses the MPEG-2 and H.264 standards in detail and compares their encoding schemes, profiles and levels before analyzing different transcoding methods.
This document contains summaries of 8 units that cover topics in packet networks, TCP/IP, ATM networks, network management, security, QoS, resource allocation, VPNs, tunneling, overlay networks, compression of digital voice and video, VoIP, multimedia networking, mobile ad-hoc networks, and wireless sensor networks. It includes questions related to these topics and requests explanations of concepts like datagram and virtual circuits, packet switching delays, shortest path algorithms, TCP/IP headers, ATM cell structure, network management functions, encryption algorithms, and ad-hoc and sensor network routing protocols.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document discusses the implementation of a multi-channel HDLC transceiver using an FPGA. It describes the HDLC protocol and frame structure. The transceiver is used to transmit and receive HDLC frames. At the receiver, a single bit error can be detected using Hamming code and corrected. Implementing the multi-channel HDLC transceiver in FPGA provides flexibility, upgradability and customization.
The document contains a chapter with multiple choice, true/false, completion, and short answer questions about TCP/IP concepts including protocols, layers, addressing, and network models. It tests knowledge of the TCP/IP protocol suite including protocols like TCP, UDP, IP, ICMP, ARP, and RARP. It also covers TCP/IP layers and models like the OSI reference model and Cisco three-layer hierarchical model.
Forward Bit Error Correction - Wireless Communications Surya Chandra
As a part of my final project for wireless communications, I created a Matlab simulation for correcting the corrupted data bits (due to noise) at the receiver end using coding techniques, such as, redundancy check, hamming (7,4) code, interleaving and convolutional coding.
Simulation of Turbo Convolutional Codes for Deep Space MissionIJERA Editor
In satellite communication deep space mission are the most challenging mission, where system has to work at very low Eb/No. Concatenated codes are the ideal choice for such deep space mission. The paper describes simulation of Turbo codes in SIMULINK . The performance of Turbo code is depend upon various factor. In this paper ,we have consider impact of interleaver design in the performance of Turbo code. A details simulation is presented and compare the performance with different interleaver design .
1) The document discusses a Max Log MAP algorithm for decoding turbo codes used in LTE systems. Max Log MAP reduces complexity compared to the Log MAP algorithm while maintaining similar decoding performance.
2) Turbo codes achieve error correction close to the Shannon limit through iterative decoding between two soft-input soft-output decoders using extrinsic information.
3) The Max Log MAP algorithm approximates the logarithm calculation in turbo decoding to reduce complexity, maintaining performance close to but slightly worse than the standard Log MAP algorithm.
The Performance Evaluation of IEEE 802.16 Physical Layer in the Basis of Bit ...IJCI JOURNAL
Fixed Broadband Wireless Access is a promising technology which can offer high speed data rate from transmitting end to customer end which can offer high speed text, voice, and video data. IEEE 802.16 WirelessMAN is a standard that specifies medium access control layer and a set of PHY layer to fixed and mobile BWA in broad range of frequencies and it supports equipment manufacturers due to its robust performance in multipath environment. Consequently WiMAX forum has adopted this version to develop the network world wide. In this paper the performance of IEEE 802.16 OFDM PHY Layer has been investigated by using the simulation model in Matlab. The Stanford University Interim (SUI) channel models are selected for the performance evaluation of this standard. The Ideal Channel estimation is considered in this work and the performance evaluation is observed in the basis of BER.
This document contains a summary of topics covered by instructor Ho Vu Anh Tuan including OSI, network basics, IP addressing, routing, and wireless networking. Key points from each section are discussed such as the layers of the OSI model, components of a basic network including hubs, switches and routers, private and public IP addressing, routing protocols like RIP v1 and v2, wireless standards 802.11a/b/g, and wireless security methods like WEP, WPA, and WPA2. Practice questions are also provided to test knowledge of each subject area.
Complexity Analysis in Scalable Video CodingWaqas Tariq
The scalable video coding is the extension of H.264/AVC. The features in scalable video coding, are the standard features in H.264/AVC and some features which is supporting the scalability of the encoder. Those features bring some more complexity in SVC encoder. In this paper, the evaluation of scalability and encoding time (complexity) has been performed. The encoder shows the scalability of the system and quality of the optimized scheme is acceptable.
The document contains 25 questions about networking and telecommunications topics. The questions cover layers of the OSI model, encoding methods, technologies like ATM and SONET, protocols like IP and TCP, and other concepts such as VPNs and IPv6 addressing. The questions are multiple choice with one correct answer per question.
International Journal of Engineering Research and Development is an international premier peer reviewed open access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
An efficient transcoding algorithm for G.723.1 and G.729A ...Videoguy
This document proposes an efficient transcoding algorithm to translate between the G.723.1 and G.729A speech coding standards. The algorithm has four main steps: 1) converting line spectral pair parameters between the two codecs, 2) converting pitch intervals using a smoothing technique, 3) performing a fast adaptive codebook search, and 4) performing a fast fixed codebook search. Objective and subjective tests show the algorithm achieves comparable speech quality to tandem encoding/decoding with 26-38% lower complexity and shorter delay.
Jayachandra Vudumula is seeking a position that allows growth and mutual benefits. He has 3 years of experience as an RTL design engineer developing Ethernet MAC, L2 protocols, SRAM, SPI, UART and interfacing. He is proficient in VHDL, Verilog, C and tools like Quartus. Some of the IP blocks he developed include GFP, Ethernet MAC with DMA, E1 framer, and HDLC. He has expertise in FPGA design flow, protocols, testing tools and Linux. He holds an MSc in Electronics from Sri Krishna Devaraya University with 79%.
This document analyzes the RC4 encryption algorithm and examines how its performance is affected by changing parameters like encryption key length and file size. Experimental tests were conducted to measure encryption time for different key lengths and file types. The results show encryption time increases with longer keys and larger files, and are modeled mathematically. The document also provides background on encryption methods, how RC4 works, and compares stream and block ciphers.
This section provides background on VoIP applications and their components. It discusses how VoIP works by transmitting voice packets over IP networks, unlike traditional PSTN which uses circuit switching. The two main VoIP protocols are H.323 and SIP, which provide call control and setup. UDP is typically used instead of TCP for transmitting VoIP due to its lower overhead and lack of error checking, which is tolerable for voice transmission. IPv4 and IPv6 can both be used to transmit VoIP, with IPv6 having a larger header size.
The impact of jitter on the HEVC video streaming with Multiple CodingHakimSahour
This document discusses the impact of jitter on video quality when streaming HEVC encoded video over wireless networks. It presents a study evaluating the effects of quantization parameter (QP) values, video content, and jitter on quality of experience (QoE). The study finds that using higher QP values, which lowers bitrate and increases compression, degrades video quality as measured by PSNR. It also finds that different video content results in varying PSNR values for the same encoding settings. Additionally, the results show that adjusting the QP value can help recover from the negative effects of jitter on received video quality. The document proposes using multiple description coding (MDC) to further improve transmission over error-prone wireless channels.
The security of multimedia transmission is very important in today’s life. It is a challenge to transfer the
huge multimedia data mostly because of big file sizes and limited bit rates, which are still at a premium.
Hence, the data to be sent has to be compressed. H.264/AVC is the best and practical solution available
today to achieve this. If the data is made secure by good encryption after compression without changing
the size of the compressed file, it is even better. This paper makes an attempt to achieve exactly this. If the
data is encrypted after the compression, the software used for compression need not be changed. In this
technique, use of selective encryption is applied without altering the compression software. Here, the data
is first identified for I-frame which is selected for encryption. This selected data is then sliced and each
slice data without slice header is encrypted using AES algorithm. After decoding the file, the frame can be
seen to be distorted compared to the original one. When the video is decrypted with proper key, the video
obtained which is the same as decoded without encryption.
Motion Vector Recovery for Real-time H.264 Video StreamsIDES Editor
Among the various network protocols that can be
used to stream the video data, RTP over UDP is the best to do
with real time streaming in H.264 based video streams. Videos
transmitted over a communication channel are highly prone
to errors; it can become critical when UDP is used. In such
cases real time error concealment becomes an important
aspect. A subclass of the error concealment is the motion
vector recovery which is used to conceal errors at the decoder
side. Lagrange Interpolation is the fastest and a popular
technique for the motion vector recovery. This paper proposes
a new system architecture which enables the RTP-UDP based
real time video streaming as well as the Lagrange
interpolation based real time motion vector recovery in H.264
coded video streams. A completely open source H.264 video
codec called FFmpeg is chosen to implement the proposed
system. Proposed implementation was tested against the
different standard benchmark video sequences and the
quality of the recovered videos was measured at the decoder
side using various quality measurement metrics.
Experimental results show that the real time motion vector
recovery does not introduce any noticeable difference or
latency during display of the recovered video.
ERROR RESILIENT FOR MULTIVIEW VIDEO TRANSMISSIONS WITH GOP ANALYSIS ijma
The work in this paper examines the effects of group of pictures on H.264 multiview video coding bitstream
over an erroneous network with different error rates. The study considers analyzing the bitrate
performance for different GOP and error rates to see the effects on the quality of the reconstructed
multiview video. However, by analyzing the multiview video content it is possible to identify an optimum
GOP size depending on the type of application used. In a comparison test, the H.264 data partitioning and
the multi-layer data partitioning technique with different error rates and GOP are evaluated in terms of
quality perception. The results of the simulation confirm that Multi-layer data partitioning technique shows
a better performance at higher error rates with different GOP. Further experiments in this work have
shown the effects of GOP in terms of visual quality and bitrate for different multiview video sequences
Error resilient for multiview video transmissions with gop analysisijma
The work in this paper examines the effects of group of pictures on H.264 multiview video coding bitstream
over an erroneous network with different error rates. The study considers analyzing the bitrate
performance for different GOP and error rates to see the effects on the quality of the reconstructed
multiview video. However, by analyzing the multiview video content it is possible to identify an optimum
GOP size depending on the type of application used. In a comparison test, the H.264 data partitioning and
the multi-layer data partitioning technique with different error rates and GOP are evaluated in terms of
quality perception. The results of the simulation confirm that Multi-layer data partitioning technique shows
a better performance at higher error rates with different GOP. Further experiments in this work have
shown the effects of GOP in terms of visual quality and bitrate for different multiview video sequences.
Spatial Scalable Video Compression Using H.264IOSR Journals
H.264 is a video compression standard that provides improved compression performance over prior standards like H.261 and H.263. It achieves spatial scalability by encoding video in a spatial manner that reduces the number of frames and file size. The paper simulates H.264 encoding and decoding of a QCIF video using JM software. It compares parameters like PSNR, CSNR, and MSE between the encoded and decoded video. H.264 provides 31-35% greater efficiency and lower bit rates compared to prior standards.
This document summarizes spatial scalable video compression using H.264. It discusses previous video compression standards like H.261 and H.263. It then describes the key components of the H.264 encoder and decoder, including prediction models, spatial models and entropy encoding. Simulation results comparing parameters like PSNR, CSNR and MSE between encoded and decoded video using H.264 are presented. The paper concludes that H.264 provides 31-35% improved efficiency and bit rate reduction over previous standards.
This document provides an overview and comparison of the H.265/HEVC and H.264/AVC video coding standards. It summarizes the key features and techniques of each, such as HEVC achieving around 40% higher data compression compared to H.264/AVC through improvements to prediction, transform coding, and entropy encoding. Experimental results testing various video sequences show HEVC provides significantly better compression efficiency. The document also reviews the technical details and implementations of both standards.
A REAL-TIME H.264/AVC ENCODER&DECODER WITH VERTICAL MODE FOR INTRA FRAME AND ...csandit
The video coding standards are being developed to satisfy the requirements of applications for
various purposes, better picture quality, higher coding efficiency, and more error robustness.
The new international video coding standard H.264 /AVC aims at having significant
improvements in coding efficiency, and error robustness in comparison with the previous
standards such as MPEG-2, H261, H263,and H264. Video stream needs to be processed from
several steps in order to encode and decode the video such that it is compressed efficiently with
available limited resources of hardware and software. All advantages and disadvantages of
available algorithms should be known to implement a codec to accomplish final requirement.
The purpose of this project is to implement all basic building blocks of H.264 video encoder and
decoder. The significance of the project is the inclusion of all components required to encode
and decode a video in MatLab .
This document discusses video quality analysis for H.264 based on the human visual system. It proposes an improved video quality assessment method that adds color comparison to structural similarity measurement. The method separates similarity measurement into four comparisons: luminance, contrast, structure, and color. Experimental results on video sets with two distortion types show the proposed method's quality scores are more consistent with visual quality than classical methods. It also discusses the H.264 video coding standard and provides examples of encoding and decoding experimental results.
This document discusses the interaction between application layer multicast (ALM) trees and MPEG-4 video streaming. It examines how different coding parameters and tree structures affect end-user video quality as measured by PSNR. The paper presents a simulation system to test various combinations of NICE ALM trees and MPEG-4 parameters. Results show that coding choices and tree organization depend on network characteristics like packet loss and bandwidth distribution. Large GOP sizes and many B-frames optimize quality with rare losses, while small GOPs and fewer B-frames work best with frequent losses. Uniform bandwidth favors small clusters and long paths, while varied bandwidth prefers larger clusters and shorter paths.
Analyzing Video Streaming Quality by Using Various Error Correction Methods o...IJERA Editor
Transmission video over ad hoc networks has become one of the most important and interesting subjects of study for researchers and programmers because of the strong relationship between video applications and frequent users of various mobile devices, such as laptops, PDAs, and mobile phones in all aspects of life. However, many challenges, such as packet loss, congestion (i.e., impairments at the network layer), multipath fading (i.e., impairments at the physical layer) [1], and link failure, exist in transferring video over ad hoc networks; these challenges negatively affect the quality of the perceived video [2].This study has investigated video transfer over ad hoc networks. The main challenges of transferring video over ad hoc networks as well as types of errors that may occur during video transmission, various types of video mechanisms, error correction methods, and different Quality of Service (QoS) parameters that affect the quality of the received video are also investigated.
The document summarizes the key features and tools of the H.264/AVC video coding standard. It describes how H.264/AVC achieves significant gains in compression efficiency of up to 50% compared to previous standards through the use of new tools like multiple reference frames, fractional pixel motion estimation, an adaptive deblocking filter, and an integer transform. It also notes that while the decoder complexity of H.264/AVC is higher than previous standards, the standard aims to provide efficient video compression for both interactive and non-interactive applications across different networks and storage media.
H2B2VS (HEVC hybrid broadcast broadband video services) – Building innovative...Raoul Monnier
Broadcast and broadband networks continue to be separate worlds in the video consumption business. Some initiatives such as HbbTV have built a bridge between both worlds, but its application is almost limited to providing links over the broadcast channel to content providers’ applications such as Catch-up TV services. When it comes to reality, the user is using either one network or the other.
H2B2VS is a Celtic-Plus project aiming at exploiting the potential of real hybrid networks by implementing efficient synchronization mechanisms and using new video coding standard such as High Efficiency Video Coding (HEVC). The goal is to develop successful hybrid network solutions that enable value added services with an optimum bandwidth usage in each network and with clear commercial applications. An example of the potential of this approach is the transmission of Ultra-HD TV by sending the main content over the broadcast channel and the required complementary information over the broadband network. This technology can also be used to improve the life of handicapped persons: Deaf people receive through the broadband network a sign language translation of a programme sent over the broadcast channel; the TV set then displays this translation in an inset window.
One of the most important contributions of the project is developing and testing synchronization methods between two different networks that offer unequal qualities of service with significant differences in delay and jitter.
In this paper, the main technological project contributions are described, including SHVC, the scalable extension of HEVC and a special focus on the synchronization solution adopted by MPEG and DVB. The paper also presents some of the implemented practical use cases, such as the sign language translation described above, and their performance results so as to evaluate the commercial application of this type of solution.
This document proposes a combined ciphering and coding scheme for image transmission over noisy wireless channels. The ciphering scheme is based on a modified chaotic Logistic map to provide security. Low Density Parity Check (LDPC) coding is used as the error correction technique to improve reliability over noisy channels. Simulation results show that LDPC coding outperforms turbo coding for this application in terms of bit error rate. The combined ciphering and coding scheme is analyzed and shown to enhance performance metrics like peak signal-to-noise ratio and bit error rate, achieving both security and reliability for image transmission over wireless channels.
QoS Constrained H.264/SVC video streaming over Multicast Ad Hoc NetworksIJERA Editor
Support for QoS enabled multimedia transmission over multicast ad hoc network is necessary these days.
Researchers have developed various encoding/decoding schemes which can efficiently deliver the multimedia
contents over wireless networks. In case of ad hoc networks, performance of routing protocol depends upon
different factors i.e. traffic type being used for wireless transmission, dynamic network behavior, bandwidth and
computational power of nodes etc. It is essential to investigate the performance of multicast routing protocol
using various data types because they may consume huge network resources thus results in degradation of
transmission quality. In case of multicast group communication, Audio/Video data stream can cause extra
overhead on network performance and it is quite difficult to maintain Quality of Services for such type of data.
H.264 offers a rich codec library for Scalable Video Coding, to transfer SVC video traffic efficiently over
wireless networks. In this paper, we will analyze the performance of MAODV and PUMA routing protocols
using H.264/SVC video streaming traffic under the various QoS constraints such as Throughput, PDR, Delay,
Routing Load and Jitter etc.
Performance Evaluation of H.264 AVC Using CABAC Entropy Coding For Compound I...DR.P.S.JAGADEESH KUMAR
The document evaluates the performance of H.264/AVC entropy coding (CABAC) for compressing different types of images. It is observed that CABAC is highly efficient at compressing compound images, achieving higher compression ratios and PSNR values compared to other image types at high bitrates. The proposed system compresses grayscale compound images using CABAC after applying Daubechies wavelet transform. Performance is measured using compression ratio and PSNR metrics at varying bits per pixel.
Quality of Service (QoS) and Quality of Experience (QoE) are considered key performance indicators of video and audio
conference performance of video call streaming service. Since Skype operates and evolves around the scope of achieving best possible
quality service across all available operative environments, it is important to investigate and examine how Skype has evolved around
this cross platform scope, what kind of encoding improvements have been implemented through years and how efficient are all these changes, objectively and subjectively.
The new 3GPP codec for Enhanced Voice Services (EVS) offers important new features and improvements for low-delay real-time communication systems. Based on a novel, switched low-delay speech/audio codec, the EVS codec contains various advancements for better compression efficiency and higher quality for clean/noisy speech, mixed content and music, including support for wideband, super-wideband and full-band content. The EVS codec operates in a broad range of bitrates, is highly robust against packet loss and provides an AMR-WB compatible mode for compatibility with existing systems.
PERFORMANCE EVALUATION OF H.265/MPEG-HEVC, VP9 AND H.264/MPEGAVC VIDEO CODINGijma
This study evaluates the performance of the three latest video codecs H.265/MPEG-HEVC, H.264/MPEGAVC
and VP9. The evaluation is based on both subjective and objective quality metrics. The assessment
metric Double Stimulus Impairment Scale (DSIS) is used to evaluate the subjective quality of the
compressed video sequences. The Peak Signal-to-Noise Ratio (PSNR) metricis used for the objective
evaluation. Moreover, this work studies the effect of frame rate and resolution on the encoders’
performance. The extensive number of experiments are conducted with similar encoding configurations for
the three studied encoders. The evaluation results show that H.265/MPEG-HEVC provides superior bitrate
saving capabilities compared to H.264 and VP9. However, VP9 shows lower encoding time than
H.265/MPEG-HEVC but higher encoding time compared to H.264.
Similar to 22 29 may17 27feb 14758 deevya indoonundon(edit) (20)
This document describes an electronic doorbell system that uses a keypad and GSM for home security. The system consists of a doorbell connected to a microcontroller that triggers a GSM module to send an SMS to the homeowner when the doorbell is pressed. The homeowner can then respond via button press to open the door, or a message will be displayed if they do not respond. An authorized person can enter a password on the keypad, and if multiple wrong passwords are entered, a message will be sent to the homeowner about a potential burglary attempt. The system aims to provide notification to homeowners and prevent unauthorized access through use of passwords and messaging capabilities.
Augmented reality, the new age technology, has widespread applications in every field imaginable. This technology has proven to be an inflection point in numerous verticals, improving lives and improving performance. In this paper, we explore the various possible applications of Augmented Reality (AR) in the field of Medicine. The objective of using AR in medicine or generally in any field is the fact that, AR helps in motivating the user, making sessions interactive and assist in faster learning. In this paper, we discuss about the applicability of AR in the field of medical diagnosis. Augmented reality technology reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. Additionally, we believe that a much more pronounced effect can be achieved by bringing together the cutting edge technology of AR and the lifesaving field of Medical sciences. AR is a mechanism that could be applied in the learning process too. Similarly, virtual reality could be used in the field where more of practical experience is needed such as driving, sports, neonatal care training.
Image fusion is a sub field of image processing in which more than one images are fused to create an image where all the objects are in focus. The process of image fusion is performed for multi-sensor and multi-focus images of the same scene. Multi-sensor images of the same scene are captured by different sensors whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects in the scene which are closer to the camera are in focus and the farther objects get blurred. Contrary to it, when the farther objects are focused then closer objects get blurred in the image. To achieve an image where all the objects are in focus, the process of images fusion is performed either in spatial domain or in transformed domain. In recent times, the applications of image processing have grown immensely. Usually due to limited depth of field of optical lenses especially with greater focal length, it becomes impossible to obtain an image where all the objects are in focus. Thus, it plays an important role to perform other tasks of image processing such as image segmentation, edge detection, stereo matching and image enhancement. Hence, a novel feature-level multi-focus image fusion technique has been proposed which fuses multi-focus images. Thus, the results of extensive experimentation performed to highlight the efficiency and utility of the proposed technique is presented. The proposed work further explores comparison between fuzzy based image fusion and neuro fuzzy fusion technique along with quality evaluation indices.
Graphs have become the dominant life-form of many tasks as they advance a
structure to represent many tasks and the corresponding relations. A powerful
role of networks/graphs is to bridge local feats that exist in vertices as they
blossom into patterns that help explain how nodal relations and their edges
impacts a complex effect that ripple via a graph. User cluster are formed as a
result of interactions between entities. Many users can hardly categorize their
contact into groups today such as “family”, “friends”, “colleagues” etc. Thus,
the need to analyze such user social graph via implicit clusters, enables the
dynamism in contact management. Study seeks to implement this dynamism
via a comparative study of deep neural network and friend suggest algorithm.
We analyze a user’s implicit social graph and seek to automatically create
custom contact groups using metrics that classify such contacts based on a
user’s affinity to contacts. Experimental results demonstrate the importance
of both the implicit group relationships and the interaction-based affinity in
suggesting friends.
This paper projects Gryllidae Optimization Algorithm (GOA) has been applied to solve optimal reactive power problem. Proposed GOA approach is based on the chirping characteristics of Gryllidae. In common, male Gryllidae chirp, on the other hand some female Gryllidae also do as well. Male Gryllidae draw the females by this sound which they produce. Moreover, they caution the other Gryllidae against dangers with this sound. The hearing organs of the Gryllidae are housed in an expansion of their forelegs. Through this, they bias to the produced fluttering sounds. Proposed Gryllidae Optimization Algorithm (GOA) has been tested in standard IEEE 14, 30 bus test systems and simulation results show that the projected algorithms reduced the real power loss considerably.
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused several damages in our homes, Laboratories among others. installation of a gas leakage detection device was globally inspired to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects the leakage of the LPG (Liquefied Petroleum Gas) using a gas sensor, then triggred the control system response which employs ventilator system, Mobile phone alert and alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) were tested for a guided decision. Also, when the temperature of the environment poses a danger, LED (indicator), buzzer and LCD (16x2) display was used to indicate temperature and gas leakage status in degree Celsius and PPM respectively. Attension was given to the response time of the control system, which was ascertained that this system significantly increases the chances and efficiency of eliminating gas leakage related accident.
Feature selection problem is one of the main important problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five of the feature selection methods were selected: ICHI square, CHI square, Information Gain, Mutual Information and Wrapper. It was tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree and Artificial Neural Networks. In addition, Data Collection was used in Arabic consisting of 9055 documents, which were compared by four criteria: Precision, Recall, F-measure and Time to build model. The results showed that the improved ICHI feature selection got almost all the best results in comparison with other methods.
The document proposes the Gentoo Penguin Algorithm (GPA) to solve the optimal reactive power problem. The goal is to minimize real power loss. GPA is inspired by the natural behaviors of Gentoo penguins. In GPA, penguin positions represent potential solutions. Penguins move toward other penguins with lower "cost" or higher heat concentration, representing better solutions. Cost is defined by heat concentration and distance between penguins. Heat radiation decreases with distance. The algorithm is tested on the IEEE 57 bus system and reduces real power loss effectively compared to other methods.
08 20272 academic insight on applicationIAESIJEECS
This research has thrown up many questions in need of further investigation.There was an expressive quantitative-qualitative research, which a common investigation form was used in.The dialogue item was also applied to discover if the contributors asserted the media-based attitude supplements their learning of academic English writing classes or not.Data recounted academic” insights toward using Skype as a sustaining implement for lessons releasing based on chosen variables: their occupation, year of education, and knowledge with Skype discovered that there were no important statistical differences in the use of Skype units because of medical academics major knowledge. There are statistically important differences in using Skype units. The findings also, disclosed that there are statistically significant differences in using Skype units due to the practice with Skype variable, in favors of academics with no Skype use practice. Skype instrument as an instructive media is a positive medium to be employed to supply academic medical writing data and assist education. Academics who do not have enough time to contribute in classes believe comfortable using the Skype-based attitude in scientific writing. They who took part in the course claimed that their approval of this media is due to learning academic innovative medical writing.
Cloud computing has sweeping impact on the human productivity. Today it’s used for Computing, Storage, Predictions and Intelligent Decision Making, among others. Intelligent Decision-Making using Machine Learning has pushed for the Cloud Services to be even more fast, robust and accurate. Security remains one of the major concerns which affect the cloud computing growth however there exist various research challenges in cloud computing adoption such as lack of well managed service level agreement (SLA), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. Tremendous amount of work still needs to be done to explore the security challenges arising due to widespread usage of cloud deployment using Containers. We also discuss Impact of Cloud Computing and Cloud Standards. Hence in this research paper, a detailed survey of cloud computing, concepts, architectural principles, key services, and implementation, design and deployment challenges of cloud computing are discussed in detail and important future research directions in the era of Machine Learning and Data Science have been identified.
Notary is an official authorized to make an authentic deed regarding all deeds, agreements and stipulations required by a general rule. Activities carried out at the notary office such as recording client data and file data still use traditional systems that tend to be manual. The problem that occurs is the inefficiency in data processing and providing information to clients. Clients have difficulty getting information related to the progress of documents that are being taken care of at the notary's office. The client must take the time to arrive to the notary's office repeatedly to check the progress of the work of the document file. The purpose of this study is to facilitate clients in obtaining information about the progress of the work in progress, and make it easier for employees to process incoming documents by implementing an administrative system. This system was developed with the waterfall system development method and uses the Multi-Channel Access Technology integrated in the website to simplify the process of delivering information and requesting information from clients and to clients with Telegram and SMS Gateway. Clients will come to the office only when there is a notification from the system via Telegram or SMS notifying that the client must come directly to the notary's office, thus leading to an efficient time and avoiding excessive transportation costs. The overall functional system can function properly based on the results of alpha testing. The results of beta testing conducted by distributing the system feasibility test questionnaire to end users, get a percentage of 96% of users agree the system is feasible to be implemented.
In this work Tundra wolf algorithm (TWA) is proposed to solve the optimal reactive power problem. In the projected Tundra wolf algorithm (TWA) in order to avoid the searching agents from trapping into the local optimal the converging towards global optimal is divided based on two different conditions. In the proposed Tundra wolf algorithm (TWA) omega tundra wolf has been taken as searching agent as an alternative of indebted to pursue the first three most excellent candidates. Escalating the searching agents’ numbers will perk up the exploration capability of the Tundra wolf wolves in an extensive range. Proposed Tundra wolf algorithm (TWA) has been tested in standard IEEE 14, 30 bus test systems and simulation results show the proposed algorithm reduced the real power loss effectively.
In this work Predestination of Particles Wavering Search (PPS) algorithm has been applied to solve optimal reactive power problem. PPS algorithm has been modeled based on the motion of the particles in the exploration space. Normally the movement of the particle is based on gradient and swarming motion. Particles are permitted to progress in steady velocity in gradient-based progress, but when the outcome is poor when compared to previous upshot, immediately particle rapidity will be upturned with semi of the magnitude and it will help to reach local optimal solution and it is expressed as wavering movement. In standard IEEE 14, 30, 57,118,300 bus systems Proposed Predestination of Particles Wavering Search (PPS) algorithm is evaluated and simulation results show the PPS reduced the power loss efficiently.
In this paper, Mine Blast Algorithm (MBA) has been intermingled with Harmony Search (HS) algorithm for solving optimal reactive power dispatch problem. MBA is based on explosion of landmines and HS is based on Creativeness progression of musicians-both are hybridized to solve the problem. In MBA Initial distance of shrapnel pieces are reduced gradually to allow the mine bombs search the probable global minimum location in order to amplify the global explore capability. Harmony search (HS) imitates the music creativity process where the musicians supervise their instruments’ pitch by searching for a best state of harmony. Hybridization of Mine Blast Algorithm with Harmony Search algorithm (MH) improves the search effectively in the solution space. Mine blast algorithm improves the exploration and harmony search algorithm augments the exploitation. At first the proposed algorithm starts with exploration & gradually it moves to the phase of exploitation. Proposed Hybridized Mine Blast Algorithm with Harmony Search algorithm (MH) has been tested on standard IEEE 14, 300 bus test systems. Real power loss has been reduced considerably by the proposed algorithm. Then Hybridized Mine Blast Algorithm with Harmony Search algorithm (MH) tested in IEEE 30, bus system (with considering voltage stability index)- real power loss minimization, voltage deviation minimization, and voltage stability index enhancement has been attained.
Artificial Neural Networks have proved their efficiency in a large number of research domains. In this paper, we have applied Artificial Neural Networks on Arabic text to prove correct language modeling, text generation, and missing text prediction. In one hand, we have adapted Recurrent Neural Networks architectures to model Arabic language in order to generate correct Arabic sequences. In the other hand, Convolutional Neural Networks have been parameterized, basing on some specific features of Arabic, to predict missing text in Arabic documents. We have demonstrated the power of our adapted models in generating and predicting correct Arabic text comparing to the standard model. The model had been trained and tested on known free Arabic datasets. Results have been promising with sufficient accuracy.
In the present-day communications speech signals get contaminated due to
various sorts of noises that degrade the speech quality and adversely impacts
speech recognition performance. To overcome these issues, a novel approach
for speech enhancement using Modified Wiener filtering is developed and
power spectrum computation is applied for degraded signal to obtain the
noise characteristics from a noisy spectrum. In next phase, MMSE technique
is applied where Gaussian distribution of each signal i.e. original and noisy
signal is analyzed. The Gaussian distribution provides spectrum estimation
and spectral coefficient parameters which can be used for probabilistic model
formulation. Moreover, a-priori-SNR computation is also incorporated for
coefficient updation and noise presence estimation which operates similar to
the conventional VAD. However, conventional VAD scheme is based on the
hard threshold which is not capable to derive satisfactory performance and a
soft-decision based threshold is developed for improving the performance of
speech enhancement. An extensive simulation study is carried out using
MATLAB simulation tool on NOIZEUS speech database and a comparative
study is presented where proposed approach is proved better in comparison
with existing technique.
Previous research work has highlighted that neuro-signals of Alzheimer’s disease patients are least complex and have low synchronization as compared to that of healthy and normal subjects. The changes in EEG signals of Alzheimer’s subjects start at early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features have been computed and studied on experimental database. After computing these synchrony measures and wavelet features, it is observed that Phase Synchrony and Coherence based features are able to distinguish between Alzheimer’s disease patients and healthy subjects. Support Vector Machine classifier is used for classification giving 94% accuracy on experimental database used. Combining, these synchrony features and other such relevant features can yield a reliable system for diagnosing the Alzheimer’s disease.
Attenuation correction designed for PET/MR hybrid imaging frameworks along with portion making arrangements used for MR-based radiation treatment remain testing because of lacking high-energy photon weakening data. We present a new method so as to uses the learned nonlinear neighborhood descriptors also highlight coordinating toward foresee pseudo-CT pictures starting T1w along with T2w MRI information. The nonlinear neighborhood descriptors are acquired through anticipating the direct descriptors interested in the nonlinear high-dimensional space utilizing an unequivocal constituent guide also low-position guess through regulated complex regularization. The nearby neighbors of every near descriptor inside the data MR pictures are looked during an obliged spatial extent of the MR pictures among the training dataset. By that point, the pseudo-CT patches are evaluated through k-closest neighbor relapse. The planned procedure designed for pseudo-CT forecast is quantitatively broke downward on top of a dataset comprising of coordinated mind MRI along with CT pictures on or after 13 subjects.
The cognitive radio prototype performance is to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation techniques. Secondary users (SU’s) actively obtain the spectrum access opportunity by supporting primary users (PU’s) in cognitive radio networks (CRNs). In present generation, spectrum access is endowed through cooperative communication-based link-level frame-based cooperative (LLC) principle. In this SUs independently act as conveyors for PUs to achieve spectrum access opportunities. Unfortunately, this LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs and fails to motivate PUs to join the spectrum sharing processes. Therefore, to overcome this con, network level cooperative (NLC) principle was used, where SUs are integrated mutually to collaborate with PUs session by session, instead of frame based cooperation for spectrum access opportunities. NLC approach has justified the challenges facing in LLC approach. In this paper we make a survey of some models that have been proposed to tackle the problem of LLC. We show the relevant aspects of each model, in order to characterize the parameters that we should take in account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature review and previous studies of online educations gains are explored and summarized in this research. Emerging trends in online education are discussed in detail, and strategies to implement these trends are explained. The author provides several tools and strategies that enable universities to ensure the quality of online education. At the end of this research paper, the researcher provides examples from Arab universities who have successfully implemented online education and expanded their impact on the society. This research provides a strategy and a model that can be used by universities in the Middle East as a roadmap to implement online education in their regions.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Determination of Equivalent Circuit parameters and performance characteristic...pvpriya2
Includes the testing of induction motor to draw the circle diagram of induction motor with step wise procedure and calculation for the same. Also explains the working and application of Induction generator
Impartiality as per ISO /IEC 17025:2017 StandardMuhammadJazib15
This document provides basic guidelines for imparitallity requirement of ISO 17025. It defines in detial how it is met and wiudhwdih jdhsjdhwudjwkdbjwkdddddddddddkkkkkkkkkkkkkkkkkkkkkkkwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwioiiiiiiiiiiiii uwwwwwwwwwwwwwwwwhe wiqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq gbbbbbbbbbbbbb owdjjjjjjjjjjjjjjjjjjjj widhi owqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq uwdhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhwqiiiiiiiiiiiiiiiiiiiiiiiiiiiiw0pooooojjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj whhhhhhhhhhh wheeeeeeee wihieiiiiii wihe
e qqqqqqqqqqeuwiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiqw dddddddddd cccccccccccccccv s w c r
cdf cb bicbsad ishd d qwkbdwiur e wetwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww w
dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffw
uuuuhhhhhhhhhhhhhhhhhhhhhhhhe qiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii iqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc ccccccccccccccccccccccccccccccccccc bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbu uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuum
m
m mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm m i
g i dijsd sjdnsjd ndjajsdnnsa adjdnawddddddddddddd uw
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
2. ISSN: 2502-4752
IJEECS Vol. 6, No. 3, June 2017 : 671 – 681
672
produced higher Quality of Experience. In addition in [11], the authors proposed a novel point-
to-point H.264 Video Streaming over IEEE 802.15.4 together with RS Error Correction. The
results have demonstrated that real-time video streaming is achieved with more reliability [11].
The application of UEP with error correcting codes has proved to be a very powerful
method of combatting the problem of packet losses over erroneous communication channels.
For example in [12], a new trapezoidal UEP scheme was proposed which has shown to achieve
higher robustness while reducing the coding redundancy as compared to a traditional UEP
scheme. Moreover, the authors in [13], used the H.264 error resilient tool of data partitioning to
apply UEP to different data partitions. Thesimulation results have shown that the proposed
scheme in [13] demonstrated efficient adaptation to the network conditions and graceful
reconstruction quality degradation against a large range of packet loss rate and available
bandwidth. Along the same line, in [14], Expanding Window Random Linear Codes which are
an UEP fountain coding scheme was proposed for H.264 slice-partitioned data and results have
shown the effectiveness of using the proposed scheme for multimedia broadcast applications.
In this paper a novel concealment aware UEP scheme for H.264 using RS Codes is
proposed. The proposed UEP technique assigns a code rate to each Macroblock (MB) based
on two factors. The first factoris the type of concealment, spatial or temporal, which will be used
in case of loss during transmission. The second factor is the Concealment Dependent Index
(CDI) which determines the effectiveness of the concealment algorithm based on the location of
the MB in a frame. In addition, two types of interleaving techniques referred as Frame Level
Interleaving (FLI) and Group Level Interleaving (GLI) are also used with UEP to further improve
the error correction capability of the RS decoder. Another concept referred as, prioritised
concealment [15] is also applied in cases where error correction is beyond the capability of the
RS decoder. Simulation results have demonstrated that the proposed framework provides an
average gain of 2.96 dB over EEP. Moreover, with the use of prioritised concealment an
additional gain of 0.96 dB is achieved. Results have also shown that FLI outperforms GLI by
4.17 dB at a packet loss rate of 0.2. However, at a packet loss rate of 0.4, GLI has shown to
outperform FLI by 5dB.
The remainder of this paper is organized as follows. Section 2 presents the research
method of the proposed framework. Section 3 demonstrates the experimental results and finally
Section 4 draws some conclusions and scope for future works.
2. Research Method
In Figure 1 the encoder of the proposed framework is shown. The input to the encoder
consists of the video sequence containing N frames. Each frame, which is made up of 99 MBs,
is processed by the H.264 encoder to create 99 arrays of bits each representing a MB. Each
sequence of bits of length, Ls is then converted into k symbols by the RS encoder using a bit
rate, Rb, such that:
(1)
Where:
Ls represents the length of a bitstream.krepresents the total number of symbols obtained.Rbis
thenumber of bits per symbol.
After the transformation from bits to symbols, each symbol array is converted into a
Galois field array before the code rate allocation is performed. In this work, a UEP scheme is
proposed with RS codes such that the most important MBs are given higher protection. The
importance of each MB is determined based on the type of concealment, spatial or temporal,
used during H.264 encoding and the CDI which checks the location of each MB in a frame.
3. IJEECS ISSN: 2502-4752
A Concealment Aware UEP Scheme for H.264 using RS Codes (D. Indoonundon)
673
H.264
Encoder
Input Video
N Frames 101000..
110111..
001001..
101011..
MB 1
MB 2
MB 3
MB 99
Convert
into
symbols
7 2 1 3..
9 1 4 8 ..
8 3 4 9..
4 7 3 1..
MB 1
MB 2
MB 3
MB 99
UEP EEP
S1
RS
Encode
S2
Frame Level
Interleaving
Group Level
Interleaving
packets
Network
1 2
1
2
Galois Field
Conversion
….
N…..2 1
Figure 1. Proposed Encoder
To select the UEP scheme switch S1 is placed in position 1. For the EEP scheme,
switch S1 is placed in position 2. The UEP scheme is further explained in section 2.1. During
RS encoding, each symbol array of length k is converted into an array of length n, consisting of,
n-k, parity symbols [7, 8]. In addition, two types of interleaving referred as Frame level
Interleaving (FLI) and Group Level Interleaving (GLI) are performed. When using FLI, switch S2
is placed into position 1. To select FLI, packets are formed such that each one of them contains
symbols from all the 99 MBs in a frame. With GLI, a frame is first divided into nine groups of 11
MBs and interleaving is performed across each group. For instance, a packet will contain
symbols from a group of 11 MBs. This whole process is repeated for each frame after which the
packets are transmitted over a communication channel. Further details on the interleaving
schemes are given in section 2.2. Figure 2 illustrates the decoder of the proposed system.
Frame Level
De-Interleaving
Group Level
De-
Interleaving
UEP
EEP
RS
Decode
H.264
Decoder
Packets
101000..
110111..
001001..
101011..
MB 1
MB 2
MB 3
MB 99
Convert
into bits
7 2 1 3..
9 1 4 8 ..
8 3 4 9..
4 7 3 1..
MB 1
MB 2
MB 3
MB 99
Prioritised
Concealment
Output Video
S3 S4
1
2
1
2
Figure 2. Proposed Decoder
4. ISSN: 2502-4752
IJEECS Vol. 6, No. 3, June 2017 : 671 – 681
674
De-Interleaving is first performed at the decoder so that the symbols are restored into
their original positions. Either FLI or GLI de-interleaving is performed depending on the type of
interleaving used at the encoder. If FLI de-interleaving is used switch S3 is placed at position 1.
Otherwise, if GLI de-interleaving is used S3 is placed at position 2. RS decoding is then
performed using either UEP or EEP. If the UEP scheme is used, S4 is placed in position 1 and
for the EEP scheme, S4 is placed in position 2. RS decoding is then performed on each packet
where error correction is performed. The RS decoder outputs arrays of symbols which are then
converted into bits before sending them to the H.264 decoder. The H.264 decoder converts the
bits into frames. Finally, prioritised concealment as proposed in [16] is performed.Concealment
of MBs is required because at high packet loss rates, error correction might be beyond the
capability of the RS decoder.
2.1. Unequal Error Protection (UEP)
In this work, UEP has been used with RS codes to offer higher protection to the most
important MBs from packet loss. The importance of a MB is determined by two factors which is
the type of concealment, either spatial or temporal, and the CDI value. In video compression, an
I-frame represents a frame which is intra coded and is least compressed as compared to a P
frame which is inter coded and more compressed. Therefore, an I frame needs to be given more
protection as compared to a P frame which, in fact, uses the I frame as a reference frame for
decoding. The CDI metric calculates the total number of MBs available to allow concealment of
a lost MB. In this work, FSE [16] which is spatial concealment method is used for I frames and
LI [17] which is a temporal concealment technique is used for P-Frames. For spatial
concealment, a total of 8 neighbouring MBs are required for an effective concealment of a lost
MB. On the other hand, for temporal concealment, 4 MBs located on the top, bottom, left and
right of the lost MB is needed for concealment.
Figure 3 illustrates four MBs: MB
I
M, MB
I
T, MB
P
M and MB
P
T. Figure 3(a) shows MB
I
M
which is located in the middle of an I frame. Since MB
I
M is surrounded by 8 MBs, MB1, MB2,
MB3, MB4, MB5, MB6, MB7 and MB8, in case MB
I
M is lost, all the surrounding MBs can be
used for spatial concealment. Therefore CDI is 8 in this case. Figure 3(b) shows MB
I
T which is
located in the topmost border of an I frame. MB
I
T is surrounded by only 3 MBs, MB1, MB2 and
MB3. In case MB
I
T is lost only 3 MBs will be available to be used for concealment, therefore CDI
is 3 in this case. Figure 3(c) shows MB
P
M which is located in the middle of a P frame. In case
MB
P
M is lost during transmission, it requires four MBs located in the top, bottom, right and left for
temporal concealment. Since MB
P
M islocated in the center, the neighbouring MBs: MB1, MB2,
MB3 and MB4 can be used for concealment, therefore CDI is 4 in this case. Figure 3(d) shows
MB
P
T which is located at the topmost border of a P frame. In case MB
P
M is lost during
transmission, only two MBs, MB1 and MB2 are available to be used for temporal concealment.
Therefore CDI is 2 in this case.
MBLMBI
M
MBL
MBI
T
MB1 MB2 MB3
MB4
MB5MB6MB7
MB8
MB1
MB2MB3
MBL
MBT MB1
(a) CDI = 8 (b) CDI = 3
(c) CDI = 4 (d) CDI = 2
P Frame
Temporal Concealment
I Frame
Spatial Concealment
MBP
T
MB2
MBT MB1MBP
M
MB2
MB3
MB4
Figure 3. CDI for spatial and temporal concealment
5. IJEECS ISSN: 2502-4752
A Concealment Aware UEP Scheme for H.264 using RS Codes (D. Indoonundon)
675
Hence, to ensure an effective spatial concealment, the CDI of a lost MB must be 8.
Furthermore, to achieve an efficient temporal concealment, the CDI of a lost MB must be 4.
Hence, higher protection needs to be given to MBs with lower CDI to ensure that they are
correctly received and can further be used for concealment of other MBs.
The rate assigned to each MB is given by Equation 2 [7, 8]:
(2)
Where:
k represents the length of the symbol array of a MB.n represents the length of the
codeword after RS encoding.
Therefore, n consists of k data symbols and n-k parity symbols. The error correcting
capability, t is given by Equation (3) [7, 8]:
(3)
From equation 2 and 3 it can be concluded that a high number of parity symbols leads
to a greater error correcting capability of the RS decoder. Consequently, the lower the rate, the
higher the number of parity symbols which can be used for error correction which therefore
results in greater protection.
Figure 4 illustrates a flowchart which describes the UEP rate allocation process.
First, the algorithm checks whether the MB belongs to an I frame or a P frame. If it
belongs to an I frame, higher protection is given to the MB. I-frame also implies that spatial
concealment must be used. Next, the algorithm checks whether, the CDI of the MB is less than
eight. If it is less than eight, this implies that in case of loss, the concealment of the MB will not
be effective. Higher protection is therefore given to a MB with CDI less than 8. A lower rate is
assigned to a P –frame as compared to an I frame. For a P frame if the CDI is less than 4,
higher protection is given to the MB.
Rate 1 is assigned to the MB belonging to an I frame and having a CDI less than eight.
Rate 2 is assigned to the MB belonging to an I frame and having a CDI of eight. Rate 3 is
assigned to the MB belonging to a P frame and having a CDI less than four. Rate 4 is assigned
to the MB belonging to a P frame and having a CDI of four.
In general, Rate 1 < Rate 2 < Rate 3 < Rate 4.
Does MB belong to
an I Frame?
Is CDI < 8? Is CDI < 4?
YES
NO
YES NO YES NO
Rate 4Rate 3Rate 2Rate 1
P-Frame
I-Frame
Higher Rate
Lower Rate
Lower Rate Lower RateHigher Rate Higher Rate
Figure 4. Flowchart Illustrating the Rate Allocation process
6. ISSN: 2502-4752
IJEECS Vol. 6, No. 3, June 2017 : 671 – 681
676
2.2. Interleaving
Consider for example a frame with 99 MBs as shown in Figure 5. In the interleaving
block, packets are formed in such a way that each of them consists of symbols from either a
group of 11 MBs referred as GLI or a whole frame consisting of 99 MBs namely FLI interleaving.
This allows information pertaining to a specific MB to be propagated across several packets. In
case of packet loss, the MB can be recovered with the information contained in the remaining
correctly received packets. Therefore, with the use of the parity symbols, the RS decoder can
perform error correction. A description of each interleaving scheme is given next.
2.2.1. FLI
Figure 5 illustrates the FLI interleaving process. Figure 5(a) shows a typical Foreman
video frame of size 144x176 which is made up of 99 MBs each of size 16x16. In addition, each
MB after RS encoding is represented by an array of length, n, which comprises of both data and
parity symbols.
The idea behind FLI interleaving is to form packets made up of symbols from each of
the 99 MBs. Ns which represents the number of symbols required from each MB to creates
packet is obtained using equation (4) below:
(4)
Where,
nrepresents the length of a symbol array representing a MB.
For example, in Figure 5, packet 1 is created by taking the first symbols from each of
the 99 MBs. This process continues until the last packet (packet 99) is created using the last
symbols of length, from each MB. After this process, all the symbols have been re-arranged
to form 99 packets each of length n.
4 7 8 8 3 7 3 3 ….. 8 5 5 7 2 5 6 8 1 3 …. 8 9 8 1 4 5 6 9 3 6 ... 2 6
MB 1 MB 2 MB 99
4 7 5 7 …………..….. 8 1
Ns symbols
from MB1 Ns symbols
from MB2
Ns symbols
from MB 99
Packet 1
Packet 2
8 5 8 9 ………………….2 6
Packet 99
1 432 5 876 9 1110
12 151413 16 191817 20 2221
23 262524 27 302928 31 3332
34 373635 38 414039 42 4443
45 484746 49 525150 53 5554
56 595857 60 636261 64 6665
67 706968 71 747372 75 7776
78 818079 82 858483 86 8887
89 929190 93 969594 97 9998
Frame Macroblock Allocation
(a) A typical frame from
Foreman video
(b) A sample Frame consisting
of 99 Macroblocks
Interleaver
H.264
Encoder
RS
Encoder
Figure 5. Illustration of Frame Level Interleaving process
7. IJEECS ISSN: 2502-4752
A Concealment Aware UEP Scheme for H.264 using RS Codes (D. Indoonundon)
677
2.2.2. GLI
Figure 6 illustrates the GLI process where a typical Foreman video frame of size
144x176 is made up of 9 rows (slices) each containing 11 MBs. With GLI, the MBs are first
grouped to form 9 slices following which interleaving is performed across each slice. Therefore,
in this case each packet is made up of Nssymbols belonging to 11 MBs. With GLI, Ns, is
calculated as follows:
(5)
For example in Figure 6, it can be observed that packet 1 is formed from the first symbols of
length obtained from each of the 11 MBs belonging to slice 1. Packet 11 is created by using
the last symbols of length, , from each of the 11 MBs. This process is repeated until all the 9
slices are interleaved to form 99 packets.
1 432 5 876 9 1110
12 151413 16 191817 20 2221
23 262524 27 302928 31 3332
34 373635 38 414039 42 4443
45 484746 49 525150 53 5554
56 595857 60 636261 64 6665
67 706968 71 747372 75 7776
78 818079 82 858483 86 8887
89 929190 93 969594 97 9998
1 432 5 876 9 1110
12 151413 16 191817 20 2221
23 262524 27 302928 31 3332
34 373635 38 414039 42 4443
45 484746 49 525150 53 5554
56 595857 60 636261 64 6665
67 706968 71 747372 75 7776
78 818079 82 858483 86 8887
89 929190 93 969594 97 9998
Group 1
Group 2
Group 3
Group 4
Group 5
Group 6
Group 7
Group 8
Group 9
Frame Macroblock Allocation
(a) A typical frame from
Foreman video
(b) A sample Frame consisting
of 99 Macroblocks (c) Slices from a Frame
4 7 8 8 3 7 3…..6 3 8 5 5 7 2 5 6 8 1 …5 3 8 9 8 1 4 5 6 9 3 6..1 6 2 6
MB 1 MB 2 MB 11
4 7 8 8 5 7 2 5….8 1 4 5
Ns symbols
from MB1
Ns symbols
from MB2
Ns symbols from
MB 11
Packet 1
6 3 8 5 5 3 8 9..……1 6 2 6
Packet 11
Each Slice is processed by the
Interleaver
H.264
Encoder
RS
Encoder
Figure 6. Illustration of Group Level Interleaving process
3. Results and Analysis
In line with the goals of this work, four different schemes have been compared. The list
of parameters used for the simulations is given below:
Video Sequences : Foreman, News
Framerate : 25 frames per second
Number of Frames : 300
Frame size : 144x176
Channel Model : Gilbert Elliot Channel.
The schemes have been simulated using Matlab. The UEP and EEP rate allocation
schemes for each video sequence are given in Table 1.
8. ISSN: 2502-4752
IJEECS Vol. 6, No. 3, June 2017 : 671 – 681
678
Table 1. Rate Allocation for UEP and EEP schemes
Foreman Sequence News Sequence
UEP
Rate Allocation
EEP
Rate Allocation
UEP
Rate Allocation
EEP
Rate Allocation
I Frame
CDI < 8 1/4 7/20 1/3 1/2
CDI = 8 1/3 7/20 2/5 1/2
P Frame
CDI < 4 4/11 7/20 5/8 1/2
CDI = 4 1/2 7/20 2/3 1/2
A brief description of each scheme is given in Table 2 where scheme 1 represents the
proposed framework which uses UEP and prioritisation based on autocorrelation for
concealment [15]. Scheme 2 uses UEP but does not employ prioritised concealment. Scheme 3
uses EEP and prioritised concealment whereas Scheme 4 utilises EEP but does not make use
of prioritised concealment. In all the simulations, FSE [16] is used as spatial concealment and LI
[17] is used as temporal concealment. The simulation is carried out with both FLI and GLI and
the results are analysed.
Table 2. Schemes tested
Schemes Rate Allocation Prioritised Concealment
UEP EEP
1 Yes No Yes
2 Yes No No
3 No Yes Yes
4 No Yes No
3.1. Results using FLI using Foreman and News Sequences
Figure 7 shows a graph of Y-PSNR against Packet Loss Rate for the Foreman
sequence with GOP length = 3 using FLI. It is observed that scheme 1 which uses UEP and
prioritised concealment provides an average gain of 2.98 dB over scheme 3 which uses EEP
and prioritised concealment. This is because with UEP, higher protection is given to the most
important MBs, hence the MBs are recovered correctly. In addition, scheme 1 provides an
average gain of 0.96 dB in the range of 0.1 ≤ Packet Loss Rate ≤ 0.4 over scheme 2 which does
not use prioritised concealment.
Figure 8 shows a Graph of Y-PSNR against Packet Loss Rate for the News sequence
with GOP length = 3 using FLI. It is observed that scheme 1 which uses UEP and prioritised
concealment provides an average gain of 1.07 dB over scheme 3 which uses EEP and
prioritised concealment. In addition, scheme 1 provides an average gain of 0.214 dB in the
range of 0.1 ≤ Packet Loss Rate ≤ 0.4 over scheme 2 which does not use prioritised
concealment.
Figure 7. Graph of Y-PSNR(dB) against
Packet loss Rate for the Foreman sequence
using FLI
Figure 8. Graph of Y-PSNR(dB) against
packet loss rate for the News sequence using
FLI
9. IJEECS ISSN: 2502-4752
A Concealment Aware UEP Scheme for H.264 using RS Codes (D. Indoonundon)
679
3.2. Results with GLI using Foreman and News Sequences
Figure 9 shows a Graph of Y-PSNR against Packet Loss Rate for the Foreman
sequence with GOP length = 3 using GLI. It is observed that scheme 1 which uses UEP and
prioritised concealment provides an average gain of 1.26 dB over scheme 3 which uses EEP
and prioritised concealment. In addition, scheme 1 provides an average gain of 1.06 dB in the
range of 0.1 ≤ Packet Loss Rate ≤ 0.4 over scheme 2 which does not use prioritised
concealment.
Figure 10 shows a Graph of Y-PSNR against Packet Loss Rate for the News sequence
with GOP length = 3 using GLI. It is observed that scheme 1 which uses UEP and prioritised
concealment provides an average gain of 1.31 dB over scheme 3 which uses EEP and
prioritised concealment. In addition, scheme 1 provides an average gain of 0.215 dB in the
range of 0.1 ≤ Packet Loss Rate ≤ 0.4 over scheme 2 which does not use prioritised
concealment.
Figure 9. Graph of Y-PSNR(dB) against Packet
loss Rate for the Foreman sequence using GLI
Figure 10. Graph of Y-PSNR(dB) against
Packet loss Rate for the News sequence
using GLI
3.3. Comparison between FLI and GLI
Figure 11 represents the results obtained by comparing the two types of interleaving,
FLI and GLI. It can be observed at low Packet Loss Rates (PLR <0.24), FLI is seen to
outperform GLI whereas at high Packet Loss Rates (PLR > 0.24) GLI surpasses FLI. For
example, at a Packet Loss Rate of 0.2 with the News sequence, FLI achieves a gain of 4.17 dB
over GLI. When using FLI, the burst errors are propagated across 99 packets as compared to
GLI where the errors are propagated across 11 packets of one group. Hence, the number of
errors contained in a packet created using FLI is less than that which uses GLI which allows the
RS decoder to correct all the errors when using FLI. While, when using GLI, the errors
contained in one packet are beyond the error correcting capability of the RS decoder.
Figure 11. Graph of Y-PSNR (dB) against Packet loss Rate comparing the performance of FLI
and GLI for both the Forman and news sequence
10. ISSN: 2502-4752
IJEECS Vol. 6, No. 3, June 2017 : 671 – 681
680
On the other hand at a Packet Loss Rate of 0.4 with the News sequence, GLI
outperforms FLI by 5 dB.At high Packet Loss Rate, the number of errors is beyond the error
correcting capability of the RS decoder when using both FLI and GLI. With the use FLI at high
Packet Loss Rates, a large amount of errors are distributed across 99 packets which causes
severe degradation of the whole frame. However, when using GLI at high Packet Loss Rates
the burst errors may affect only part of the 9 groups causing some of the groups to be correctly
decoded. This allows better performance of GLI as compared to FLI at high Packet Loss Rates.
4. Conclusion
The aim of this paper is to enhance the performance of H.264 video transmission with
the use of UEP with RS codes and two interleaving methods. The objective of using UEP with
RS codes is to protect the most important MBs from packet loss. The importance of a MB is
determined by two factors which are the type of concealment, spatial and temporal, and the CDI
value. Since an I-frame contains more significant information as compared to a P frame, higher
protection is given to an I frame. The CDI measures the effectiveness of the concealment
algorithm in case of packet loss. A low CDI will lead to an ineffective concealment. Therefore,
higher protection is given to MBs having lower CDI to ensure that they are corrected by the RS
decoder in case they have been subject to packet loss without the need for concealment. In
addition, two types of interleaving referred as FLI and GLI are proposed. The FLI technique
creates packets which consist of symbols from all the 99 MBs in a frame. With GLI, the MBs are
grouped into slices and interleaving is performed within each group. FLI has shown to perform
better than GLI at low packet loss rates whereas GLI proved to be more efficient at high packet
loss rates. Simulation results have demonstrated that the proposed framework outperformed an
EEP scheme by an average gain of2.96 dB when using the Foreman sequence with FLI.
Moreover, the use of prioritised concealment has shown to further improve the results by an
average gain of 0.96 dB.Numerous future works can be considered from this proposed
technique.An interesting future work would be to explore the way in which an existing FMO and
MDC scheme can be integrated with the proposed scheme. Finally the proposed scheme can
be analysed with H.265 and the results compared with H.264.
Acknowledgements
The authors are thankful to the University of Mauritius and the Tertiary Education
Commission for providing the necessary facilities and financial support for conducting this
research.
References
[1] Wiegand T, Draft ITU-T Recommendation and Final Draft International Standard of Joint Video
Specification (ITU-T Rec. H.264 |ISO/IEC 14496-10AVC, ITU-T Rec.H.264 | ISO/IEC 14496-10AVC.
2003.
[2] Bross B, Han WJ, Ohm JR, Sullivan GJ, Wiegand T. High Efficiency Video Coding (HEVC) Text
Specification Draft 9, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and
ISO/IEC JTC 1/SC 29/WG 11, document JCTVC-K1003, Shanghai, China. 2012.
[3] Richardson I. The h.264 advanced video compression standard. 1st ed. Hoboken, N.J.: Wiley; 2013.
[4] Di Laura C, Pajuelo D, Kemper G. A Novel Steganography Technique for SDTV-H.264/AVC Encoded
Video. International Journal of Digital Multimedia Broadcasting. 2016; 2016: 1-9.
[5] Hamdy S, Ibrahim M, Osman M. Performance evaluation of a new efficient H.264 intraprediction
scheme. Turkish Journal of Electrical Engineering & Computer Sciences. 2016; 24: 1967-1983.
[6] Bernatin T, Sundari G, Gayethri B, Ramesh S. Performance Analysis of Rate Control Scheme in
H.264 Video Coding. Indian Journal of Science and Technology. 2016; 9(30).
[7] Reed I, Solomon G. Polynomial Codes Over Certain Finite Fields. Journal of the Society for Industrial
and Applied Mathematics. 1960; 8(2): 300-304.
[8] Clarke CKP, R&D White Paper. Reed-Solomon error correction. WHP 031. 2002.
[9] Talari A, Kumar S, Rahnavard N, Paluri S, Matyjas J. Optimized cross-layer forward error correction
coding for H.264 AVC video transmission over wireless channels. EURASIP Journal on Wireless
Communications and Networking. 2013; 2013(1): 206.
11. IJEECS ISSN: 2502-4752
A Concealment Aware UEP Scheme for H.264 using RS Codes (D. Indoonundon)
681
[10] Jayasooriya J, Midipolawatta Y, Dissanayake M. Error correction technique based on forward error
correction for H.264 codec. 2012 IEEE 7th International Conference on Industrial and Information
Systems (ICIIS). 2012.
[11] Pham CK, Tung WC, Huu TH, Dao DT. Point-to-point H.264 Video Streaming over IEEE 802.15.4
with Reed-Solomon Error Correction. International Conference on Green and Human Information
Technology (ICGHIT). 2014: 89-93.
[12] Sun Y, Zhang X, Tang F, Fowler S, Cui H, Dong X. Layer-Aware Unequal Error Protection for
Scalable H.264 Video Robust Transmission over Packet Lossy Networks. 2011 14th International
Conference on Network-Based Information Systems. 2011.
[13] Zhao, Zenghua, Long S. RD-Based Adaptive UEP for H.264 Video Transmission in Wireless
Networks. 2010 International Conference on Multimedia Information Networking and Security. 2010.
[14] Nazir S, Stanković V, Andonović I, Vukobratović D. Application Layer Systematic Network Coding for
Sliced H.264/AVC Video Streaming. Advances in Multimedia. 2012; 2012: 1-9.
[15] Fowdur TP, D Indoonundon and Soyjaudah K M S. A Novel Prioritised Concealment and Flexible
Macroblock Ordering Scheme for Video Transmission. International Journal of Computer
Applications. 2016; 150(6): 35-42.
[16] 5. Kaup A, Meisinger K, Aach T. Frequency selective signal extrapolation with applications to error
concealment in image communication. AEU - International Journal of Electronics and
Communications. 2005; 59(3): 147-156.
[17] Zheng J, Chau LP. A temporal error concealment algorithm for (H.264) using Lagrange interpolation.
Proceedings of the 2004 International Symposium on Circuits and Systems, ISCAS 04. 2004: 133–
136.