File size and picture quality are key factors when streaming, storing, and transmitting video over networks. This work compares the Cinepak, Intel, Microsoft Video, and Indeo codecs for video compression. The peak signal-to-noise ratio is used to compare the quality of video compressed with these AVI codecs; Peak Signal-to-Noise Ratio (PSNR) is the most widely used objective measurement among developers of video processing systems. PSNR is measured on a logarithmic scale and depends on the mean squared error (MSE) between an original and an impaired image or video, relative to (2^n − 1)^2, where n is the number of bits per sample.
Previous research on video quality assessment has relied mainly on subjective methods, and there is still no standard method for objective assessment. Although it has been suggested that compression may become less significant as storage and transmission capacities improve, at low bandwidths compression is what makes communication possible.
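As a concrete illustration of the metric described above, a minimal PSNR computation might look like the following (a sketch assuming 8-bit samples by default; the function and variable names are illustrative):

```python
import numpy as np

def psnr(original, impaired, bit_depth=8):
    """PSNR in dB: 10*log10 of the squared peak value (2^n - 1)^2
    divided by the MSE between the original and impaired frames."""
    original = np.asarray(original, dtype=np.float64)
    impaired = np.asarray(impaired, dtype=np.float64)
    mse = np.mean((original - impaired) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    peak_sq = (2 ** bit_depth - 1) ** 2
    return 10 * np.log10(peak_sq / mse)
```

With this definition, higher PSNR means the impaired frame is closer to the original; identical frames give infinite PSNR.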
Subjective Quality Evaluation of H.264 and H.265 Encoded Video Sequences Stre...
Streaming of high-quality video content as part of multimedia communication has become essential nowadays. Video delivered over a network suffers from different kinds of impairments that degrade its quality, and the effect of these impairments differs among the codecs used. This paper presents the effects of network degradation factors such as packet loss and jitter on H.264- and H.265-encoded video sequences. In addition to the codec used, the study also considers different levels of temporal and spatial activity within the videos. Among the basic test methods, the double stimulus impairment scale was used as the subjective assessment metric from the user's perspective. The results illustrate that differently encoded video sequences react differently to network impairments and are very sensitive to transmission errors; they also show that the user's experience is affected by the motion level of the video.
An Overview on Multimedia Transcoding Techniques on Streaming Digital Contents
Today's IT infrastructure and many commercial applications are built directly on multimedia systems, e.g. education, marketing, risk management, tele-medicine, and the military. One challenge in such applications is delivering an uninterrupted video stream to multiple kinds of terminals, e.g. smartphones, PDAs, laptops, and IPTV. The research shows a clear need for novel bit-rate adjustment mechanisms and format conversion policies so that a source stream plays well on diverse end devices with different processor, memory, and decoding configurations. This paper discusses key points from the literature that clarify direct digital-to-digital conversion from one encoding to another, termed transcoding. Although multimedia transcoding has been an active research area for more than a decade, there remains a large trade-off between application, service, resource constraints, and hardware design that gives rise to QoS issues.
High Definition (HD) devices require HD video to be used effectively. However, HD video brings issues such as high storage requirements, the limited battery power of HD devices, long encoding times, and high computational complexity in transmission, broadcasting, and internet traffic. Many existing techniques suffer from these issues, so there is a need for an efficient technique that reduces unnecessary storage, provides a high compression rate, and requires low bandwidth. This paper therefore introduces an efficient video compression technique, a modified HEVC coding based on saliency features, to counter these drawbacks. Features are first extracted from the raw data, which is then heavily compressed. This makes the model powerful and effective in terms of compression. The experimental results show better efficiency in terms of average PSNR, MSE, and bitrate, and outperform existing techniques in saliency map detection as measured by AUC, NSS, KLD, and JSD. The average AUC, NSS, and KLD values of the proposed method are 0.846, 1.702, and 0.532 respectively, which is very high compared to other existing techniques.
The document discusses the CARE software documentation and evaluation tool called Source2VALUE. It provides an overview of the Source2VALUE market and solution. The market section outlines business IT issues addressed like consolidation, modernization, compliance, and rapid development. It also discusses software life cycle costs. The solution section describes the supported programming languages, functionality including quality metrics and design reproduction, the CARE approach and benefits like improved software quality and reduced maintenance costs. Finally, a demo of the Source2VALUE tool is presented.
Siva Srinivas is a Senior Technical Specialist at TPVision in Bangalore, India. He has over 9 years of experience developing Android and embedded software solutions. His skills include Java, C/C++, Android, REST APIs, DVB standards, and Agile methodologies. He has experience leading projects for digital TV applications and set-top boxes.
Keynote - Introducing the Digital Home Working Group - G Stone
The Digital Home Working Group (DHWG) was formed in 2003 by 17 industry leaders to establish a platform for interoperability in the digital home based on open standards. The DHWG aims to enable easy sharing of digital content like music, pictures and video between devices in compelling new ways. Current challenges include complex and proprietary home networks that limit interoperability. The DHWG seeks to address this through guidelines for open formats, transports and protocols to allow a transparent home network. Membership has grown significantly and the DHWG is working on initial guidelines for digital home device interoperability.
The H.264/AVC Advanced Video Coding Standard: Overview and ...
This document provides an overview of the H.264/AVC video coding standard and its Fidelity Range Extensions (FRExt). It discusses how H.264/AVC was developed jointly by ISO/IEC MPEG and ITU-T VCEG to improve coding efficiency over prior standards. The FRExt amendment adds support for higher chroma sampling, bit depths, and other capabilities for demanding professional applications. Initial industry feedback indicates rapid adoption of the High Profile added in FRExt.
Real-time, non-Intrusive Evaluation of VoIP
This document describes research into developing non-intrusive models for estimating voice over IP (VoIP) call quality using genetic programming. The researchers created a VoIP simulation environment and used it to generate training data representing different network conditions. Genetic programming experiments were performed to evolve models relating network parameters like packet loss to MOS scores. The best models achieved good accuracy compared to intrusive quality estimation methods, using only 1-3 variables. The models allow real-time VoIP quality monitoring without a reference signal.
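For context, a standard parametric (non-intrusive) quality mapping, distinct from the evolved GP models described above, is the ITU-T G.107 E-model conversion from the transmission rating factor R to an estimated MOS:

```python
def r_to_mos(r):
    """ITU-T G.107 E-model: map the transmission rating factor R
    (0-100) to an estimated listening-quality MOS (1.0-4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
```

Network parameters such as packet loss and delay lower R, which this curve then translates onto the MOS scale used as the modelling target in such experiments.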
High Definition The Way Video Communications Was Meant to Be
This document discusses the importance of high definition video communications and identifies critical success factors for technology decision makers to consider. It states that new technology promises 10 times the resolution of traditional videoconferencing systems and will dramatically advance video communications from a technical perspective. It identifies key criteria for assessing video communications technology, including visual realism, acoustic realism, and usage simplicity. Visual realism depends on video resolution, bandwidth standards, and camera technology while acoustic realism relies on input quality from microphone architecture and frequency response as well as output quality from spatial audio and minimal distortion.
This document summarizes the development of real-time video processing IP cores in FPGA by NeST including a video scaler, sharpness enhancer, gamma correction, and picture quality enhancer modules. It describes the specifications, algorithms, and architectures of each module developed as reusable IP cores. The video scaler uses bilinear interpolation for scaling up and nearest neighbor for scaling down. The sharpness enhancer uses a Laplacian filter. Gamma correction uses programmable lookup tables. The picture quality enhancer contains brightness, contrast, and color adjustment modules. Together these cores form a video processing suite for applications like surveillance and medical imaging.
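The bilinear upscaling used by the video scaler can be sketched as follows (a simplified single-channel software illustration, not NeST's FPGA implementation; the function name and structure are illustrative):

```python
import numpy as np

def bilinear_upscale(img, out_h, out_w):
    """Enlarge a single-channel image by bilinear interpolation:
    each output pixel blends its four nearest input neighbours."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)   # fractional source rows
    xs = np.linspace(0, in_w - 1, out_w)   # fractional source cols
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                # vertical blend weights
    wx = (xs - x0)[None, :]                # horizontal blend weights
    img = img.astype(np.float64)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A hardware scaler computes the same blend per pixel but typically with fixed-point weights rather than floating point.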
Sanjay Kumar provides his contact information and summarizes his experience as an intuitive and proactive embedded systems engineer with over 12 years of experience in system development, C/C++ programming, Linux device drivers, kernel modules, ARM architecture, testing, automation, and tools like Python and Bash. He has expertise in debugging real-time multi-threaded and multi-process embedded applications.
Video communications is growing in use. Its adoption by ever larger audiences will require IT managers and service providers to validate their networks for video communication. This presentation explains the various problems affecting video quality and the ways in which video quality analysis can help flush these problems out of networks.
AN OPTIMIZED H.264-BASED VIDEO CONFERENCING SOFTWARE FOR ...
This document describes an optimized H.264 video conferencing software for mobile devices. It discusses:
1) The implementation of a highly optimized H.264 codec called DAVC that achieves better compression efficiency and encoding speed than other H.264 codecs.
2) How the DAVC codec was adapted for mobile devices to encode and decode video in real-time at lower resolutions and frame rates within the performance constraints of mobile processors.
3) A peer-to-peer video conferencing application that uses the DAVC codec and integrates mobile and stationary users into distributed group video calls by leveraging SIP for signaling and efficient peer-to-peer media distribution.
International Journal of Engineering Research and Development (IJERD)
The document summarizes two video watermarking algorithms that use Singular Value Decomposition (SVD). The first algorithm embeds watermark bits diagonally in the SVD-transformed U, S, or V matrices of video frames. The second algorithm embeds bits in blocks of the U or V matrices. Both algorithms were evaluated based on imperceptibility, robustness, and data payload. The diagonal embedding achieved better robustness while the block-wise embedding had a higher data payload rate. SVD transforms video frames, distributing the watermark across spatial and frequency domains for improved imperceptibility and robustness against attacks.
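The diagonal-embedding idea can be sketched as follows (an illustrative simplification with an assumed additive rule and a made-up `strength` parameter; the paper's actual embedding and extraction rules are not reproduced here):

```python
import numpy as np

def embed_diagonal(frame, bits, strength=0.01):
    """Embed watermark bits along the diagonal of the U matrix of a
    frame's SVD, then rebuild the frame from the modified factors."""
    U, S, Vt = np.linalg.svd(frame.astype(np.float64), full_matrices=False)
    for i, b in enumerate(bits):
        # Nudge the i-th diagonal entry of U up for a 1 bit, down for a 0.
        U[i, i] += strength if b else -strength
    return U @ np.diag(S) @ Vt
```

Because the perturbation is spread back through the reconstruction, the mark is distributed across the frame rather than localized, which is the imperceptibility argument summarized above.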
Real-time Video Quality Assessment for Analog Television Based on Adaptive Fu...
Real-time VQA (Video Quality Assessment) is an important part of the effort to build a tracking antenna system, especially for analog TV. In this case, VQA must work in real time to assess the video clarity level, and its assessment results are valuable input to the decision-making process: the antenna can then rotate automatically in search of the ideal direction without user control, and the video clarity on the TV screen can reach the optimum the user wants. The biggest challenge is that VQA must assess the video clarity level in accordance with the user's visual perception. Therefore, in this study, MOS-VQS (Mean Opinion Score-Video Quality Subjective) was used as a visual perception approach, and an adaptive FIS (Fuzzy Inference System) with membership function tuning was implemented for decision making, as an effort to build a reliable real-time VQA. The test results show that the real-time VQA performs well: the lowest average accuracy reached 77.2% and the highest reached 88.2%.
This document discusses a proposed low bit rate audio codec algorithm using discrete wavelet transform. The key aspects of the algorithm are:
1. Choosing an optimal wavelet basis for audio signals and determining the optimal decomposition level in the discrete wavelet transform.
2. Applying thresholding to wavelet coefficients to truncate insignificant coefficients, allowing data compression while maintaining suitable peak signal to noise ratio.
3. Comparing performance of the audio codec using discrete wavelet transform to one using discrete wavelet packet transform.
4. Applying a postfiltering technique to improve the quality of the reconstructed audio signal by estimating and subtracting the error in the coded signal.
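Point 2's coefficient truncation can be illustrated with a single-level Haar transform (a toy sketch; the actual algorithm chooses an optimal wavelet basis and decomposition level, which this example does not):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: scaled sum/difference of sample pairs."""
    s = np.asarray(signal, dtype=np.float64).reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one level of the Haar DWT."""
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def hard_threshold(coeffs, t):
    """Zero out insignificant coefficients; the zeros then code cheaply."""
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)
```

Reconstructing from thresholded detail coefficients trades a small reconstruction error (and hence PSNR) for a more compressible coefficient stream.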
A new design reuse approach for voip implementation into fpsocs and asics
The aim of this paper is to present a new design reuse approach for the automatic generation of Voice over Internet Protocol (VOIP) hardware descriptions and their implementation in FPSOCs and ASICs. Our motivation for this work rests on the following arguments. First, VOIP-based System on Chip (SOC) implementation is an emerging research and development area in which innovative applications can be built. Second, these systems are very complex, and due to time-to-market pressure there is a need to build platforms that help the designer explore different architectural possibilities and choose the circuit that best corresponds to the specifications. Third, we aim to bring to hardware the design methods and tools that are used in software, such as MATLAB, for VOIP implementation. To achieve this goal, the proposed design approach is based on a modular design of the VOIP architecture. The originality of our approach is the application of the design for reuse (DFR) and design with reuse (DWR) concepts. To validate the approach, a case study of a SOC based on the OR1K processor is presented. We demonstrate that the proposed SoC architecture is reconfigurable and scalable, and that the final RTL code can be reused for any FPSOC or ASIC technology. As an example, performance measurements on the VIRTEX-5 FPGA device family and in 65 nm ASIC technology are reported in this paper.
Quality of Experience and Quality of Service
This document discusses quality of experience (QoE) and quality of service (QoS) for IP video conferencing. It explains that QoE is ensured by both application-based QoS (AQoS) and network-based QoS (NQoS). AQoS includes functions within the video conferencing application to enhance performance, like signaling protocols and media handling techniques. NQoS includes functions provided by the network, like scheduling and queuing. Both AQoS and NQoS work together to optimize the end-user experience of real-time video and audio transmission over IP networks.
Requiring only half the bitrate of its predecessor, the new standard – HEVC or H.265 – will significantly reduce the need for bandwidth and expensive, limited spectrum. HEVC (H.265) will enable the launch of new video services and in particular ultra HD television (UHDTV).
State-of-the-art video compression techniques – HEVC/H.265 – can reduce the size of raw video by a factor of about 100 without any noticeable reduction in visual quality. With estimates indicating that compressed real-time video accounts for more than 50 percent of current network traffic, and this figure is set to rise to 90 percent within a few years, HEVC/H.265 will be a welcome relief for network operators.
New services, devices and changing viewing patterns are among the factors contributing to the growth in video traffic as people watch more and more traditional TV and video-streaming services on their mobile devices.
Ericsson has been heavily involved in the standardization of HEVC since it began in 2010, and this Ericsson Review article highlights some of the contributions that have led to the compression efficiency offered by HEVC.
This document provides a summary of Zain Isma Bin Mohamed Lazim's contact information, education, skills, and work experience. Some key points:
- Zain recently completed online courses in Javascript and Android development and holds a Master's degree in ICT focusing on telecommunications.
- He has over 10 years experience in product design and engineering, including at Sony and MIMOS where he improved sensors and developed wireless systems.
- Achievements include establishing an engineering group, managing over 100 PCB projects, and implementing cost-saving and environmentally friendly practices.
This document discusses EJB technology and provides summaries of key concepts:
1. It defines the EJB container model and describes features like security, distributed access, and lifecycle management.
2. It compares the lifecycles of stateless session beans, stateful session beans, entity beans, and message-driven beans.
3. It contrasts stateful and stateless session beans and discusses differences in client state, pooling, lifecycles, and more. It also compares session beans and entity beans in terms of representing processes versus data.
Power efficient and high throughput of fir filter using block least mean squa...
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This document discusses post-processing and rate distortion algorithms for the VP8 video codec. It first provides background on the need for post-processing algorithms to reduce blocking artifacts in compressed video, and for rate control algorithms to regulate bitrates and achieve high video quality within bandwidth constraints. It then summarizes existing in-loop deblocking filters and post-processing algorithms. A novel optimal post-processing/in-loop filtering algorithm is described that can achieve better performance than H.264/AVC or VP8 by computing optimal filter coefficients. Finally, a proposed rate distortion optimization algorithm for VP8 is discussed to improve its rate control and coding efficiency.
IBM VideoCharger and Digital Library MediaBase.doc
This document provides an overview of video streaming over the internet. It discusses video compression standards like H.261, H.263, MJPEG, MPEG1, MPEG2 and MPEG4. It also covers internet transport protocols like TCP and UDP, and challenges like firewall penetration. Both commercial streaming products and research projects aiming to improve streaming are reviewed, with limitations of current approaches outlined. The SuperNOVA research project is evaluated against other work seeking to make high quality video streaming over the internet practical.
The document provides information about MPEG compression standards. It discusses the history of MPEG and how it was established in 1988 as a joint effort between ISO and IEC to set standards for audio and video compression. It describes several MPEG standards including MPEG-1, MPEG-2, MPEG-4, MPEG-7, and MPEG-21. MPEG-4 is discussed in more detail, explaining that it offers greater efficiency than MPEG-2, allows encoding of mixed data types, and enables interaction of audio-visual scenes at the receiver end. The document contains diagrams and tables to illustrate key points about the different MPEG standards and compression techniques.
The impact of jitter on the HEVC video streaming with Multiple Coding
This document discusses the impact of jitter on video quality when streaming HEVC encoded video over wireless networks. It presents a study evaluating the effects of quantization parameter (QP) values, video content, and jitter on quality of experience (QoE). The study finds that using higher QP values, which lowers bitrate and increases compression, degrades video quality as measured by PSNR. It also finds that different video content results in varying PSNR values for the same encoding settings. Additionally, the results show that adjusting the QP value can help recover from the negative effects of jitter on received video quality. The document proposes using multiple description coding (MDC) to further improve transmission over error-prone wireless channels.
Surveillance systems are expected to record video 24/7, which obviously requires huge storage space. Even though hard disks are cheaper today, the number of CCTV cameras is also rising sharply to boost security. Video compression is the only practical option for reducing the required storage space; however, existing video compression techniques are not adequate for modern digital surveillance monitoring, as they still require huge video streams. In this paper, a novel video compression technique is presented with a critical analysis of the experimental results.
PERFORMANCE EVALUATION OF H.265/MPEG-HEVC, VP9 AND H.264/MPEG-AVC VIDEO CODING
This study evaluates the performance of the three latest video codecs: H.265/MPEG-HEVC, H.264/MPEG-AVC, and VP9. The evaluation is based on both subjective and objective quality metrics. The Double Stimulus Impairment Scale (DSIS) assessment metric is used to evaluate the subjective quality of the compressed video sequences, and the Peak Signal-to-Noise Ratio (PSNR) metric is used for the objective evaluation. Moreover, this work studies the effect of frame rate and resolution on the encoders' performance. An extensive number of experiments were conducted with similar encoding configurations for the three studied encoders. The evaluation results show that H.265/MPEG-HEVC provides superior bitrate-saving capabilities compared to H.264 and VP9, while VP9 shows lower encoding time than H.265/MPEG-HEVC but higher encoding time than H.264.
A REAL-TIME H.264/AVC ENCODER&DECODER WITH VERTICAL MODE FOR INTRA FRAME AND ...
Video coding standards are developed to satisfy the requirements of applications with various purposes: better picture quality, higher coding efficiency, and more error robustness. The new international video coding standard H.264/AVC aims at significant improvements in coding efficiency and error robustness compared with previous standards such as MPEG-2, H.261, and H.263. A video stream needs to be processed through several steps to encode and decode it so that it is compressed efficiently within the limited hardware and software resources available. All the advantages and disadvantages of the available algorithms should be known in order to implement a codec that meets the final requirements. The purpose of this project is to implement all the basic building blocks of an H.264 video encoder and decoder; its significance lies in the inclusion of all components required to encode and decode a video in MATLAB.
High Definition The Way Video Communications Was Meant to BeVideoguy
This document discusses the importance of high definition video communications and identifies critical success factors for technology decision makers to consider. It states that new technology promises 10 times the resolution of traditional videoconferencing systems and will dramatically advance video communications from a technical perspective. It identifies key criteria for assessing video communications technology, including visual realism, acoustic realism, and usage simplicity. Visual realism depends on video resolution, bandwidth standards, and camera technology while acoustic realism relies on input quality from microphone architecture and frequency response as well as output quality from spatial audio and minimal distortion.
This document summarizes the development of real-time video processing IP cores in FPGA by NeST including a video scaler, sharpness enhancer, gamma correction, and picture quality enhancer modules. It describes the specifications, algorithms, and architectures of each module developed as reusable IP cores. The video scaler uses bilinear interpolation for scaling up and nearest neighbor for scaling down. The sharpness enhancer uses a Laplacian filter. Gamma correction uses programmable lookup tables. The picture quality enhancer contains brightness, contrast, and color adjustment modules. Together these cores form a video processing suite for applications like surveillance and medical imaging.
Sanjay Kumar provides his contact information and summarizes his experience as an intuitive and proactive embedded systems engineer with over 12 years of experience in system development, C/C++ programming, Linux device drivers, kernel modules, ARM architecture, testing, automation, and tools like Python and Bash. He has expertise in debugging real-time multi-threaded and multi-process embedded applications.
Video communications is growing in use. Its adoption by ever-larger audiences will require IT managers and service providers to validate their networks for video communication. This presentation explains the various problems affecting video quality and the ways in which video quality analysis can help identify these problems in networks.
AN OPTIMIZED H.264-BASED VIDEO CONFERENCING SOFTWARE FOR ... (Videoguy)
This document describes an optimized H.264 video conferencing software for mobile devices. It discusses:
1) The implementation of a highly optimized H.264 codec called DAVC that achieves better compression efficiency and encoding speed than other H.264 codecs.
2) How the DAVC codec was adapted for mobile devices to encode and decode video in real-time at lower resolutions and frame rates within the performance constraints of mobile processors.
3) A peer-to-peer video conferencing application that uses the DAVC codec and integrates mobile and stationary users into distributed group video calls by leveraging SIP for signaling and efficient peer-to-peer media distribution.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
The document summarizes two video watermarking algorithms that use Singular Value Decomposition (SVD). The first algorithm embeds watermark bits diagonally in the SVD-transformed U, S, or V matrices of video frames. The second algorithm embeds bits in blocks of the U or V matrices. Both algorithms were evaluated based on imperceptibility, robustness, and data payload. The diagonal embedding achieved better robustness while the block-wise embedding had a higher data payload rate. SVD transforms video frames, distributing the watermark across spatial and frequency domains for improved imperceptibility and robustness against attacks.
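To make the SVD-based embedding idea concrete, here is a deliberately simplified, non-blind sketch that additively embeds watermark bits into the singular values of a frame. This is a stand-in for illustration only: the paper's algorithms embed in the U, S, or V matrices diagonally or block-wise, and the function names and the strength `alpha` here are assumptions.

```python
import numpy as np

def svd_embed(frame, watermark, alpha=0.01):
    """Additively embed a watermark vector into the singular values of a frame."""
    U, S, Vt = np.linalg.svd(frame, full_matrices=False)
    S_marked = S + alpha * watermark          # small perturbation of each singular value
    return U @ np.diag(S_marked) @ Vt

def svd_extract(marked, original, alpha=0.01):
    """Non-blind extraction: compare singular values against the original frame's."""
    S_marked = np.linalg.svd(marked, compute_uv=False)
    S_orig = np.linalg.svd(original, compute_uv=False)
    return (S_marked - S_orig) / alpha
```

Because the singular values summarize energy across the whole frame, a perturbation of them is spread over the spatial domain, which is the intuition behind the imperceptibility and robustness claims.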
Real-time Video Quality Assessment for Analog Television Based on Adaptive Fu... (TELKOMNIKA JOURNAL)
Real-time VQA (Video Quality Assessment) is an important part of the effort to build a tracking antenna system, especially for analog TV. In this case, the VQA must work in real time to assess the video clarity level, and its assessment results are valuable input to the decision-making process. The antenna can then rotate automatically in search of the ideal direction without the user's control, and the video clarity level on the TV screen can reach the optimum according to the user's wishes. The biggest challenge is that the VQA must assess the video clarity level in accordance with the user's visual perception. Therefore, in this study, MOS-VQS (Mean Opinion Score-Video Quality Subjective) was used as a visual perception approach, and an adaptive FIS (Fuzzy Inference System) with membership-function tuning was implemented for decision making, as an effort to build a reliable real-time VQA. The test results show that the real-time VQA has good performance: the average assessment accuracy ranges from 77.2% at the lowest to 88.2% at the highest.
This document discusses a proposed low bit rate audio codec algorithm using discrete wavelet transform. The key aspects of the algorithm are:
1. Choosing an optimal wavelet basis for audio signals and determining the optimal decomposition level in the discrete wavelet transform.
2. Applying thresholding to wavelet coefficients to truncate insignificant coefficients, allowing data compression while maintaining suitable peak signal to noise ratio.
3. Comparing performance of the audio codec using discrete wavelet transform to one using discrete wavelet packet transform.
4. Applying a postfiltering technique to improve the quality of the reconstructed audio signal by estimating and subtracting the error in the coded signal.
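The threshold-the-coefficients step above can be illustrated with a one-level Haar transform, used here as a simple stand-in for the optimal wavelet basis the algorithm selects (which the summary does not name). The function names are assumptions, and the signal length is assumed even.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x.reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one level of the Haar DWT."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size * 2)
    out[0::2], out[1::2] = even, odd
    return out

def compress(signal, threshold):
    """Truncate insignificant detail coefficients below `threshold`."""
    approx, detail = haar_dwt(signal)
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)
    return approx, detail
```

With threshold 0 the transform is perfectly invertible; raising the threshold zeroes small detail coefficients (which then compress to nothing) at the cost of a small, controllable reconstruction error, which is the compression-versus-PSNR trade-off the algorithm tunes.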
A new design reuse approach for VoIP implementation into FPSoCs and ASICs (ijsc)
The aim of this paper is to present a new design reuse approach for automatic generation of a Voice over Internet Protocol (VoIP) hardware description and its implementation in FPSoCs and ASICs. Our motivation for this work is justified by the following arguments. First, VoIP-based system-on-chip (SoC) implementation is an emerging research and development area in which innovative applications can be implemented. Second, these systems are very complex and, due to time-to-market pressure, there is a need to build platforms that help the designer explore different architectural possibilities and choose the circuit that best corresponds to the specifications. Third, we aim to develop for hardware the design methods and tools that are used in software, like the MATLAB tool, for VoIP implementation. To achieve our goal, the proposed design approach is based on a modular design of the VoIP architecture. The originality of our approach is the application of the design-for-reuse (DFR) and design-with-reuse (DWR) concepts. To validate the approach, a case study of a SoC based on the OR1K processor is presented. We demonstrate that the proposed SoC architecture is reconfigurable and scalable and that the final RTL code can be reused for any FPSoC or ASIC technology. As an example, performance measures for the VIRTEX-5 FPGA device family and 65 nm ASIC technology are given in this paper.
Quality of Experience and Quality of Service (Videoguy)
This document discusses quality of experience (QoE) and quality of service (QoS) for IP video conferencing. It explains that QoE is ensured by both application-based QoS (AQoS) and network-based QoS (NQoS). AQoS includes functions within the video conferencing application to enhance performance, like signaling protocols and media handling techniques. NQoS includes functions provided by the network, like scheduling and queuing. Both AQoS and NQoS work together to optimize the end-user experience of real-time video and audio transmission over IP networks.
Requiring only half the bitrate of its predecessor, the new standard – HEVC or H.265 – will significantly reduce the need for bandwidth and expensive, limited spectrum. HEVC (H.265) will enable the launch of new video services and in particular ultra HD television (UHDTV).
State-of-the-art video compression techniques – HEVC/H.265 – can reduce the size of raw video by a factor of about 100 without any noticeable reduction in visual quality. With estimates indicating that compressed real-time video accounts for more than 50 percent of current network traffic, and this figure is set to rise to 90 percent within a few years, HEVC/H.265 will be a welcome relief for network operators.
New services, devices and changing viewing patterns are among the factors contributing to the growth in video traffic as people watch more and more traditional TV and video-streaming services on their mobile devices.
Ericsson has been heavily involved in the standardization of HEVC since it began in 2010, and this Ericsson Review article highlights some of the contributions that have led to the compression efficiency offered by HEVC.
This document provides a summary of Zain Isma Bin Mohamed Lazim's contact information, education, skills, and work experience. Some key points:
- Zain recently completed online courses in Javascript and Android development and holds a Master's degree in ICT focusing on telecommunications.
- He has over 10 years experience in product design and engineering, including at Sony and MIMOS where he improved sensors and developed wireless systems.
- Achievements include establishing an engineering group, managing over 100 PCB projects, and implementing cost-saving and environmentally friendly practices.
This document discusses EJB technology and provides summaries of key concepts:
1. It defines the EJB container model and describes features like security, distributed access, and lifecycle management.
2. It compares the lifecycles of stateless session beans, stateful session beans, entity beans, and message-driven beans.
3. It contrasts stateful and stateless session beans and discusses differences in client state, pooling, lifecycles, and more. It also compares session beans and entity beans in terms of representing processes versus data.
Power efficient and high throughput of FIR filter using block least mean squa... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This document discusses post-processing and rate distortion algorithms for the VP8 video codec. It first provides background on the need for post-processing algorithms to reduce blocking artifacts in compressed video, and for rate control algorithms to regulate bitrates and achieve high video quality within bandwidth constraints. It then summarizes existing in-loop deblocking filters and post-processing algorithms. A novel optimal post-processing/in-loop filtering algorithm is described that can achieve better performance than H.264/AVC or VP8 by computing optimal filter coefficients. Finally, a proposed rate distortion optimization algorithm for VP8 is discussed to improve its rate control and coding efficiency.
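To make the deblocking idea concrete, here is a deliberately simple boundary-smoothing filter: it blends the two pixel rows on either side of each block boundary toward their common average, which is the basic mechanism by which in-loop and post-processing filters hide block edges. This is an illustration only, not the optimal-coefficient algorithm the document describes; the block size and strength parameter are assumptions.

```python
import numpy as np

def deblock_rows(frame, block=8, strength=0.5):
    """Smooth horizontal block boundaries by blending the two boundary rows."""
    out = frame.astype(float).copy()
    for b in range(block, frame.shape[0], block):
        top, bot = out[b - 1].copy(), out[b].copy()
        avg = (top + bot) / 2.0
        # Pull each boundary row part-way toward the shared average.
        out[b - 1] = (1 - strength) * top + strength * avg
        out[b] = (1 - strength) * bot + strength * avg
    return out
```

An optimal filter, as described above, would instead compute coefficients that minimize the error against the original frame rather than using a fixed blend.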
IBM VideoCharger and Digital Library MediaBase.doc (Videoguy)
This document provides an overview of video streaming over the internet. It discusses video compression standards like H.261, H.263, MJPEG, MPEG1, MPEG2 and MPEG4. It also covers internet transport protocols like TCP and UDP, and challenges like firewall penetration. Both commercial streaming products and research projects aiming to improve streaming are reviewed, with limitations of current approaches outlined. The SuperNOVA research project is evaluated against other work seeking to make high quality video streaming over the internet practical.
The document provides information about MPEG compression standards. It discusses the history of MPEG and how it was established in 1988 as a joint effort between ISO and IEC to set standards for audio and video compression. It describes several MPEG standards including MPEG-1, MPEG-2, MPEG-4, MPEG-7, and MPEG-21. MPEG-4 is discussed in more detail, explaining that it offers greater efficiency than MPEG-2, allows encoding of mixed data types, and enables interaction of audio-visual scenes at the receiver end. The document contains diagrams and tables to illustrate key points about the different MPEG standards and compression techniques.
The impact of jitter on the HEVC video streaming with Multiple Coding (HakimSahour)
This document discusses the impact of jitter on video quality when streaming HEVC encoded video over wireless networks. It presents a study evaluating the effects of quantization parameter (QP) values, video content, and jitter on quality of experience (QoE). The study finds that using higher QP values, which lowers bitrate and increases compression, degrades video quality as measured by PSNR. It also finds that different video content results in varying PSNR values for the same encoding settings. Additionally, the results show that adjusting the QP value can help recover from the negative effects of jitter on received video quality. The document proposes using multiple description coding (MDC) to further improve transmission over error-prone wireless channels.
Surveillance systems are expected to record video 24/7, which obviously requires a huge amount of storage space. Even though hard disks are cheaper today, the number of CCTV cameras is also rising sharply in order to boost security. Video compression is the best option for reducing the required storage space; however, existing video compression techniques are not adequate for modern digital surveillance monitoring, as they still produce huge video streams. In this paper, a novel video compression technique is presented along with a critical analysis of the experimental results.
PERFORMANCE EVALUATION OF H.265/MPEG-HEVC, VP9 AND H.264/MPEG-AVC VIDEO CODING (ijma)
This study evaluates the performance of the three latest video codecs: H.265/MPEG-HEVC, H.264/MPEG-AVC, and VP9. The evaluation is based on both subjective and objective quality metrics. The Double Stimulus Impairment Scale (DSIS) assessment metric is used to evaluate the subjective quality of the compressed video sequences, and the Peak Signal-to-Noise Ratio (PSNR) metric is used for the objective evaluation. Moreover, this work studies the effect of frame rate and resolution on the encoders' performance. An extensive number of experiments was conducted with similar encoding configurations for the three studied encoders. The evaluation results show that H.265/MPEG-HEVC provides superior bitrate-saving capabilities compared to H.264 and VP9, while VP9 shows lower encoding time than H.265/MPEG-HEVC but higher encoding time than H.264.
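The PSNR metric used for the objective evaluation depends on the mean squared error (MSE) between the original and impaired frames, measured on a logarithmic scale relative to (2^n - 1)^2 for n-bit samples. A minimal numpy sketch (the function name and the 8-bit default are assumptions):

```python
import numpy as np

def psnr(original, impaired, bits=8):
    """Peak Signal-to-Noise Ratio in dB, relative to (2**bits - 1)**2."""
    mse = np.mean((original.astype(float) - impaired.astype(float)) ** 2)
    if mse == 0:
        return float("inf")            # identical frames: infinite PSNR
    peak = (2 ** bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)
```

For example, two 8-bit frames that differ by exactly 1 everywhere have MSE = 1 and therefore a PSNR of 10 * log10(255^2), about 48.1 dB; for a video sequence, the per-frame values are typically averaged.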
A REAL-TIME H.264/AVC ENCODER & DECODER WITH VERTICAL MODE FOR INTRA FRAME AND ... (csandit)
Video coding standards are being developed to satisfy the requirements of applications for various purposes: better picture quality, higher coding efficiency, and more error robustness. The new international video coding standard H.264/AVC aims at significant improvements in coding efficiency and error robustness in comparison with previous standards such as MPEG-2, H.261, and H.263. A video stream needs to be processed through several steps in order to encode and decode the video so that it is compressed efficiently with the limited hardware and software resources available. The advantages and disadvantages of the available algorithms should be known in order to implement a codec that accomplishes the final requirements. The purpose of this project is to implement all basic building blocks of an H.264 video encoder and decoder. The significance of the project is the inclusion of all components required to encode and decode a video in MATLAB.
11. Performance evaluation of MPEG-4 video tran... (www.iiste.org call for paper) (Alexander Decker)
This document evaluates the performance of transmitting MPEG-4 video over IP networks using two different network types: best-effort and Quality of Service (QoS). The researchers used the NS-2 network simulator along with additional tools to encode video, transmit it over simulated networks, then evaluate the received video quality. Key performance metrics like peak signal-to-noise ratio (PSNR), throughput, and frame/packet statistics were calculated and compared between the best-effort and QoS networks. The results showed that transmitting MPEG-4 video over a QoS network achieved better performance than a best-effort network according to the measured metrics.
Performance evaluation of MPEG-4 video transmission over IP networks (Alexander Decker)
This document summarizes research evaluating the performance of transmitting MPEG-4 video over IP networks with and without quality of service (QoS). The researchers transmitted MPEG-4 video over a best-effort network and a QoS network in a simulation using NS-2. They measured peak signal noise ratio, throughput, and frame/packet statistics to compare the performance between the two networks. The results showed that transmitting video over the QoS network performed better than over the best-effort network.
Video Streaming Compression for Wireless Multimedia Sensor Networks (IOSR Journals)
This document discusses video streaming compression for wireless multimedia sensor networks. It proposes a cross-layer system that jointly controls the video encoding rate, transmission rate, and uses an adaptive parity scheme. At the application layer, video is compressed and divided into packets. These packets are encoded at the transport layer and forwarded through the network layer. An active buffer management scheme and adaptive parity check are used to maximize received video quality over lossy wireless links. Simulation results show the proposed techniques can achieve higher throughput by resequencing dropped packets. The goal is to design an efficient system for wireless transmission of compressed video that optimizes video quality.
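In its simplest, non-adaptive form, the parity idea is a single XOR parity packet per group: if exactly one packet of the group is lost on the wireless link, it can be rebuilt from the survivors without retransmission. A minimal sketch (the function names and the fixed group size are assumptions; the paper's adaptive scheme varies the amount of parity with channel conditions):

```python
import numpy as np

def parity_packet(packets):
    """XOR parity over a group of equal-length packets."""
    parity = np.zeros_like(packets[0])
    for p in packets:
        parity ^= p
    return parity

def recover(received, parity, lost_index):
    """Rebuild the single lost packet by XORing the survivors with the parity."""
    rebuilt = parity.copy()
    for i, p in enumerate(received):
        if i != lost_index:
            rebuilt ^= p
    return rebuilt
```

The trade-off the cross-layer controller manages is that each parity packet costs bandwidth that could otherwise carry video data, so the parity rate should rise only when the link is lossy.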
With the advancement of internet technology, everyone has access to the internet. After Google, YouTube is the second-largest search engine, and people spend approximately 1 billion hours watching video content on YouTube. Editing and processing video is not easy, and the network also plays an important role: an unsteady network can cause video to buffer, which degrades the user's streaming experience. Many people do not even have a computer that can handle the editing of large video files, since editing and processing video taxes both hardware and software. Many video editing programs are available on the internet, either paid or open source. One of the most popular open-source tools is FFmpeg (Fast Forward Moving Picture Expert Group). FFmpeg, together with various other software, can be used in video forensics to find traces in videos; it becomes very difficult to find traces in videos that are highly compressed or of low resolution. In earlier work, fetching data from a robot's camera and encoding it in software raised issues; by researching JNI, NDK, FFmpeg, and video annotations, a video player was created to examine sports video so that users can efficiently see how a player evaluates an action in practice. Demand for multimedia increases as time goes on, and in the current global pandemic everyone has moved to digitalization, from studies to work. In this paper we study FFmpeg and how its features benefit users; combining this highly popular multimedia framework with other software can create useful technologies. FFmpeg is best known for its memory and time efficiency: everything from image processing to video editing can be accomplished with it. H. Sumesh Singha | Dr. Bhuvana J, "A Study on FFmpeg Multimedia Framework", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 5, Issue 4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42362.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/42362/a-study-on-ffmpeg-multimedia-framework/h-sumesh-singha
This document describes a project to design an H.264 video decoder using Verilog. It implements the key decoding blocks like Context-Based Adaptive Binary Arithmetic Coding (CABAC), inverse quantization, and inverse discrete cosine transform. CABAC is the entropy decoding method used in H.264 that is computationally intensive. The project develops hardware modules for these blocks to accelerate decoding and enable real-time performance. It presents the designs of the individual modules and simulation results showing their functionality. The goal is to improve on software implementations by using dedicated hardware for the critical decoding stages.
PERCEPTUALLY LOSSLESS COMPRESSION WITH ERROR CONCEALMENT FOR PERISCOPE AND SO... (sipij)
We present a video compression framework that has two key features. First, we aim at achieving
perceptually lossless compression for low frame rate videos (6 fps). Four well-known video codecs in the
literature have been evaluated and the performance was assessed using four well-known performance
metrics. Second, we investigated the impact of error concealment algorithms for handling corrupted pixels
due to transmission errors in communication channels. Extensive experiments using actual videos have
been performed to demonstrate the proposed framework.
The increasing use of digital video has put into perspective the storage requirements for a large number of video files. Different qualities of video are available, and each requires a different amount of storage. A variety of devices with different pixel densities use these files and accordingly have varying video processing capabilities. This paper presents a novel multimedia video encoding/decoding algorithm that is responsive to the device and its capabilities. It involves a light-weight, multi-layered approach whereby the video streams are divided into different layers that are differentially loaded as per the quality requirements. This differential loading uses a progressive decoding algorithm for the video stream, so a single file can vary the bit rate depending on several factors such as the device's processing capabilities, available memory, screen width, and pixel density for different quality requirements. By incorporating this method into the WebRTC framework, the quality of video can be varied according to network congestion. Our analysis reveals that this method is far more efficient than the transcoding-on-air method used by multimedia servers, which requires special hardware support. For normal servers that use HTML5 for video playback, this method reduces the total bandwidth required and saves a great deal of storage space, resulting in smooth streaming over the web. This new approach makes video files adaptive to the different devices on which they run.
This document discusses the hardware and software requirements for multimedia computers. It covers several topics:
1. It describes different classes of multimedia applications including streaming stored/live audio and video, and real-time interactive audio and video.
2. It discusses various multimedia software like MCI, Video for Windows, QuickTime, DirectX, and authoring tools.
3. It outlines the hardware requirements for multimedia computers as defined by the MPC standards, including components like sound cards, video cards, network cards, and USB/MIDI ports.
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu... (IJECEIAES)
The issue on how to effectively deliver video streaming contents over cloud computing infrastructures is tackled in this study. Basically, quality of service of video streaming is strongly influenced by bandwidth, jitter and data loss problems. A number of intelligent video streaming algorithms are proposed by using different techniques to deal with such issues. This study aims to propose and demonstrate a novel decision making analysis which combines ISO 9126 (international standard for software engineering) and Analytic Hierarchy Process to help experts selecting the best video streaming algorithm for the case of private cloud computing infrastructure. The given case study concluded that Scalable Streaming algorithm is the best algorithm to be implemented for delivering high quality of service of video streaming over the private cloud computing infrastructure.
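The Analytic Hierarchy Process step can be sketched as follows: the decision weights are the principal eigenvector of an expert's pairwise-comparison matrix, normalized to sum to one. This is a generic AHP sketch under assumed names, not the paper's exact criteria or judgment matrix.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector of an AHP pairwise-comparison matrix:
    the principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.argmax(vals.real)            # largest eigenvalue
    w = np.abs(vecs[:, principal].real)
    return w / w.sum()

# Illustrative use: a perfectly consistent judgment matrix built from
# hypothetical weights recovers those weights exactly.
weights = np.array([0.5, 0.3, 0.2])
pairwise = weights[:, None] / weights[None, :]  # A[i, j] = w_i / w_j
priorities = ahp_priorities(pairwise)
```

In practice the expert's matrix is not perfectly consistent, and the gap between the principal eigenvalue and the matrix size gives the consistency ratio used to sanity-check the judgments.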
Extract the Audio from Video by Using Python (IRJET Journal)
This document discusses extracting audio from video files using Python. It begins with an abstract describing a video to audio converter program that allows users to select a video file and convert it to audio, saving the audio file in the same folder. It then discusses the need for video to audio converters as people often prefer just listening to a video's audio rather than watching it. It describes using the moviepy module in Python to extract audio from video files by analyzing the video frame-by-frame and applying speech recognition to the audio. The goal is to create a system that can convert video and audio sources into text to help with documentation.
Similar to Comparison of Cinepak, Intel, Microsoft Video and Indeo Codec for Video Compression (20)
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Comparison of Cinepak, Intel, Microsoft Video and Indeo Codec for Video Compression
The International Journal of Multimedia & Its Applications (IJMA) Vol.7, No.6, December 2015
DOI: 10.5121/ijma.2015.7601
Suleiman Mustafa¹ and Hannan Xiao²
¹Department of Engineering, National Agency for Science and Engineering Infrastructure, Abuja, Nigeria
²School of Computer Science, University of Hertfordshire, Hatfield, UK
ABSTRACT
File size and picture quality are key factors when streaming, storing and transmitting video over networks. This work compares the Cinepak, Intel, Microsoft Video and Indeo codecs for video compression. Peak signal-to-noise ratio (PSNR), the objective measurement most widely used by developers of video processing systems, is used to compare the quality of video compressed with these AVI codecs. PSNR is measured on a logarithmic scale and depends on the mean squared error (MSE) between an original and an impaired image or video, relative to (2^n − 1)^2, the square of the highest possible signal value.
Previous research on video quality assessment has relied mainly on subjective methods, and there is still no standard method for objective assessment. Although compression may become less significant as storage and transmission capabilities improve, at low bandwidths compression is what makes communication possible.
KEYWORDS
Video Compression, Codec, Peak Signal Noise Ratio, Video Quality Measurement
1. INTRODUCTION
High-quality video in multimedia applications and wireless communication has generated interest in digital communication services for sharing real-time video, audio and data. There is increasing demand for portable computers to have access to the same services, and a similar level of connectivity, as wired computers [1]. Users of multimedia applications expect a consistent level of video quality, so it is necessary to determine the quality of the video images displayed to the viewer when evaluating and comparing video. The IEEE 802.11 study group was formed to provide an international standard for wireless local area networks (WLANs) [15].
Two techniques can be used to evaluate video quality: subjective and objective methods. Subjective methods can be difficult to perform, and an accurate measure is hard to obtain because of subjective factors, so developers prefer objective measures [4]. We decided to use peak signal-to-noise ratio (PSNR), the most common objective method. It is widely accepted because it is easy to calculate and its measurements are repeatable. One problem with PSNR is that an original, unimpaired video is needed for the comparison. The availability of original image or video signals that are free from distortion, that is, of perfect quality, affects the type of objective measure that can be performed [10].
2. VIDEO COMPRESSION
There are two compression techniques: lossy and lossless compression. In lossless compression, statistical redundancy in the signal is minimized so that the original signal can be perfectly reconstructed at the receiver; this yields only a modest amount of compression. Lossy compression achieves much greater compression, but the decoded signal is not identical to the original. Because lossless compression can only achieve a modest compression ratio, most practical techniques are based on lossy compression, accepting some distortion in the decoded signal [2]. The goal of video compression is to achieve efficient compression while minimizing the distortion introduced by the compression process.
2.1. Video Compression Standards
This work lies within the MPEG-4 and H.264 compression standards. MPEG-4 was developed to improve transmission of video to mobile devices. We also reviewed the previous standards, MPEG-1, MPEG-2 and MPEG-3, to understand how video coding has been improved for mobile communications; each was designed for particular applications and bit rates. MPEG-1 was developed for rates up to 1.5 Mbit/s and is widely used for .mpg files on the Internet; it was designed around CD-ROM video applications and includes MP3 for digital audio compression. MPEG-2 targets digital television and DVD and was developed for 1.5 to 15 Mbit/s; it improves on MPEG-1 for digital broadcast with more efficient compression of interlaced video. MPEG-4 was developed for multimedia and web compression and is an object-based standard [3].
The latest standards, MPEG-4 and H.264 (also known as Advanced Video Coding), were important to cover in this work because they deal with multimedia communications and were developed with wireless networks in mind. MPEG-4 is concerned with providing flexibility, while H.264 is concerned with efficiency and reliability [2]. Figure 1 below shows the H.261 encoder system; the reverse happens at the decoder. The H.261 standard only specifies the decoding for each of the compression options, which allows manufacturers of codecs, such as those used in this project, to differentiate their products [6].
Figure 1 H.261 Encoder
Source: (Array Microsystems, Inc Video Compression white paper, 1997)
2.2. Objective Measurement of Video Quality
The goal of objective video quality assessment research is to develop quality metrics that can predict image and video quality automatically. Common methods are mean squared error (MSE) and peak signal-to-noise ratio (PSNR), which make use of an original reference [7].
For future development in objective measurement of video quality, peak signal-to-noise ratio could be replaced with a metric that more closely matches the behavior of the human visual system. Color and motion play a key role in how we perceive video quality [4]. Simply calculating distortion between videos may be unrealistic because some of the distortion occurs where the human eye is less sensitive; methods that match the subjective perception of the eye more closely would produce more accurate and realistic results. With more time available, this work could be extended by using subjective methods to verify the objective measurements, which would involve asking users to judge which videos are of better quality.
The ITU-T Video Quality Experts Group (VQEG) aims to compare and test potential models for objective evaluation. There is still no standard, accurate system for digitally decoded video, but the aim is an automatic system that solves the problem of accurate measurement. The new generation of advanced video compression technologies, including MPEG-4 Part 10 (also known as H.264/MPEG-4 AVC) and Windows Media 9, is key to the distribution of broadcast-quality video over IP networks. Advanced video compression reduces the bandwidth required for high-quality video by about half compared with existing MPEG-2 technology, and the ability to implement the new standards has important benefits for service providers [18].
From the work of the ITU study group VQEG, several sophisticated quality measures have been compared with PSNR. The results showed that for MPEG-2 coding at bit rates of 0.8 Mbit/s or more, none of the measures was significantly more accurate than PSNR [16]. It can therefore be safely concluded that, for some time to come, PSNR may be used as a benchmark for measuring video quality.
2.3. Coded Video
Coded video is produced at a variable or constant rate by a video encoder; the average bit rate and the bit rate variation are the important parameters. Distortion is introduced because the decoded video sequence is not identical to the original video signal. For quality of service, the tolerable delay depends on the method of transmission, such as broadcast, streaming, playback or video conferencing [4]. Unlike data, coded video has very low tolerance to delay, and dropped video information cannot be retransmitted, so error-control methods are used for compressed video data [5]. Choosing a codec can be difficult because manufacturers present their capabilities in the ways that best suit their products; the majority of commercial codecs appear to be aimed at streaming or storage applications [2].
The increasing demand to incorporate video data into telecommunication services, the corporate environment, the entertainment industry and the home has made video technology a necessity. However, digital video data rates remain very large and require a lot of bandwidth. There are various forms in which uncompressed videos are encoded and decoded [2]. In this project the codecs are all AVI video codecs; the video data in an AVI file can be formatted and compressed in a variety of ways [24].
3. RELATED WORK
Early research comparing video codecs has used both subjective comparison with convenient visualization and detailed objective comparison. Series of subjective comparisons of popular codecs have been carried out. In one such comparison, all codecs were tested in a 2-pass setup using the settings suggested by their developers, and various stages of testing were carried out over a wide range of codecs. The best performers, according to the research, were Ateme, x264, XviD and DivX 6.1, in that order [22].
The main aim of comparing various codecs is to investigate which compression will produce a video of the smallest possible file size that is closest to the original video. Most objective research has used peak signal-to-noise ratio as the video quality metric [21]. The MSU MPEG-4 SP/ASP codec comparison, for instance, tested different versions of MPEG-4 codecs. Another goal of that testing was to trace the evolution of the DivX codec, and it showed that the latest DivX had the advantage, with better compression quality than XviD [20].
Another study compared the Advanced Video Coding (AVC/H.264) standard as implemented by the VideoLAN x264 project, the VP8 codec provided by the Google/WebM project, and HEVC TMuC v0.5. Video content for streaming to mobile devices was tested. For the same video distortion in terms of PSNR and structural similarity, the results revealed that on average HEVC TMuC v0.5 provides a 46% bitrate reduction compared with AVC/H.264, which in turn provides a 21% bitrate reduction compared with Google/WebM VP8 [23].
4. EXPERIMENT DESIGN
This work compares the Cinepak, Intel, Microsoft Video and Indeo codecs for video compression. The research was carried out by performing experiments to measure the quality of video compressed using the Radius Cinepak, Intel Indeo R3.2, Intel Indeo 4.5, Microsoft Video 1 and Indeo Video codecs. The method used for comparing the videos is peak signal-to-noise ratio, calculated from the mean squared error (MSE) between an original and a compressed video frame, relative to the square of the highest possible signal value in the image, (2^n − 1)^2 [4].
4.1. Methodology
The method used to measure video quality in this work is peak signal-to-noise ratio (PSNR). It is calculated from the mean squared error (MSE) between an original and a compressed video frame, relative to the square of the highest possible signal value in the image, (2^n − 1)^2 [4]. Because a digital picture is a two-dimensional matrix, the SNR of an image can be considered a matrix-based quality parameter, which for digital video is the peak signal-to-noise ratio. The word "peak" refers to the maximum value of a pixel [7]. The unit of measure is the decibel (dB). PSNR relies on the pixel luminance and chrominance values of the input and output video frames and does not involve any subjective human intervention in the quality assessment [5].
As mentioned in the introduction, PSNR measures the difference between two images: a reference image and a compressed image. In this project the reference, or unimpaired, videos are the original uncompressed ones, and the compressed videos are those converted using the codecs. PSNR analysis uses a standard mathematical model to measure an objective difference between two images. It is commonly used in the development and analysis of compression algorithms, and for comparing visual quality between different compression systems:

PSNR = 10 * log10(255^2 / MSE)
Pseudo code for each pixel:
{
Difference = Pixel from Image A – Pixel from Image B
SummedError = SummedError + Difference * Difference
}
MeanSquaredError = SummedError / Number of Pixels
RMSE = sqrt(MeanSquaredError)
PSNR = 20 * log10(255 / RMSE)
where the original video uses 8 bits per pixel, so the peak is 255.
Source: http://www.cineform.com/technology/HDQualityAnalysis10bit/HDmethodology10bit.htm
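The pseudocode above can be turned into a small runnable sketch. The function below is a hypothetical helper, not the tool used in the paper; it computes PSNR over one 8-bit component of a frame represented as a flat list of pixel values:

```python
import math

def psnr(frame_a, frame_b, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-length
    sequences of 8-bit pixel values (one colour component)."""
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must have the same number of pixels")
    # Sum of squared per-pixel differences, as in the pseudocode.
    summed_error = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))
    if summed_error == 0:
        return math.inf  # identical frames: PSNR is unbounded
    mse = summed_error / len(frame_a)
    return 10 * math.log10(peak ** 2 / mse)
```

For example, a frame whose pixels each differ by 4 from the reference has MSE = 16 and PSNR = 10 * log10(255^2 / 16) ≈ 36.1 dB, just above the 35 dB "good quality" threshold used later in this paper.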
Three values are produced for each frame, Y, U and V, representing luminance and chrominance. A frame consists of three rectangular arrays of integer-valued samples. Y is called luma and represents brightness, while U and V are the two chroma components, sometimes written Cb and Cr respectively. U (Cb) represents the extent to which the colour deviates from gray towards blue, while V (Cr) represents the extent to which it deviates from gray towards red [8].
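As an illustration of how these components relate to RGB, the sketch below uses the BT.601 full-range coefficients; this is one common convention, assumed here, since the paper does not state which matrix its tools use:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to Y'CbCr using BT.601
    full-range coefficients (an assumed convention)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b           # luma: weighted brightness
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr
```

A neutral gray pixel such as (128, 128, 128) maps to Y = 128 with both chroma components at the mid-scale value 128, i.e. no deviation towards blue or red.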
4.2. Choosing a Video Quality Metric
The Video Quality Studio 0.4 RC3 software used here measures the luminance and chrominance of each frame of the video files. There are two sections for selecting video files from the hard disk: one for the reference video and one for the compressed/impaired video. The output is produced as an Excel file showing the Y, U and V components of each frame. Video coding often uses a colour representation with the three components of a tristimulus colour representation for the spatial area represented in the image [8].
Attempts have been made to design a method for objective picture quality measurement that improves upon PSNR. One such method is based on the concept of Just Noticeable Differences (JND), and equipment that performs PSNR and JND-based measures is available. Specific single-ended measurements of the impairments introduced by compression have also received some attention; this method makes use of the decoded video together with information from the bit stream [16]. Objective criteria produce accurate and repeatable results, but there are as yet no objective measurement systems that completely reproduce the subjective experience of watching a video [4].
4.3. Selecting Video Files
The video files had to be in AVI format for the Video Quality Studio 0.4 RC3 software. AVI (Audio Video Interleave), as defined by Microsoft, is essentially a number of still images, called frames, combined sequentially in one file. When the file is opened in a media player, the player moves through the AVI file and displays each consecutive image, just as movie film rolling through a projector displays a movie [11]. To observe the effect of compression we used video files with different properties, such as length, content and file size. The videos range from a few seconds to about a minute long, and include both fast-moving and slow-moving pictures, among other characteristics. All videos used in the experiment have a frame rate of 23 frames/second, a video sample size of 24 bits, and an audio sample size of 16 bits. The original uncompressed reference videos were recorded with a 10.1-megapixel camera and converted to five different AVI formats using the following codecs: Radius Cinepak, Intel Indeo R3.2, Intel Indeo 4.5, Indeo Video 5.10 and Microsoft Video 1. The table below lists the video files used:
Table 1 List of Video files and their properties.
| File Name | Codec Type | Data rate (kbps) | Size (MB) |
|---|---|---|---|
| Frontalis- Video Clip[Cinepak] | Cinepak codec by Radius | 322 | 15.8 |
| Frontalis- Video Clip[II 4.5] | Intel Indeo video R3.2 | 299 | 10.2 |
| Frontalis- Video Clip[II r3.2] | Intel Indeo video 4.5 | 214 | 14.3 |
| Frontalis- Video Clip[Indeo video] | Indeo video 5.10 | 261 | 12.5 |
| Frontalis- Video Clip[Microsoft] | Microsoft Video 1 | 861 | 41.3 |
| Frontalis- Video Clip[reference] | (No codec) | 1411 | 101 |
| Infraspinatus- Video Clip[Cinepak] | Cinepak codec by Radius | 344 | 16.9 |
| Infraspinatus- Video Clip[II 4.5] | Intel Indeo video R3.2 | 334 | 16.4 |
| Infraspinatus- Video Clip[II r3.2] | Intel Indeo video 4.5 | 230 | 11.3 |
| Infraspinatus- Video Clip[Indeo video] | Indeo video 5.10 | 189 | 13.7 |
| Infraspinatus- Video Clip[Microsoft] | Microsoft Video 1 | 883 | 43.4 |
| Infraspinatus- Video Clip[reference] | (No codec) | 1435 | 104 |
| Intro Question [Cinepak] | Cinepak codec by Radius | 495 | 24.3 |
| Intro Question [II 4.5] | Intel Indeo video R3.2 | 481 | 23.6 |
| Intro Question [II r3.2] | Intel Indeo video 4.5 | 331 | 16.3 |
| Intro Question [Indeo video] | Indeo video 5.10 | 272 | 19.7 |
| Intro Question [Microsoft] | Microsoft Video 1 | 1271 | 62.5 |
| Intro Question [reference] | (No codec) | 2066 | 150 |
| Quiz 02[Cinepak] | Cinepak codec by Radius | 155 | 7.6 |
| Quiz 02[II 4.5] | Intel Indeo video 4.5 | 241 | 8.2 |
| Quiz 02[II r3.2] | Intel Indeo video R3.2 | 136 | 9.1 |
| Quiz 02[Indeo video] | Indeo video 5.10 | 177 | 8.5 |
| Quiz 02[Microsoft] | Microsoft Video 1 | 757 | 36.3 |
| Quiz 02[reference] | (No codec) | 1041 | 74.5 |
| Race[Cinepak] | Cinepak codec by Radius | 322 | 15.8 |
| Race[II 4.5] | Intel Indeo video 4.5 | 299 | 10.2 |
| Race[II r3.2] | Intel Indeo video R3.2 | 214 | 14.3 |
| Race[Indeo video] | Indeo video 5.10 | 261 | 12.5 |
| Race[Microsoft] | Microsoft Video 1 | 861 | 41.3 |
| Race[reference] | (No codec) | 1411 | 101 |
| Alien Eye [Cinepak] | Cinepak codec by Radius | 837 | 41.1 |
| Alien Eye [II 4.5] | Intel Indeo video 4.5 | 777 | 26.5 |
| Alien Eye [II r3.2] | Intel Indeo video R3.2 | 556 | 37.2 |
| Alien Eye [Indeo video] | Indeo video 5.10 | 601 | 32.5 |
| Alien Eye [Microsoft] | Microsoft Video 1 | 3669 | 107.4 |
| Alien Eye [reference] | (No codec) | 2176 | 262.6 |
4.4. Choosing Codecs
We used AVI MPEG video converter software to convert other forms of video files to AVI format. Another advantage of this software was that it could convert to AVI videos using the different codecs tested. The codecs it supports are Cinepak, Intel, Microsoft Video, Indeo, XviD and DivX, but we could not get all the videos to convert with XviD and DivX. The remaining codecs, Cinepak, Intel, Microsoft Video and Indeo, all converted the videos successfully and were therefore used.
Manufacturers tend to present their codecs in ways that favor their products (I. E. G. Richardson, 2002), but by studying them we gained a better understanding of how they work. There are three common color-encoding schemes used in broadcast television systems around the world. Most European countries and many other nations use Phase Alternating Line (PAL), first introduced in Britain and Germany in 1967. The United States and Japan use the NTSC (National Television System Committee) standard, while France and a few other nations use SECAM (Séquentiel Couleur à Mémoire, or Sequential Color with Memory). Y refers to the luminance, a weighted sum of the red, green and blue components; the human visual system is most sensitive to the luminance component of an image. Analog video systems such as NTSC, PAL and SECAM transmit color video as a luminance (Y) signal and two color-difference, or chrominance, signals [11]. Below is some brief information about the codecs used for compressing the videos in this experiment.
4.4.1. Cinepak
Cinepak is a vector-quantization-based image compression codec with adaptive vector density: each frame is segmented into 4x4 pixel blocks, and each block is coded using 1 or 4 vectors. (Intel Indeo also uses vector-quantization-based image compression.) Radius Cinepak's strength comes from computational simplicity at the decoder rather than from bit rate versus quality performance. This codec compressed most videos successfully and quickly [12].
4.4.2. Intel Indeo R3.2 and 4.5
Indeo was originally known as Real Time Video when Intel developed it in the 1980s. Indeo is very similar to the Cinepak codec: it is well suited to CD-ROM, has fairly high compression times, and plays back on a wide variety of machines. The recommended key frame interval for Indeo is 4, regardless of the frame rate. Both codecs compressed all the videos successfully and quickly [13].
4.4.3. Indeo Video 5.10
This codec uses a newer wavelet compression algorithm that greatly improves visual quality. It has been optimized to provide the best possible performance for Indeo on fast systems, especially with the newer scalability features. This codec compressed only some of the videos and took longer than all the other codecs; those that could not be compressed were not used for the experiments. For this reason the number of videos originally planned for was reduced [14].
4.4.4. Microsoft Video
The Microsoft Video-1 codec has a simpler algorithm than other modern video compression methods. It operates on 4x4 blocks of pixels, which means the width and height of the source data must both be divisible by 4. As with decoding a Microsoft BMP image, decoding a frame of Video-1 data is a bottom-to-top operation [17]. The codec did not compress as fast as Cinepak or the Intel Indeo codecs, but was faster than Indeo Video 5.10. It also compressed only some of the videos, and those that could not be compressed were not used in the experiment.
5. EXPERIMENT RESULTS
The values Y, U and V are the luminance and chrominance of the video, as discussed in the methodology. Y is the luma and represents brightness, while U and V are the two chroma components, also written Cb and Cr. U (Cb) represents the extent to which the color deviates from gray towards blue, and V (Cr) the extent to which it deviates from gray towards red [8]. YUV is the color space used in the European PAL broadcast television standard. U is very similar to the difference between the blue and yellow components of a color image, and V to the difference between the red and green components. There is evidence that the human visual system processes color information into something like a luminance channel, a blue-yellow channel and a red-green channel; for example, while we perceive blue-green hues, we never perceive a hue that is simultaneously blue and yellow. This may be why the YUV color space of PAL is so useful [11].
The unit for peak signal-to-noise ratio is the decibel (dB), and a higher value indicates better quality video. PSNR = 10 * log10(255^2 / MSE); the original videos use 8 bits per pixel, so the peak is 255. PSNR values greater than 35 dB are considered good quality. As this project deals with colored videos, average values for the Y, U and V components are reported. The picture resolution and frame rate are indicated because the bit rate is directly proportional to the number of pixels per frame and the number of frames coded per second. The total number of frames, resolution and length of each video are shown below; a frame rate of 23 frames per second was used in all experiments. The results below are the average PSNR over all frames in each video, together with the standard deviation of the per-frame PSNR values. The tables below show the average and standard deviation for each video and codec.
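The per-video statistics reported in the tables can be reproduced with a short sketch. Here `psnr_values` stands for the per-frame PSNR list of one component (Y, U or V) exported by the measurement tool; the sample standard deviation is an assumption, since the paper does not state which variant it uses:

```python
import statistics

def summarize(psnr_values):
    """Average and sample standard deviation of per-frame PSNR values,
    as reported per component in the result tables (dB)."""
    avg = statistics.fmean(psnr_values)   # mean over all frames
    std = statistics.stdev(psnr_values)   # sample standard deviation (assumed)
    return round(avg, 4), round(std, 6)
```

For instance, three frames at 40, 42 and 44 dB summarize to an average of 42.0 dB with a standard deviation of 2.0 dB.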
Table 2 Infraspinatus Video: Frames = 1215, Length = 50 seconds, Resolution = 192 x 144. Average PSNR and standard deviation (dB) per component.

| Codec | Avg Y | Avg U | Avg V | Std Y | Std U | Std V |
|---|---|---|---|---|---|---|
| Radius Cinepak | 13.7345 | 25.0233 | 23.3655 | 0.428589 | 0.367764 | 0.407782 |
| Intel Indeo video R3.2 | 6.53445 | 20.2398 | 21.6978 | 0.072323 | 0.33221 | 0.325263 |
| Intel Indeo video 4.5 | 39.1169 | 42.3003 | 42.0153 | 0.594707 | 0.701137 | 0.59316 |
| Indeo video 5.10 | 42.5232 | 43.3078 | 43.305 | 0.997392 | 0.793095 | 0.706637 |
| Microsoft Video 1 | 35.8655 | 46.5737 | 45.4756 | 0.234032 | 0.448261 | 0.442202 |
Table 3 Frontalis Video: Frames = 1178, Length = 49 seconds, Resolution = 192 x 144. Average PSNR and standard deviation (dB) per component.

| Codec | Avg Y | Avg U | Avg V | Std Y | Std U | Std V |
|---|---|---|---|---|---|---|
| Radius Cinepak | 12.4925 | 21.21 | 17.9829 | 1.14014 | 0.715839 | 0.708884 |
| Intel Indeo video R3.2 | 6.17462 | 19.7556 | 20.3806 | 0.162804 | 0.3165 | 0.313484 |
| Intel Indeo video 4.5 | 43.0715 | 43.9592 | 43.5472 | 1.22018 | 1.14076 | 0.715536 |
| Indeo video 5.10 | 43.1586 | 43.8729 | 43.3251 | 1.051 | 0.791641 | 0.615129 |
| Microsoft Video 1 | 37.0314 | 45.4159 | 44.7032 | 0.232984 | 0.218801 | 0.185936 |
Table 4 Intro Question Video: Frames = 1097, Length = 45 seconds, Resolution = 400 x 448. Average PSNR and standard deviation (dB) per component.

| Codec | Avg Y | Avg U | Avg V | Std Y | Std U | Std V |
|---|---|---|---|---|---|---|
| Radius Cinepak | 15.7287 | 24.7749 | 40.529 | 0.696676 | 0.626488 | 0.628696 |
| Intel Indeo video R3.2 | 13.118 | 13.5661 | 29.7467 | 0.279379 | 0.0473019 | 0.0485187 |
| Intel Indeo video 4.5 | 34.9369 | 29.6101 | 43.7315 | 0.544054 | 0.81499 | 0.563606 |
| Indeo video 5.10 | 49.7683 | 29.5921 | 43.9668 | 0.67752 | 0.810964 | 0.613135 |
| Microsoft Video 1 | 46.7505 | 30.7454 | 45.3311 | 0.927657 | 0.843353 | 0.65692 |
Table 5 Quiz 02 Video: Frames = 1178, Length = 49 seconds, Resolution = 192 x 144. Average PSNR and standard deviation (dB) per component.

| Codec | Avg Y | Avg U | Avg V | Std Y | Std U | Std V |
|---|---|---|---|---|---|---|
| Radius Cinepak | 16.8062 | 27.8529 | 41.8063 | 0.26265 | 0.230529 | 0.139003 |
| Intel Indeo video R3.2 | 14.0409 | 13.6811 | 29.9228 | 0.103465 | 0.0179045 | 0.0202713 |
| Intel Indeo video 4.5 | 32.672 | 36.009 | 46.6161 | 0.488761 | 0.271639 | 0.102116 |
| Indeo video 5.10 | 47.8617 | 35.9763 | 46.9077 | 0.354362 | 0.266689 | 0.18454 |
| Microsoft Video 1 | 42.7775 | 39.2559 | 55.2206 | 0.24554 | 0.0500527 | 0.272689 |
Table 6. Race Video: Frames = 392, Length = 16 seconds, Resolution = 352 x 240

Codec                  | Average PSNR (dB)                      | Standard deviation
Radius Cinepak         | Y = 16.2319, U = 30.1547, V = 33.2701  | Y = 1.89294, U = 4.91656, V = 4.56857
Intel Indeo video R3.2 | Y = 10.9564, U = 29.3925, V = 30.6536  | Y = 1.35758, U = 1.49487, V = 1.08814
Intel Indeo video 4.5  | Y = 44.3539, U = 44.2083, V = 47.3081  | Y = 2.16613, U = 1.32188, V = 1.3906
Indeo video 5.10       | Y = 45.8188, U = 44.4679, V = 47.5591  | Y = 1.45584, U = 1.14082, V = 0.952739
Microsoft Video 1      | Y = 38.9927, U = 46.9736, V = 47.7328  | Y = 0.349535, U = 0.311646, V = 0.252711
Table 7. Alien Eye Video: Frames = 382, Length = 15 seconds, Resolution = 352 x 240

Codec                  | Average PSNR (dB)                      | Standard deviation
Radius Cinepak         | Y = 17.0744, U = 37.0833, V = 38.4051  | Y = 2.77705, U = 2.17072, V = 2.36064
Intel Indeo video R3.2 | Y = 11.8769, U = 37.4427, V = 37.5588  | Y = 2.76043, U = 2.2202, V = 2.3764
Intel Indeo video 4.5  | Y = 41.2606, U = 46.9533, V = 48.6508  | Y = 1.64291, U = 1.16645, V = 0.976556
Indeo video 5.10       | Y = 42.4673, U = 46.8702, V = 48.4842  | Y = 1.19747, U = 1.08711, V = 0.922248
Microsoft Video 1      | Y = 36.5074, U = 46.2448, V = 45.6937  | Y = 0.492858, U = 0.216027, V = 0.268167
5.1. Analysis and Interpretation of Results
The codecs successfully compressed the files to less than 50% of the original file size, and in most cases to around 70% smaller than the reference video. Ordered from smallest to largest compressed file size, the codecs were Intel Indeo video R3.2, Indeo video 5.10, Intel Indeo video 4.5, Radius Cinepak and then Microsoft Video 1. There was no significant difference among the compressed file sizes except for the Microsoft Video 1 codec, whose output was much larger. In terms of PSNR, the Y, U and V values from highest to lowest were Indeo video 5.10, Intel Indeo video 4.5, Microsoft Video 1, Radius Cinepak and finally Intel Indeo video R3.2.
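The size reductions reported above can be expressed as a simple percentage of the reference file size. The sketch below is illustrative only: the byte counts are hypothetical, not figures from the experiment.

```python
def size_reduction(original_bytes: int, compressed_bytes: int) -> float:
    """Percentage reduction in file size relative to the original."""
    return 100.0 * (original_bytes - compressed_bytes) / original_bytes

# Hypothetical sizes: a 10 MB reference clip compressed to 3 MB
# corresponds to the "70% less in size" case above.
print(size_reduction(10_000_000, 3_000_000))  # 70.0
# Compressing to half the original size is a 50% reduction.
print(size_reduction(10_000_000, 5_000_000))  # 50.0
```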
6. CONCLUSIONS
The results of the experiment show that the codecs produced variable results but were consistent across the set of videos tested. Indeo video 5.10 achieved the highest PSNR values and the second smallest file sizes, and is therefore recommended as the best codec for the AVI file format.
PSNR does not always correlate well with subjective quality measurements. This is because changes at the pixel level, particularly in the U and V chrominance channels, are not always detected by the human visual system and can be ignored. A solution that gives more realistic results is therefore to calculate PSNR on the luma (Y) channel alone, because luma is the best channel for detecting errors that the human eye is most likely to perceive. By analyzing the luma channel directly, we also avoid any PSNR error that may occur in YUV to RGB conversion, and therefore obtain a more accurate measure of the codec's quality performance [19].
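The luma-only measurement described above can be sketched as follows. This is an illustrative implementation of the PSNR definition used in this paper (logarithmic, MSE-based, relative to the (2^n - 1)^2 peak), applied to toy 2 x 2 Y planes rather than the experimental videos.

```python
import math

def psnr(reference, impaired, bits_per_sample=8):
    """PSNR in dB between two equal-length sequences of samples,
    e.g. the Y (luma) planes of a reference and a decoded frame."""
    assert len(reference) == len(impaired)
    mse = sum((r - i) ** 2 for r, i in zip(reference, impaired)) / len(reference)
    if mse == 0:
        return float("inf")  # identical planes: PSNR is unbounded
    peak = (2 ** bits_per_sample - 1) ** 2  # (2^n - 1)^2, 255^2 for 8-bit video
    return 10.0 * math.log10(peak / mse)

# Toy 2x2 8-bit luma planes (hypothetical sample values)
ref = [16, 128, 200, 235]
dec = [17, 126, 201, 233]
print(round(psnr(ref, dec), 2))  # 44.15
```

Averaging this per-frame value over all frames of a clip gives the kind of per-channel averages reported in Tables 3 to 7.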
An alternative to YUV is the RGB (red/green/blue) color space, where each pixel is represented by three numbers giving the proportions of the three primary colors red, green and blue. Because the human eye is more sensitive to brightness than to color, RGB is not very efficient for compression [4]. Each color component picture is partitioned into 8 x 8 pixel blocks of picture samples. Instead of coding each block separately, H.261 groups 4 Y blocks, 1 U block and 1 V block together. The MPEG-4 compression algorithm was designed to address the need for higher picture quality and increased system flexibility; as in H.261, only the YUV color component separation is allowed by the standard, but unlike H.261 the frame size is not fixed. Each sample value indicates brightness, or luminance, with a larger number meaning a brighter sample. For a sample represented by n bits, 0 represents black and (2^n - 1) represents white [4]. Luminance is usually represented with 8 bits per sample for videos such as those used in the experiment.
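The 4 Y + 1 U + 1 V grouping described above corresponds to 4:2:0 chroma subsampling, where each chroma sample covers a 2 x 2 block of luma samples. The sketch below illustrates the resulting plane sizes and the 8-bit black/white levels; the 192 x 144 resolution is taken from one of the test clips, while the function name is ours.

```python
def yuv420_plane_sizes(width: int, height: int):
    """Sample counts for the Y, U and V planes under 4:2:0 subsampling:
    one U and one V sample per 2x2 block of luma samples."""
    y = width * height
    u = v = (width // 2) * (height // 2)
    return y, u, v

# For n = 8 bits per sample, 0 is black and 2**8 - 1 = 255 is white.
print(2 ** 8 - 1)  # 255

# Plane sizes for a 192 x 144 frame (the Frontalis / Quiz 02 resolution):
print(yuv420_plane_sizes(192, 144))  # (27648, 6912, 6912)
```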
ACKNOWLEDGEMENT
We would like to thank the University of Hertfordshire Multimedia Laboratory and its staff for their assistance and for the facilities used in conducting this research.
REFERENCES
[1] M Freytes, C.E Rodriguez and C.A Marques, 2001. “Real-Time H.263+ Video Transmission on
802.11 Wireless LAN”, IEEE SSAC.
[2] I.E.G Richardson, 2003. “H.264 and MPEG-4 Video Compression: Video Coding for Next-
Generation Multimedia”. Wiley & Sons Publication.
[3] Video Compression Tutorial, http://www.wave-report.com/tutorials/VC.htm
[4] I.E.G Richardson, 2002. “Video Codec Design: Developing Image and Video Compression Systems”.
Wiley & Sons Publication.
[5] A.H Sadka, 2002. “Compressed Video Communications”. Wiley & Sons Publication.
[6] Array Microsystems Inc, 1997. Video Compression white paper.
[7] Z Wang, and A.C Bovik, March 2002. “A universal image quality index”. IEEE Signal Processing
Letters, vol. 9, no. 3, pp. 81-84.
[8] G.J Sullivan and T Wiegand, January 2005. “Video Compression: From concepts to the H.264/AVC
Standard”. Proceedings of the IEEE volume 93.
[9] S Han and B Girod, 2002. “Robust and Efficient Scalable Video Coding with Leaky Prediction”.
International Conference on Image Processing volume 2
[10] Z Wang, H.R. Sheith and A.C Bovik, September 2003. “Objective Video Quality Assessment”,
Chapter 41 in the Handbook of Video databases: Design and Applications.
[11] J F McGowan, 2003. AVI Overview: Audio and Video Codec, available at
http://www.jmcgowan.com/
[12] Cinepak CODEC, http://www.csse.monash.edu.au/~timf/videocodec.html
[13] Intel Indeo R3.2 and 4.5 CODEC,
http://www.siggraph.org/education/materials/HyperGraph/video/codecs/Indeo.html
[14] Indeo Video 5.10 CODEC, http://developer.intel.com/ial/indeo/index.htm
[15] B Hong, 2002, Introduction to Multimedia Technology, Multimedia Communication Laboratory,
University of Texas, Dallas.
[16] M Knee, 2001. “A robust, effective and accurate single-ended picture quality measure for MPEG-2”,
available at http://www.ext.crc.ca/vqeg/frames.html
[17] M Melanson, March 13, 2003. Description of the Microsoft Video-1 Decoding Algorithm, volume
1.2.
[18] VQEG: The Video Quality Experts Group, http://www.vqeg.org/
[19] Quality Analysis Methodology,
http://www.cineform.com/technology/HDQualityAnalysis10bit/HDmethodology10bit.htm
[20] D.Vatolin et al, 2005. “MPEG-4 SP/ASP Video Codecs Comparison” CS MSU Graphics and Media
Lab Video Group
[21] Thomos, N., Boulgouris, N. V., & Strintzis, M. G. January 2006. “Optimized Transmission of
JPEG2000 Streams Over Wireless Channels”. IEEE Transactions on Image Processing , 15 (1).
[22] Series of Doom9 codec comparisons, http://www.doom9.org/codec-comparisons.htm
[23] E. Ohwovoriole and Yiannis Andreopoulos, 2010. “Rate-Distortion Performance of Contemporary
Video Codecs: Comparison of Google/WebM VP8, AVC/H.264 and HEVC TMuC”. University
College London.
[24] John McGowan, AVI Overview: Audio and Video Codecs, available at
http://www.jmcgowan.com/avicodecs.html#Codec
[25] Video Quality Studio 0.4 RC3, http://www.everwicked.com/vqstudio/VideoQualityStudio-
0.4RC3.exe
[26] B Girod, N. Faerber and E. Steinbach, 1996. “Standards Based Video Communications at very Low
Bit rates”. European Signal Processing Conference (EUSIPCO) Proceedings.
[27] H Wu, Z Yu, S Winkler and T Chen, October 2000. “A multi-metric objective picture quality
measurement model for MPEG video”. IEEE Trans. Circuits and Systems for Video Technology.
[28] Intel Indeo R3.2 and 4.5,
http://www.siggraph.org/education/materials/HyperGraph/video/codecs/Indeo.html
[29] J Ostermann, J Bormans, D List, D Marpe, M Narroschke, F Pereira, T Stockhammer and T Wedi,
2004. “Video coding with H.264/AVC: Tools, performance and complexity”. IEEE Circuits and
Systems Magazine.
[30] K.T Tan and M Ghanbari, October 2000. “A multi-metric objective picture quality measurement
model for MPEG video”. IEEE Trans. Circuits and Systems for Video Technology.
[31] D Stracham, February, 1996. “Video Compression”, Society of Motion Picture & Television
Engineers (SMPTE) Journal 68-73.
[32] A.B Watson, J.Hu, J.F. McGowan III and J.B.Mulligan, 1999. Design and performance of a digital
video quality metric, Human Vision and Electronic Imaging IV. Proceedings 3644, SPIE:
Bellingham, WA, page. 168–174.
AUTHORS
Suleiman Mustafa presently works with National Agency for Science and Engineering
Infrastructure (NASENI), Abuja Nigeria as a Program Analyst since 2010. He holds a
Master’s degree in Multimedia Technology (2006) and a Bachelor’s degree in
Computer Science (2004) both from the University of Hertfordshire, UK. He is a
member of the Nigeria Computer Society (NCS) and Computer Professionals of Nigeria
(CPN).
Dr Hannan Xiao received her PhD from the Department of Electrical & Computer
Engineering, the National University of Singapore in 2003, and B.Eng and M.Eng
degrees from the Department of Electronics & Information System Engineering, the
Huazhong University of Science & Technology, China. Dr Xiao has been with the
University of Hertfordshire as a lecturer/senior lecturer since October 2003.