TRANSRATING EFFICIENCY — BIT-RATE REDUCTION METHODS
TO ENABLE MORE SERVICES OVER EXISTING BANDWIDTH
L. Lastein and B. Reul
BarcoNet – a Scientific Atlanta Company, Denmark
ABSTRACT
This paper explains the efficiency of three transrating and statistical re-
multiplexing techniques, examines how they work and describes the
combinations most suitable for different real-life applications in Digital
Headends, Play-outs and regional hubs in Broadcast, Terrestrial, Satellite
and Cable Networks.
The three techniques are known as:
• Time-shifting of MPEG-2 packets
• Open loop transrating
• Closed loop transrating
INTRODUCTION
The introduction of MPEG-2 audio and video encoding has created a multitude of new
potential revenue streams. One of the major benefits for a network operator is the
compression of both video and audio to fit into a significantly lower bandwidth. The higher
the cost of the accessible bandwidth, the greater the benefit gained from compression. But the
technology has also meant new challenges for Digital TV (DTV) networks, be it on cable,
satellite or wireless platforms. The parameters used to encode services by the originator
may not fit the business model of the DTV operator. Also, technical issues arising from more
efficient encoding technologies (such as statistical multiplexing) create a demand for
controlling the most important MPEG-2 parameter of them all: the video encoding rates.
What is the problem?
As MPEG-2 compression technology has matured and improved its bandwidth usage
efficiency, one of the major breakthroughs for encoding has been the deployment of
statistical multiplexing. Video programs can now be encoded using a variable bit-rate
according to the actual needs of the specific program compared to other programs in the
same multiplex. When combined with the typical business scenario from a DTV-operator, a
problem arises. The operator has to aggregate existing content to a digital network, reaching
the subscribers. When this content is acquired from different networks and re-multiplexed to
new service bouquets, the total bit-rate of the newly created multiplexes is no longer under control.
The operator needs to build in a margin to allow for the fluctuating bit-rate and avoid a
network breakdown in the event that the total bandwidth capacity is exceeded.
A relatively new group of players in this market are broadband operators, who target
alternative broadband access methods to the home. Very rapid deployments of xDSL
connections in different regions of the world have enabled new players to enter the market
of other network operators. To increase the number of potential customers they are offering
video-services to be delivered on a broadband connection. However, the bandwidth
limitations of the “last-mile” to the subscriber are a challenge – only a single service at a time
can be streamed to the subscriber PC or Ethernet-based Set Top Box (STB).
Often the access-network allows only a limited bandwidth, meaning that re-compression of
the individual program stream is necessary. The purpose is to limit all programs to a fixed,
defined bit-rate that may have no relation to the bit-rate used when these services are
presented to the broadband provider.
Optimize the revenue on the existing bandwidth
Network operators offer different content based on unique business models. The
prioritization of the scarce bandwidth does not necessarily match the priorities already
allocated by the content providers using MPEG-2 compression for their distribution. The
operator must allocate the bandwidth resource to the services where he makes money. This
has particular relevance for bandwidth-scarce applications like DVB-T and DVB-S where
clear prioritizations are needed. For high-revenue earning services, the perceived quality by
the subscriber should be high, and for the low-revenue earners there must be compromises
when selecting the quality level.
Using Transrating to create the optimized Business-model for digital services
Controlling a video rate is conceptually simple: first decode, and then re-encode each
service, with the encoder set at a lower rate. However, a decoder and encoder for each
service quickly runs up to a large investment.
In addition, re-encoding does not always result in the best video quality. MPEG-2 encoding
is based on reducing the details in the picture that are less visible to the human eye.
However, when this technology is re-applied on a previously encoded video, artifacts
introduced to the video will be enhanced. Re-encoding will, in these applications, not be in a
position to take advantage of information about previous encoding processes in the
transmission chain.
A solution to this problem is the technology of transrating. This technology results in a
continued high video quality level by effectively re-using information about previous
encoding, whilst at the same time delivering a more cost-effective solution to the problem.
The following types of applications may benefit from using transrating.
- Re-multiplexing VBR streams: Relevant for both DVB-T, DVB-S and cable
environments. Single MPEG-2 programs encoded as a part of a statistical multiplex
may vary from 2 to 10 Mbit/s. Transrating will ensure that the operator does not have
to reserve expensive mainly unused overhead in the downstream network.
- Reducing Constant Bit Rate (CBR) streams. Still some services are distributed in
high CBR-rates that will not be cost-efficient to aggregate and re-transmit in a digital
network without lowering the rates. Relevant for all DTV applications.
- A mix of the two applications mentioned above (all applications), where VBR-services
are re-multiplexed with CBR-services. All rates are reduced and statistically
multiplexed in order to achieve better network utilization.
- IP-Streaming of MPEG-2 services: rate-limiting a single service, enabling a CBR at a
specified bit-rate, typically significantly lower than the average rate of the incoming
CBR or VBR-programs.
- Ensuring compliance with Service Level Agreements between network operators and
content providers by limiting/controlling the rates of single programs re-transmitted
over a DTV-network.
THE TECHNOLOGIES OF TRANSRATING
Transrating focuses on reducing the rate of a single service and/or a multiplex of services. A
service consists of many elements, such as SI/PSI data, audio, VBI, other data and finally
video. Video is seen from a bit-rate perspective as the largest consumer of bandwidth.
Reducing video-rates is the only real alternative that will have a significant effect on the total
service bandwidth. Transrating works specifically by reducing the video encoding rates and
does not affect the rest of the components in a service. A transrating device typically makes
use of one of two core processes. The first one, “time-shifting of MPEG-2 packets,” involves
the “smoothing” of the total Transport Stream rate without altering the actual video encoding
rates of the individual services. The second process is where rate reduction occurs for
individual services.
Time-shifting of MPEG-2 data packets
One of the main problems faced is the re-multiplexing of
services, previously encoded using Variable Bit-Rates (VBR). A
relatively easy way around this is to advance and delay MPEG-
2 packets in order to “equalize” the bit-rate level of the total
stream. Figure 1 shows five different services, encoded using
variable bit-rates. Horizontal arrows show how MPEG-2
packets can be advanced or delayed in the time-domain to
decrease the peaks.
Of course, there are limits as to how much such a process can
reduce the rate-peaks. These limits are defined by the Video
Buffering Verifier (VBV) specifications (1) in the MPEG-2
compression standard. The VBV specifies how much data the
decoder of the programs must be able to store in the local
memory before it converts the data into base band video. It is
based on the fact that data must arrive at the decoder before it
has to be removed from the buffer for decoding.
Figure 1 - Time-shifting of data-packets in an MPEG-2 VBR stream
This type of technology is only really effective under certain conditions. Single programs
must be VBR encoded, otherwise there is no point in shuffling the data packets. If a number
of programs use high rates simultaneously, shifting does not help. Time shifting only lowers
the maximum rate, not the average rate. Because the transrater must have finite delay and
cannot send packets it has not received, the maximum time that a packet can be advanced
is limited.
Time-shifting technology does not guarantee any output bandwidth. Also it does not help the
operator to adjust the bandwidth-consumption of the services to where the potential profit is
most likely to be found. Nevertheless it works well for the preparation of VBR-feeds for the
process of transrating, which will be explained next.
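As a rough illustration, the sketch below smooths the aggregate packet counts of a multiplex under these constraints. It is a minimal model, not from the paper: the slot granularity, link capacity and advance window are hypothetical simplifications.

```python
# A minimal sketch of time-shifting, assuming each service contributes an
# integer number of transport packets per time slot. Packets may be delayed
# freely, but advanced by at most MAX_ADVANCE slots: the transrater cannot
# send packets it has not yet received, and the decoder's VBV buffer bounds
# how early data may arrive. All numbers here are hypothetical.

MAX_ADVANCE = 2      # advance window, in time slots
LINK_CAPACITY = 18   # packets per slot on the output multiplex

def smooth(aggregate):
    """Clip per-slot packet counts to LINK_CAPACITY by shifting packets."""
    out = [0] * len(aggregate)
    delayed = 0                          # delaying is unbounded in this model
    for t, count in enumerate(aggregate):
        count += delayed
        delayed = 0
        if count > LINK_CAPACITY:
            excess = count - LINK_CAPACITY
            # Advance excess packets into earlier, under-used slots first.
            for back in range(1, MAX_ADVANCE + 1):
                if t - back < 0 or excess == 0:
                    break
                moved = min(LINK_CAPACITY - out[t - back], excess)
                out[t - back] += moved
                excess -= moved
            delayed = excess             # whatever remains is pushed later
            count = LINK_CAPACITY
        out[t] = count
    return out, delayed

# Aggregate of five VBR services, with simultaneous peaks in slots 2 and 3.
print(smooth([10, 12, 24, 21, 9, 8]))   # -> ([10, 18, 18, 18, 12, 8], 0)
```

Note that the peaks are only redistributed: the average rate is unchanged, and when several services peak at once the window fills up and the remainder must be delayed, which is why time-shifting alone cannot guarantee an output bandwidth.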
Transrating – Open Loop
The tools of rate-reduction are revealed when looking into the fundamentals of the encoding
process. A typical encoder can compress the video to a digital bit-rate specified by the
operator. The same tools can be used for transrating and therefore the basic processes of
encoding need to be explained.
The basics of MPEG-2 video encoding
The major steps in MPEG-2 encoding are:
- Dividing the pixels into macro-blocks of 16x16 pixels, which
are processed by the DCT as 8x8 blocks.
- Performing motion compensation by identifying temporal
redundancy.
- Finding spatial redundancy with DCT-encoding. Until now no
reduction of the video information has been made and the
process is therefore reversible.
- The reduction of the information generated so far is done
with quantization. This process removes details in the
picture, mostly in the high frequencies. In areas
with high spatial activity (pencil drawings, letters in a book
or other high-contrast areas) the details span both high and
low frequencies, so low frequencies will be reduced as well.
The logic of this is based on the limited perception of the
human brain, which filters out this information. The
quantization removes content in the picture that gives
the user little or no perceived degradation.
- Variable Length Encoding is the process of reducing the
mathematical redundancy.
The output of the variable length encoding is the core content of
the video in the transport stream. In addition to these principles,
each picture is treated as either an I, P or B-frame, which will be
explained below.
Figure 2 - The MPEG-2 Encoding process
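To ground these steps, the sketch below walks one 8x8 block through the lossy core of the pipeline. It is a minimal illustration, assuming a single flat quantizer step as a stand-in for MPEG-2's per-frequency quantization matrix scaled by quantiser_scale; the block values are arbitrary.

```python
# A minimal sketch of the lossy core of MPEG-2 intra coding, assuming a
# single flat quantizer step (real MPEG-2 scales a per-frequency
# quantization matrix by quantiser_scale). The DCT itself is reversible;
# only the rounding in quantize() discards picture detail.

import numpy as np
from scipy.fft import dctn, idctn

def quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    return np.round(coeffs / step)          # the only irreversible step

def dequantize(levels: np.ndarray, step: float) -> np.ndarray:
    return levels * step

block = np.arange(64, dtype=float).reshape(8, 8)     # stand-in 8x8 pixels

coeffs = dctn(block, norm="ortho")    # spatial redundancy -> frequency domain
levels = quantize(coeffs, step=16.0)  # high frequencies mostly round to zero
recon = idctn(dequantize(levels, 16.0), norm="ortho")

print("nonzero coefficients:", np.count_nonzero(levels), "of 64")
print("max pixel error:", np.abs(recon - block).max())
# More zero levels mean shorter runs for the variable-length coder, i.e.
# fewer bits, at the cost of a larger reconstruction error.
```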
The schematics of open loop transrating
The obvious way to obtain actual rate-reductions on a video-service is to perform re-
quantization. The process of quantization is (as seen in Figure 2) integral to MPEG-2
encoding. When a transrater device performs this process, it partially inverse encodes,
including the process of inverse quantization, and then decides on an increased level of
quantization, resulting in higher compression rates. On a flow chart this can be explained
like Figure 3 below.
Figure 3 - The Open Loop Algorithm
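A coefficient-level sketch of that open-loop step follows. It assumes flat quantizer steps and randomly generated stand-in levels for one block; a real transrater operates on the variable-length-decoded DCT levels of each macro-block.

```python
# A coefficient-level sketch of open-loop transrating, assuming flat
# quantizer steps and random stand-in levels for one 8x8 block. The
# transrater inverse-quantizes with the original step and re-quantizes
# with a coarser one; nothing measures or feeds back the added error,
# hence "open loop".

import numpy as np

def requantize(levels, old_step, new_step):
    coeffs = levels * old_step                        # inverse quantization
    return np.round(coeffs / new_step).astype(int)    # coarser quantization

rng = np.random.default_rng(0)
levels = rng.integers(-50, 50, size=(8, 8))           # stand-in DCT levels

coarser = requantize(levels, old_step=8, new_step=16)
print("nonzero before:", np.count_nonzero(levels),
      "after:", np.count_nonzero(coarser))
# More levels collapse to zero, so the variable-length coder emits fewer
# bits -- but the rounding error just introduced is never compensated.
```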
Such a transrating process reduces the bit-rate of the video, but not without trade-offs in the
video quality. The problem is that this relatively simple algorithm works in an “open loop”,
without any information about what effect it has on the image while transrating the individual
pictures in the stream. When building a transrater device on this algorithm, a defensive
rate-reduction strategy must be followed to avoid a picture breakdown from visual artifacts in
the video. This limits the device, since all streams carry different content
encoded by different encoders, and it will only be capable of a certain degree of rate-reduction.
The consequences of exceeding the maximum level of quantization will be very easy for
the end-consumer – the subscriber – to see. The complex reference structure of the MPEG
stream may cause programs to break up and show severe artifacts for up to half a second.
The risks when reducing the rates
The problems of open loop transrating can be explained based on a basic structure of the
MPEG-2 video stream, the so-called Group of Pictures (GOP). The stream consists of I, B and P-
pictures, typically in a syntax as seen below on Figure 4, showing a 15,2 GOP (15 pictures
in total, 2 B-pictures between each P or I frame). The arrows indicate how the P-pictures are
related to either the previous P-picture or the I-picture. The B-pictures are related to the I or
P picture before it and, if present, the P-picture after.
Figure 4 - The structure of the GOP-sequence and its anchor picture references
As mentioned previously, the re-quantification removes information and compromises the
picture quality. This will be referred to as introducing an error. Depending on the GOP-
sequence, it matters where this error is introduced. Errors introduced on B-
pictures will have little effect: they will only be shown for 1 or 2 frames, out of 25 frames during
one second. If the error is a block-artifact, the visual impact will be a short pulse. The
situation is however more serious if errors are applied on the P or I-pictures.
The I and the P-pictures are the anchor-pictures of the GOP, since the following P-pictures
always refer to the previous I or P-picture. So, if there is an error applied to one of the early
P-pictures in the GOP – or even the I-picture, which is starting the GOP – the error
continues throughout the GOP-sequence. If introduced on an I-picture, the block artifact will
be present in the video for approximately half a second, depending on the length of the GOP.
This problem for the video quality can also be seen in another way.
If an error on the I-picture is introduced due to re-quantization, it is easy to imagine that
errors also can be applied to the following P-pictures. The result is shown in Figure 5.
Figure 5 - The development of the error throughout the GOP (error-drifting)
Simply put, one bad decision at the start of the GOP is compounded by another one. The
sum of these errors, known as “breathing”, is perceived as degradation of picture quality throughout
the GOP, until a new GOP starts with an I-frame. The cycle repeats every half a second.
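The toy simulation below illustrates this drift. It assumes a 15,2 GOP and a fixed, hypothetical per-picture re-quantization error, and only tracks how long each picture's error remains visible; the numbers are illustrative, not from the paper.

```python
# A toy illustration of error drift ("breathing"), assuming a 15,2 GOP and
# a fixed, hypothetical error of 1.0 added by re-quantizing each picture.
# B-picture errors last a single frame; anchor (I/P) errors accumulate
# until the next I-picture restarts the chain.

GOP = "IBBPBBPBBPBBPBB"          # display order of a 15,2 GOP
PER_PICTURE_ERROR = 1.0

anchor_error = 0.0
for pic in GOP:
    if pic == "I":
        anchor_error = PER_PICTURE_ERROR        # new GOP: chain restarts
    elif pic == "P":
        anchor_error += PER_PICTURE_ERROR       # drift on top of its anchor
    own = PER_PICTURE_ERROR if pic == "B" else 0.0
    print(pic, anchor_error + own)
# The displayed error grows across the GOP and would snap back to 1.0 at
# the next I-picture -- the periodic "breathing" described above.
```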
Other factors that influence the quality level of Open Loop Transrating relate chiefly to
how the actual decisions on the level of re-quantification are made, mainly on the B-pictures.
How to apply open loop transrating
Open loop transrating is a method of actually lowering the bit-rate of video services, and it
should be preceded by the time-shifting of the MPEG-2 data packets.
Since I and P pictures are risky to re-quantify, the transrater device could limit its processing
range to only include B-pictures. That will prevent any major introduction of artifacts in the
decoded video. It will have a rate-reducing effect since most of the pictures in the stream are
B-pictures. On the other hand, the biggest pictures by far in the GOP – the ones carrying
most information – are the I and P pictures. Only very limited re-quantification can be done
on these pictures in order to avoid severe block artifacts in the decoded video.
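A minimal sketch of this defensive frame-type policy follows; the doubling factor and scale values are hypothetical, not from the paper.

```python
# A minimal sketch of the defensive open-loop policy described above, with
# a hypothetical doubling factor: B-pictures are re-quantized harder
# because their errors live for one frame only, while anchors are left
# untouched to avoid GOP-wide drift.

def pick_requant_scale(frame_type: str, old_scale: int) -> int:
    if frame_type == "B":
        return old_scale * 2    # safe: a B-picture is never referenced
    return old_scale            # I/P kept as-is in this conservative policy

for ftype in "IBBPBB":
    print(ftype, pick_requant_scale(ftype, old_scale=8))
```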
In conclusion, the typical rate-reduction made by transrater devices utilizing open loop
transrating is approximately 10-20%, depending on how the GOP has previously been encoded. As an
example, better quality encoders have better motion-estimation performance, meaning that
the B-pictures will already be small in size. In that case, the benefit of transrating is reduced.
Since this does not fulfill all the business objectives mentioned in the introduction, another
type of transrating can be applied, solving these problems by enabling a higher level of rate-
reduction. This is referred to in this paper as “Closed Loop Transrating”.
TRANSRATING - CLOSED LOOP
The name of this kind of transrating indicates that the main differentiator from the open loop
type is a “learning” loop. This algorithm aims to overcome the problems of the Open Loop
Transrating, which in effect means that the total rate-reductions of a stream will be greater.
The schematics of Closed Loop Transrating
Closed Loop is built on the Open Loop schematics, but it has two “learning” loops added:
- First of all, the re-quantification of the single macro-blocks is done based on
measurements of the output picture quality for the single picture: in order to optimize
the level of re-quantification, the first step is to actually measure the error applied.
- Secondly, the measured error is fed back, so that the re-quantification of the next
anchor picture can compensate for it.
Figure 6 - The Closed Loop Transrating algorithm. Based on Assuncao et al (2)
Compared to the Open Loop Transrating algorithm, it can be seen in Figure 6 that the basic
process is unchanged, but that several loops have been added. The box named “REF” is
the reference picture comparison between the previous I or P picture and the picture in
question. The function of REF is to store the error, which is fed back into the transrating
process for the next anchor picture.
Note that this process also makes use of both DCT encoding and decoding (DCT encoding
and inverse DCT encoding). According to the mechanics of the video encoding scheme
described previously, this takes the MPEG-2 stream almost back to the pixel-domain. This is
done to enable the reference-picture quality measurements. The error applied to the
actual anchor picture is known and transferred into the process of re-quantifying the next
anchor picture. In this way the re-quantification of the anchor picture will be able to
compensate for the error.
Although this is more complicated, it does solve some of the problems occurring when Open
Loop Transrating is applied. It cannot change the fact that reducing the picture size will
reduce the quality. This process is, however, capable of, first, knowing the exact level of
error that has been introduced on each frame and, second, compensating for this error in
the next anchor picture.
Figure 7 shows how the Closed Loop Transrating algorithm overcomes the risk of breathing,
since it monitors the introduced error on a frame-by-frame basis.
Figure 7 - Eliminating the error developing throughout the GOP
Even though errors are still introduced in each picture, the error will be compensated for in
the next anchor picture. As a consequence, the total error throughout the GOP-sequence
will not exceed the error introduced in the first anchor-picture. That has basically been
determined by the rate-reductions requested by the operator.
In short, the advantage of the Closed Loop Transrating is built on these two processes:
- The level of re-quantification on the individual anchor picture is chosen while
monitoring the actual error applied to the picture.
- All errors applied as a natural consequence of the re-quantification are eliminated in
the next anchor-picture, thereby eliminating the risk of a steadily accumulating
error throughout the GOP, which could cause video quality problems. A minimal
sketch of this feedback is given below.
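The toy model below contrasts the two behaviours. It assumes scalar residuals that stand in for the DCT data of successive anchor pictures, with the feedback term playing the role of the REF box; the residual values and quantizer step are hypothetical.

```python
# A toy model of why the feedback loop bounds drift. Scalar "residuals"
# stand in for the DCT data of successive anchor pictures: the decoder
# builds each anchor from the previous one plus a re-quantized residual.
# Open loop: every re-quantization error adds to the drift. Closed loop:
# the known drift (the REF box) is folded into the next residual first.

def quantize(x: float, step: float) -> float:
    return round(x / step) * step

def drift_per_anchor(residuals, step, closed_loop):
    err = 0.0                  # decoder reconstruction error vs. original
    trace = []
    for r in residuals:
        target = r - err if closed_loop else r
        err = err + quantize(target, step) - r
        trace.append(round(err, 2))
    return trace

residuals = [2.4] * 8          # eight P-pictures with equal residual energy
print("open:  ", drift_per_anchor(residuals, 1.0, closed_loop=False))
print("closed:", drift_per_anchor(residuals, 1.0, closed_loop=True))
# open:   [-0.4, -0.8, -1.2, ...] -- drift accumulates at every anchor
# closed: stays within +/-0.5 (one quantizer step) for the whole GOP
```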
How to apply Closed Loop Transrating
The main advantage of this algorithm compared to the Open Loop Transrating is that it
allows significant rate-reductions of the I and P pictures, without risking severe impairments
in the video quality. The algorithm will, based on the operator's inputs, determine
how large the rate-reductions on the I and P pictures can be.
Based on simulations with different types of stream, the Closed Loop Algorithm has shown
up to 50% video bit-rate reductions, without causing significant blocking artifacts on the
decoded video image. Compared to the Open Loop algorithm, the further reduction is
achieved on the I and P pictures, while the B pictures remain as compressed as they would
be using conventional Open Loop transrating.
A particular benefit of the Closed Loop algorithm is that it will be able to achieve significant
rate-reductions also on streams from high-quality encoders, which output smaller B pictures
than conventional encoders.
It is also clear that the complexity of the algorithm is greater than Open Loop. From an
implementation standpoint that means that there is a need for more powerful hardware. The
“learning” loops require extra processing compared to the Open Loop Transrating. Tests
have shown that if a Closed Loop Transrating process carried out on I and P pictures takes
100% processing power, the Open Loop needs approximately 40-50%. In effect, Closed
Loop Transrating requires twice as much processing hardware as the Open Loop
Transrating algorithm.
CONCLUSION
This paper has presented three different processes relevant to transrating. The conclusion is
as follows:
- Time-shifting of MPEG-2 data packets is a necessary tool for preparing all transrating
processes, as long as the incoming video-streams are in Variable Bit-Rate mode
(VBR). This will reduce the potential peaks when re-multiplexing independently
encoded VBR programs, but it will not guarantee a specific bit-rate and will still
require a margin of error to prevent overflow in the streams coming out of the
headend.
- Open Loop Transrating is based on re-quantification of the video and results in a
rate-reduction of the encoded video streams. Because the re-quantification runs
without feedback, the process can only deliver a limited bit-rate reduction, at most
10-20% on common streams.
- Closed Loop Transrating is similar to the Open Loop transrating, but it uses a double
“learning” loop, based on inputs from measurement of the video quality of the
specific, transrated frame and the frames before it. This complex algorithm enables
up to 50% video bit-rate reductions.
Transrating is a strong alternative to decoding/re-encoding programs, offering cost-
effective rate-reductions with minimal loss of video quality, especially when Closed Loop
Transrating is applied.
REFERENCES
1. ISO/IEC 13818-2 (1995). Generic Coding of Moving Pictures and Associated Audio
Information: Video. Annex C: Video Buffering Verifier.
2. Assunção, P.A.A. and Ghanbari, M., 1997. Transcoding of MPEG-2 video in the
frequency domain. Department of Electronic Systems Engineering, University of Essex.
IEEE.