This document provides an overview of bandwidth estimation in the Janus WebRTC server. It discusses:
- The importance of bandwidth estimation and congestion control for real-time media like WebRTC.
- Challenges in applying existing bandwidth estimation algorithms designed for endpoints (like GCC) to servers that don't generate their own media.
- An approach taken in Janus to develop a simpler, ad-hoc bandwidth estimation technique for servers based on acknowledged rate, losses, and delays - without relying on existing complex standards-track algorithms.
1. Bandwidth Estimation in the Janus WebRTC Server
Lorenzo Miniero
@lminiero@fosstodon.org
RTC.ON
13th October 2023, Kraków, Poland
2. Who am I?
Lorenzo Miniero
• Ph.D @ UniNA
• Chairman @ Meetecho
• Main author of Janus
Contacts and info
• lorenzo@meetecho.com
• https://fosstodon.org/@lminiero
• https://www.slideshare.net/LorenzoMiniero
• https://lminiero.bandcamp.com
3. Just a few words on Meetecho
• Co-founded in 2009 as an academic spin-off
• University research efforts brought to the market
• Completely independent from the University
• Focus on real-time multimedia applications
• Strong perspective on standardization and open source
• Several activities
• Consulting services
• Commercial support and Janus licenses
• Streaming of live events (IETF, RIPE, etc.)
• Proudly brewed in sunny Napoli(*), Italy
5. What is Bandwidth Estimation? (BWE)
• How much data you can send per unit of time in a session
• Important not to send more than you can...
• ... or more than the network can accommodate!
• ... or more than the receiver can handle!
• Not to be confused with congestion control
• BWE is how you estimate how much you can/should send
• Congestion control is aimed at avoiding and handling network congestion
• Failure to do those can cause problems
• Congestion on the network, failure to deliver data, etc.
8. Why is all this important for WebRTC?
• In TCP, congestion control just determines how long delivery will take
• Delivery of the same data may simply take more or less time
• Slowing down or speeding up depending on the congestion window
• With WebRTC, you don’t have the same luxury
• Data is flowing in real-time, and can’t be late!
• Data itself may need to “change” depending on bandwidth and congestion
• Live feedback required to exchange info among peers
• Depending on bandwidth, peer should adapt in real-time to avoid congestion
• How to adapt depends on the implementation (e.g., encoder vs. SFU)
12. A complex problem to solve
• Dedicated Working Group within the IETF
• RTP Media Congestion Avoidance Techniques (RMCAT)
• https://datatracker.ietf.org/wg/rmcat/about/
• A few different algorithms
• SCReAM (Self-Clocked Rate Adaptation for Multimedia)
• https://www.rfc-editor.org/rfc/rfc8298.html
• NADA (Network-Assisted Dynamic Adaptation)
• https://www.rfc-editor.org/rfc/rfc8698.html
• GCC (Google Congestion Control)
• https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc-02
14. How does it work in WebRTC implementations today?
• All libwebrtc-based implementations use GCC
• Feedback exchanged using Transport-wide Congestion Control (TWCC)
• https://datatracker.ietf.org/doc/html/draft-holmer-rmcat-transport-wide-cc-extensions
• Combined usage of RTP extension and RTCP feedback
• Media sender puts global sequence number in RTP extension
• Media receiver sends feedback on multiple RTP packets in ad-hoc RTCP message
• GCC makes use of feedback information to figure out available bandwidth
• Encoder bitrate tweaked on the fly to adapt to the estimate
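The TWCC exchange described above can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not the libwebrtc API: the sender stamps every outgoing packet with a transport-wide sequence number (a 16-bit value carried in an RTP header extension), and the receiver periodically reports which numbers arrived, and when, in a dedicated RTCP feedback message.

```python
class TwccSender:
    def __init__(self):
        self.next_seq = 0
        self.sent = {}  # transport-wide seq -> (send_time, size)

    def on_send(self, size, now):
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) & 0xFFFF  # 16-bit extension field
        self.sent[seq] = (now, size)
        return seq  # value written into the RTP header extension


class TwccReceiver:
    def __init__(self):
        self.received = []  # (seq, arrival_time)

    def on_receive(self, seq, now):
        self.received.append((seq, now))

    def build_feedback(self):
        # One RTCP feedback message covering all packets seen since
        # the previous feedback was sent
        fb, self.received = self.received, []
        return fb
```

Feeding the feedback back to the sender is what lets GCC (or anything else) correlate what was sent with what actually arrived.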
17. Enter the Janus WebRTC Server
Janus
General purpose, open source WebRTC server
• https://github.com/meetecho/janus-gateway
• Demos and documentation: https://janus.conf.meetecho.com
• Community: https://janus.discourse.group/
18. Bandwidth estimation in a WebRTC server
• libwebrtc + GCC work well for endpoints, but what about servers?
• Unless it’s an MCU, the server will not generate media itself
• No way to closely follow BWE with encoder tweaking
• Maybe start from studying how GCC works?
• But draft and paper are severely outdated...
• Current implementation is very different
• Maybe check out NADA too, since it’s the main output of RMCAT?
• It relies on a dedicated RTCP message no one implements, though
• https://www.rfc-editor.org/rfc/rfc8888.html
23. Understanding the problem (with help from friends!)
• SFU != browser
• WebRTC endpoints control the encoder, and can fine tune the output
• SFUs can only work with what they have (e.g., simulcast layers)
• GCC is apparently not a common choice for SFUs
• Many don’t see it work well with the SFU “packing” approach
• Poorly documented, overengineered, and too complicated
Why not something ad-hoc and “easier”, starting from key points?
• Acknowledged rate (estimate of bandwidth from packets the receiver actually got)
• A loss based controller (how you react to packet losses)
• A delay controller (how you predict congestion from delays)
• Probing to figure out if/when you can go up
26. Acknowledged rate
• The first rough estimate we can get from the acknowledged rate
• We know which packets we sent (and their size)
• TWCC RTCP feedback tells us which packets the receiver actually got
• From there, we can get the bitrate of what was received
• Very rough estimate, of course...
• It’s bound to what we sent, which may not be much (or enough)
• Still, a quite important piece of information
• It can give us a number to start from (go up/down from there)
• ... and maybe fall back to it when we hit a bump in the road?
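The computation itself is straightforward: intersect what we sent with what the TWCC feedback acknowledged, sum the sizes, and divide by the feedback window. A minimal sketch (hypothetical helper, not the Janus code):

```python
def acknowledged_rate(sent, feedback, window):
    """sent: dict of transport-wide seq -> (send_time, size_bytes);
    feedback: list of (seq, arrival_time) for packets the receiver
    acknowledged; window: feedback interval in seconds.
    Returns the acknowledged rate in bits per second."""
    acked = [sent[seq] for seq, _ in feedback if seq in sent]
    if not acked:
        return 0.0
    total_bytes = sum(size for _, size in acked)
    return total_bytes * 8 / window
```

For instance, 50 acknowledged packets of 1200 bytes over a 0.5 s window yields 960 kbps; and, as the slide notes, that number is capped by what we actually sent, not by what the network could carry.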
29. Loss based controller
• Using losses is “easy”
• If your peer is losing too many packets (threshold?), slow down
• Whatever the estimate currently is, it should decrease at that point
• It is a reactive mechanism, though
• You only do something (e.g., decrease estimate) after losses happened
• A video freeze or artifacts may have already taken place at that point
• Besides, losses may not be related to congestion, but something else
• Important not to overreact to occasional or “systemic” losses
• Different ways to get info on losses
• e.g., RTCP Receiver Reports vs. packets marked as not received in TWCC feedback
33. Delay based controller
• Using delays is less intuitive, but quite widespread recently (e.g., BBR)
• You analyze interarrival delay patterns of packets (feedback from the receiver)
• If inter-arrival delays at the receiver are higher than the inter-send ones, it may indicate buffering
• Buffering somewhere on the network is a symptom of incoming congestion
• Much more proactive than losses
• It’s a way to detect potential congestion before it occurs
• We can use that to adapt and avoid the congestion in the first place
• What kind of delay increase should be used as a trigger here, though?
• Again, important not to overreact to occasional delay fluctuations
• Interarrival delays available in TWCC feedback
• Important to also keep track of inter-send delays, though
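The core signal is the delay gradient: for consecutive acknowledged packets, compare the inter-arrival deltas (from TWCC feedback) with the inter-send deltas we recorded locally. A sketch, under the assumption that both timestamp lists refer to the same acknowledged packets in order:

```python
def delay_gradient(send_times, recv_times):
    """Average difference between inter-arrival and inter-send deltas
    for consecutive acked packets. A value persistently above zero
    suggests a queue is building up somewhere along the path."""
    deltas = [
        (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        for i in range(1, len(send_times))
    ]
    return sum(deltas) / len(deltas) if deltas else 0.0
```

A single noisy positive sample is not a trigger; as the slide says, the hard part is choosing how much sustained increase counts as impending congestion.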
38. A useful graph from BBR (thanks Sergio!)
BBR: Congestion-Based Congestion Control
https://dl.acm.org/doi/pdf/10.1145/3009824
39. Bandwidth probing
• All we’ve seen so far helps going “down”
• Losses, delay increases, etc.
• What if all’s good and we want to see how much we can go “up” instead?
• Acknowledged rate only helps up to a certain extent
• Bandwidth probing helps, here
• “Artificial” packets just used to add bits to the traffic
• If they don’t cause trouble, it means we can send more
• What to use as probes, though, and how much? How often?
• RTX and RTP padding are common choices for the “what”
• The rest is another one of the big secrets!
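One way to think about the "how much" question: size the probing budget on the gap between the current estimate and the bitrate we would like to reach. The sketch below is a deliberately naive illustration (packet size, interval, and the linear schedule are all assumptions, not what any specific implementation does):

```python
import math

def probing_schedule(current_bps, target_bps,
                     probe_size_bytes=1200, interval_s=0.25):
    """Number of padding/RTX probe packets to add in the next interval
    to cover the gap between the current estimate and the target.
    All parameters are illustrative assumptions."""
    gap_bps = max(0, target_bps - current_bps)
    bits_per_interval = gap_bps * interval_s
    return math.ceil(bits_per_interval / (probe_size_bytes * 8))
```

If the probes get acknowledged without triggering the loss or delay controllers, the estimate can be raised; if they do cause trouble, only disposable padding was lost.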
44. Integration in Janus WebRTC Server
• Janus is a WebRTC server, with a modular nature
• Different plugins will handle media differently
• Almost all stock ones don’t generate media, though
• Any BWE mechanism would need to have some sort of a “hybrid” nature
• Janus core must handle TWCC, monitor losses/delay, do probing, etc
• Still up to the Janus core to generate a BWE estimate too
• Up to plugins to “enforce” and react to it, though
• Additional controls for plugins
• Deciding when to enable BWE (e.g., makes no sense if no media out)
• Notifying bitrate target (e.g., probing needed for next simulcast layer)
Experimental pull request (PR)
https://github.com/meetecho/janus-gateway/pull/3278
48. Integration in Janus WebRTC Server (WIP)
• New BWE context as part of the PeerConnection object in the core
• Updated for any outgoing packet (inflight)
• Tracking of packets by TWCC sequence number (size, intersend delays, etc.)
• Awareness of nature of packet (regular vs. retransmission vs. probing)
• Also updated any time TWCC RTCP feedback is received
• Tracking of acknowledged rate, losses, interarrival delays
• Updating current estimate using available information
• PeerConnection loop used to generate BWE-related events
• Dynamic plugin triggers (e.g., enabling/disabling BWE)
• Probing on a regular basis, if needed
• Notifying plugins about current estimate
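Putting the pieces together, the per-PeerConnection BWE context tracks outgoing packets by TWCC sequence number and updates the estimate when feedback arrives. This is a hypothetical Python sketch of the shape of that state; names and structure do not match the actual (C) code in the Janus PR:

```python
class BweContext:
    """Illustrative per-PeerConnection BWE state: in-flight packets
    keyed by TWCC sequence number, with the kind of each packet
    (regular vs. retransmission vs. probing) tracked alongside."""
    REGULAR, RTX, PROBING = range(3)

    def __init__(self):
        self.inflight = {}   # twcc seq -> (send_time, size, kind)
        self.estimate = 0.0

    def on_sent(self, seq, size, kind, now):
        # Called for every outgoing packet
        self.inflight[seq] = (now, size, kind)

    def on_feedback(self, acked, window):
        # acked: list of (seq, arrival_time) from the TWCC RTCP message;
        # here only the acknowledged rate feeds the estimate, while the
        # real code also folds in the loss and delay controllers
        got = [self.inflight.pop(s) for s, _ in acked if s in self.inflight]
        acked_bytes = sum(size for _, size, _ in got)
        self.estimate = acked_bytes * 8 / window
        return self.estimate
```

The PeerConnection event loop would then periodically read `estimate`, schedule probing, and notify the plugin.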
52. Integration in VideoRoom plugin (WIP)
• As anticipated, it’s up to plugins to “use” the BWE info
• VideoRoom plugin a good place to start (it’s an SFU)
• Simplified assumptions in first integration (only one simulcast video stream)
• When new subscriber is for simulcast video, BWE is enabled via the core
• Bitrates of all publisher streams tracked in real-time
• Allows for knowing what the next target might be
• Core notifies current estimate on a regular basis (or in case of congestion)
• Plugin traverses subscribed streams, and checks published bitrate
• If bandwidth is not enough, goes down to lower substreams/layers
• Notifying core about target bitrate programmatically triggers probing, if needed
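The substream selection step can be sketched as a simple scan over the publisher's measured bitrates: pick the highest simulcast substream that fits within the current estimate, with some headroom. This is an illustrative simplification (the headroom factor is an assumption, and the actual VideoRoom logic also deals with temporal layers and keyframe switching):

```python
def pick_substream(estimate_bps, published_bitrates, headroom=0.9):
    """published_bitrates: measured bitrate of each simulcast
    substream, ordered low to high. Returns the index of the highest
    substream that fits in the estimate, falling back to the lowest
    one if nothing fits. Headroom factor is an assumption."""
    best = 0
    for i, bitrate in enumerate(published_bitrates):
        if bitrate <= estimate_bps * headroom:
            best = i
    return best
```

When the chosen substream is lower than the one the subscriber asked for, notifying the core of the desired target bitrate is what triggers probing to try to climb back up.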
55. Results of first experimentations
• Testing with 1 publisher and 1 subscriber
• Publisher generates audio and simulcast video stream
• Subscriber gets video from publisher, and aims for highest quality
• Simulating network constraints on Linux with comcast
• Go application that wraps tc and iptables
• https://github.com/tylertreat/comcast/
• Outcome is promising, even though doesn’t always work “great”
• A bit too aggressive going down in case of issues, at the moment
• Is sometimes a bit unstable when “staying there”
• Does manage to go back up when constraints are lifted, though
• Definitely a very good start!
61. What’s next?
• A lot!
• This first integration was mostly to create a testbed and play with BWE
• There’s a lot of work to be done to make it more reliable
• Will need fine tuning on pretty much everything (testing testing testing!)
• Chrome-based implementations and Firefox do things a bit differently
• Firefox doesn’t do TWCC for audio, so BWE doesn’t cover everything
• How to address that in plugins, which may be unaware?
• Integration in VideoRoom will need to step up too in general
• Current logic is quite crude
• What to do when there’s not enough bandwidth for everything?
• Send everything anyway, knowing it’ll fail?
• First m-lines are served, the others skipped?
• An API driven priority/preference mechanism?