WebRTC enables plug-in-free, real-time audio/video communication between web browsers in a peer-to-peer fashion. It builds on web technologies such as HTML5, JavaScript and WebSockets, and on protocols such as SRTP and SCTP, with NAT traversal handled by the ICE framework.
WebRTC is a free, open project, released by Google in 2011, that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs, and its adoption is growing steadily.
2. What’s WebRTC?
“WebRTC is a new front in the long war for an open and
unencumbered web.”
— Brendan Eich, inventor of JavaScript
3. What’s WebRTC?
• Web Real-Time Communication (WebRTC) is an upcoming standard that aims to enable
real-time communication among Web browsers in a peer-to-peer fashion.
• The WebRTC project (open source) aims to allow browsers to natively support
interactive peer-to-peer communication and real-time data collaboration.
• It provides a state-of-the-art audio/video communication stack in your browser.
4. Earlier Efforts
• Many web services already use RTC but need downloads, native apps or
plugins. These include Skype, Facebook (which uses Skype) and Google Hangouts
(which uses the Google Talk plugin).
• Downloading, installing and updating plugins can be complex, error-prone
and annoying.
• Plugins can be difficult to deploy, debug, troubleshoot, test and
maintain, and may require licensing and integration with complex,
expensive technology.
5. What does it change?
• No licenses or other fees.
• Integration via simple, standardized Web APIs.
• No proprietary plugins.
• No plugin-related security issues.
• No downloads, no installation.
• Just surf to the right address!
6. Aims of WebRTC
• A state-of-the-art audio/video media communication stack in your browser.
• Seamless person-to-person communication.
• A specification to achieve interoperability among Web browsers.
• A common platform for real-time communication, so that your PC,
your phone and your TV can all communicate.
• A low-cost and highly efficient communication solution for enterprises.
7. WebRTC Support
• WebRTC is coming to almost all desktop web browsers by the end of 2012:
1. Chrome 21
2. Opera 12
3. Firefox 17
4. IE (via Chrome Frame)
• Mobile browser support will follow.
• Native C++ versions of the WebRTC stack are also available.
9. Architecture
• At startup, the browsers do not know each other.
• JavaScript mediates the setup process through a server.
• Media flows through the shortest possible path to minimize latency.
10. Key Features
• Media Streams: access to the user’s camera and mic.
• Peer Connection: easy audio/video calls.
• Data Channels: P2P application data transfer.
11. WebRTC API Stack View
(Identical layered stack on each peer; each layer runs over the one below it.)
WebRTC App
DataChannel API / PeerConnection API
DTLS
SRTP/SCTP
ICE
UDP
12. Media Streams
• A media stream represents a media source containing one or more synchronized
media stream tracks.
• A media stream can be converted to an object URL and passed to a <video> element.
• Use the getUserMedia API to get a media stream for the webcam/mic.
Demos:
http://webcamtoy.com/app/ (uses Canvas and WebGL)
http://bouncyballs.org/ (uses Canvas and WebGL)
http://neave.com/tic-tac-toe/ (uses Canvas)
14. getUserMedia
• A MediaStream is an abstract representation of an actual data stream of audio
or video.
• It serves as a handle for managing actions on the media stream.
• A MediaStream can be extended to represent a stream that either comes from
(remote stream) or is sent to (local stream) a remote node.
• A LocalMediaStream represents a media stream from a local media-capture
device (such as a webcam or microphone).
15. getUserMedia
• The MediaStream represents synchronized streams of media. For example, a
stream taken from camera and microphone input has synchronized video and
audio tracks.
• The getUserMedia() method takes three parameters:
• A constraints object.
• A success callback which, if called, is passed a LocalMediaStream.
• A failure callback which, if called, is passed an error object.
• In Chrome, the URL.createObjectURL() method converts a LocalMediaStream to a
Blob URL which can be set as the src of a video element.
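A constraints object for the first parameter can be sketched as follows. These are illustrative values only; the `mandatory` sub-constraint form shown for resolution was the Chrome syntax of this era and constraint keys have changed in later versions of the spec.

```javascript
// Illustrative getUserMedia constraints objects (sketches only; exact
// constraint keys differ across browser versions).
var audioVideo = { audio: true, video: true };

// 2012-era Chrome also accepted "mandatory" sub-constraints, e.g. to
// request a minimum capture resolution.
var hdVideo = {
  audio: false,
  video: { mandatory: { minWidth: 1280, minHeight: 720 } }
};
```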
16. getUserMedia
<video id="sourcevid" autoplay></video>
<script>
  var video = document.getElementById('sourcevid');
  // Vendor-prefixed in 2012-era browsers.
  navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
  // The first argument is a constraints object, not the string 'video'.
  navigator.getUserMedia({ video: true }, success, error);
  function success(stream) {
    video.src = window.URL.createObjectURL(stream);
  }
  function error(e) {
    console.log('getUserMedia failed: ', e);
  }
</script>
18. What a WebRTC App Needs To Do
• Get streaming audio, video or other data.
• Get network information such as IP addresses and ports, and exchange it
with other WebRTC clients (known as peers).
• Coordinate signaling communication to report errors and initiate or close
sessions.
• Exchange information about media and client capabilities, such as resolution
and codecs.
19. RTCPeerConnection
• API for establishing audio/video calls (“sessions”).
• Built in:
1. Peer-to-peer connectivity
2. Codec control
3. Encryption
4. Bandwidth management
• Communications are coordinated via a signaling channel provided by scripting
code in the page via the Web server, for instance using XMLHttpRequest or
WebSocket.
20. RTCPeerConnection
In the real world, WebRTC needs servers so that the following can happen:
• Users discover each other and exchange real-world details such as names.
• WebRTC client applications (peers) exchange network information.
• Peers exchange data about media, such as video format and resolution.
• WebRTC client applications traverse NAT gateways and firewalls.
24. Setting Up a Session
• To start a session a client needs:
1. A local session description (describes the configuration of the local side).
2. A remote session description (describes the configuration of the remote side).
3. Remote transport candidates (describe how to connect to the remote side).
• These parameters are exchanged via signaling and communicated to the browser
via the PeerConnection API.
• The initial session description sent by the caller is called an “offer”, and the
response from the callee is called an “answer”.
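The offer/answer exchange can be sketched with a toy model in plain JavaScript. This is not the real RTCPeerConnection API and generates no real SDP; the descriptions are plain objects so that only the message flow between caller and callee is visible.

```javascript
// Toy model of the offer/answer exchange (no real RTCPeerConnection).
function Peer(name) {
  this.name = name;
  this.localDescription = null;
  this.remoteDescription = null;
}

Peer.prototype.createOffer = function () {
  // A real browser would generate an SDP blob here.
  this.localDescription = { type: 'offer', sdp: 'v=0 (from ' + this.name + ')' };
  return this.localDescription;
};

Peer.prototype.createAnswer = function () {
  if (!this.remoteDescription || this.remoteDescription.type !== 'offer') {
    throw new Error('cannot answer before receiving an offer');
  }
  this.localDescription = { type: 'answer', sdp: 'v=0 (from ' + this.name + ')' };
  return this.localDescription;
};

Peer.prototype.setRemoteDescription = function (desc) {
  this.remoteDescription = desc;
};

// The "signaling channel" here is simply passing objects around; a real
// app would send them through a server.
var caller = new Peer('caller');
var callee = new Peer('callee');

callee.setRemoteDescription(caller.createOffer());
caller.setRemoteDescription(callee.createAnswer());
```

After the exchange, each side holds both a local and a remote description, which is exactly the state a real PeerConnection needs before media can flow.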
27. Signaling
• A mechanism to coordinate communication and to send control messages.
• Signaling methods and protocols are not specified by WebRTC; they are chosen
by the application developer.
• Signaling is used to exchange three types of information:
• Session control messages: to initialize or close communication and report
errors.
• Network configuration: what are my computer’s IP address and port?
• Media capabilities: what codecs and resolutions can be handled by my
browser and the browser it wants to communicate with?
28. Signaling
• The original idea for exchanging session description information was in the form
of Session Description Protocol (SDP) “blobs”.
• This approach had several shortcomings, some of which would be difficult to
address.
• The IETF is standardizing the JavaScript Session Establishment Protocol (JSEP).
• JSEP provides the interface an application needs to deal with the negotiated
local and remote session descriptions.
• The JSEP approach leaves the responsibility for driving the signaling state
machine entirely to the application.
• XMLHttpRequest works great for sending requests, but receiving them isn’t as
easy.
• App Engine’s Channel API provides the server-to-client message path.
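The send/receive asymmetry above can be sketched as a polling loop. Everything here is illustrative: `fetchPending` is a hypothetical helper standing in for an XMLHttpRequest poll against a signaling server, and the fake transport replaces the server itself; a push channel (App Engine's Channel API, or a WebSocket) avoids polling entirely.

```javascript
// Sending a signaling message is one request; receiving requires
// repeated polling unless a push channel is available. `transport` is
// a stand-in for an XMLHttpRequest-based client.
function pollOnce(transport, onMessage) {
  var messages = transport.fetchPending(); // hypothetical helper
  messages.forEach(onMessage);
  return messages.length;
}

// Fake in-memory transport standing in for a signaling server.
var fakeServer = {
  queue: [{ type: 'offer', sdp: 'v=0' }],
  fetchPending: function () {
    var pending = this.queue;
    this.queue = [];
    return pending;
  }
};

var received = [];
pollOnce(fakeServer, function (msg) { received.push(msg); });
```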
31. NAT Traversal
• Suffice it to say that the STUN protocol and its extension TURN are used by the
ICE framework to enable RTCPeerConnection to cope with NAT traversal.
• Initially, ICE tries to connect peers directly, with the lowest possible
latency, via UDP. In this process, STUN servers have a single task: to enable
a peer behind a NAT to find out its public address and port.
33. NAT Traversal
• If UDP fails, ICE tries TCP: first HTTP, then HTTPS.
• If a direct connection fails, in particular because of enterprise NATs and
firewalls, ICE uses an intermediary (relay) TURN server.
• In other words, ICE will first use STUN with UDP to connect peers directly and,
if that fails, will fall back to a TURN relay server.
• The expression “finding candidates” refers to the process of finding network
interfaces and ports.
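Each candidate found this way is described by an ICE candidate attribute (RFC 5245 syntax). A minimal parser for its core fields, with a made-up sample line, looks like this:

```javascript
// Parse the core fields of an ICE candidate attribute (RFC 5245).
function parseCandidate(line) {
  var parts = line.replace(/^candidate:/, '').split(' ');
  return {
    foundation: parts[0],
    component: parseInt(parts[1], 10),  // 1 = RTP, 2 = RTCP
    transport: parts[2].toLowerCase(),  // usually "udp"
    priority: parseInt(parts[3], 10),
    address: parts[4],
    port: parseInt(parts[5], 10),
    type: parts[7]  // "host", "srflx" (via STUN) or "relay" (via TURN)
  };
}

// Illustrative sample: a server-reflexive candidate discovered via STUN.
var sample =
  'candidate:842163049 1 udp 1677729535 203.0.113.7 34567 typ srflx';
var c = parseCandidate(sample);
```

A "srflx" type here means the address/port pair is the peer's public mapping as seen by the STUN server, which is exactly the information a peer behind a NAT cannot learn on its own.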
35. RTCDataChannel
• As well as audio and video, WebRTC supports real-time communication for
other types of data.
• The RTCDataChannel API will enable peer-to-peer exchange of arbitrary data,
with low latency and high throughput.
• The API has several features to make the most of RTCPeerConnection and
enable powerful and flexible peer-to-peer communication.
36. RTCDataChannel
• Stream Control Transmission Protocol (SCTP) encapsulated in DTLS is used to
carry DataChannel data.
• The DataChannel API is bidirectional, which means that each DataChannel bundles
an incoming and an outgoing SCTP stream.
• Encapsulating SCTP over DTLS over ICE over UDP provides a NAT traversal
solution together with confidentiality, source authentication and
integrity-protected transfers.
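Applications sending large payloads over a data channel typically add their own framing on top. The sketch below is not part of the WebRTC API; it only illustrates the idea of splitting an arbitrary message into fixed-size, sequence-numbered chunks and reassembling them in order.

```javascript
// Split an arbitrary string into fixed-size, numbered chunks, the kind
// of application-level framing one might use on top of a data channel.
function chunk(message, size) {
  var chunks = [];
  for (var i = 0; i < message.length; i += size) {
    chunks.push({ seq: chunks.length, data: message.slice(i, i + size) });
  }
  return chunks;
}

// Reassemble chunks into the original message, tolerating reordering.
function reassemble(chunks) {
  return chunks
    .slice()
    .sort(function (a, b) { return a.seq - b.seq; })
    .map(function (c) { return c.data; })
    .join('');
}

var pieces = chunk('hello datachannel', 4);
var roundTrip = reassemble(pieces);
```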
37. Security
There are several ways a real-time communication application or plugin might
compromise security. For example:
• Unencrypted media or data might be intercepted en route between browsers, or
between a browser and a server.
• An application might record and distribute video or audio without the user knowing.
• Malware or viruses might be installed alongside an apparently innocuous plugin or
application.
38. Security
WebRTC has several features to avoid these problems:
• WebRTC implementations use secure protocols such as DTLS and SRTP.
• Encryption is mandatory for all WebRTC components, including signaling
mechanisms.
• WebRTC is not a plugin: its components run in the browser sandbox rather than
in a separate process, require no separate installation, and are updated
whenever the browser is updated.
• Camera and microphone access must be granted explicitly, and when the camera
or microphone is running this is clearly shown by the user interface.
39. Current Limitations
• Cloud infrastructure: WebRTC requires servers for four tasks: user discovery,
signaling, NAT/firewall traversal, and relaying media when a direct connection fails.
• Native applications: WebRTC enables real-time communication between web browsers.
It is not a software development kit for native iOS, Android or desktop applications.
• Multiparty conferencing: WebRTC is peer-to-peer by nature, which makes it
extremely scalable, but it is very inefficient when setting up communication
among more than two end users.
• Recording: WebRTC does not support recording as of now.
40. Conclusion
• The APIs and standards of WebRTC can democratize and decentralize tools
for content creation and communication: telephony, gaming, video
production, music making, news gathering and many other applications.
• WebRTC will have a great impact on the open web and on interoperable browser
technologies, including existing enterprise solutions.
41. References
• Salvatore Loreto and Simon Pietro Romano, “Real-Time Communications in the
Web”, IEEE paper, October 2012.
• IETF.org
• The WebRTC book by Alan B. Johnston and Daniel C. Burnett: webrtcbook.com
• Video of Justin Uberti’s WebRTC session at Google I/O, 27 June 2012.
• webrtc.org
• Google Developers Google Talk documentation, which gives more information
about NAT traversal, STUN, relay servers and candidate gathering.
• WebPlatform.org
(http://docs.webplatform.org/wiki/concepts/internet_and_web/webrtc)