Learn how to develop and implement WebRTC using the new IETF and W3C standards. This session will provide an overview of the concepts and structure of WebRTC and how it is defined in the emerging standards. The session will bring everyone up to a clear understanding of WebRTC for the technical discussions in the next session.
This workshop will include specific examples of how to code and create real-time interactions. The session will be interactive, allowing for open and clear discussion.
AstriCon 2015: WebRTC: How it Works, and How it Breaks (Mojo Lingo)
WebRTC is an exciting new technology, perhaps the most exciting thing to happen to voice communication since the invention of Voice over IP. With WebRTC, we are no longer limited to a disjointed communication experience with poor quality audio on antiquated networks. Now we have the ability to put high-definition audio and video where it will have the most impact: right in line with the business processes that benefit the most from it.
This session will present an overview of how WebRTC works, reviewing both the network services that support it and the user-facing software that delivers it. We will look at how Asterisk can be used to give WebRTC additional capabilities that aren’t possible with browsers alone, and how to deploy Asterisk to get the most out of this powerful combination.
As with all new technology, however, there are rough edges. In the final part of this presentation, we will look at the common ways that WebRTC can break down, from technical deployment problems to user interface and design issues. These lessons are drawn from real-world experience deploying WebRTC over the last 3 years and multiple applications that are in production today.
ConnectJS 2015: Video Killed the Telephone Star (Mojo Lingo)
When you want to talk to someone, where do you turn? Skype? Slack or HipChat? Maybe even an old-fashioned telephone? As great (or not) as these are, they all fail in one important way: Context. As developers, why don’t we enable our users to communicate where they are doing everything else, right inside the browser or mobile app? The technology to create contextual communications is evolving quickly with exciting technologies like WebRTC. This talk is about how to use WebRTC with Rails to enhance almost any application with voice, video & text. We will cover some of the ways communications can be best employed, including design considerations, as well as available Open Source projects. We will feature a recently released Rails Engine called Talking Stick that makes adding WebRTC to any Rails app a snap.
Now Hear This! Putting Voice, Video, and Text into Ruby on Rails (Mojo Lingo)
When you want to talk to someone, where do you turn? Skype? Slack or HipChat? Maybe even an old-fashioned telephone? As great (or not) as these are, they all fail in one important way: Context. As developers, why don’t we enable our users to communicate where they are doing everything else, right inside the browser or mobile app? The technology to make contextual communications is evolving quickly with exciting technologies like WebRTC, speech recognition and natural language processing. This talk is about how to apply those building blocks and bring contextual communication to your apps.
Presented at RailsConf 2015 in Atlanta, GA
WebRTC has had a tough 3 or 4 years. But it's gone through a rebirth. Node.js developers are a perfect match for the technology. Come and play with it!
Talk given at Cloud Expo / WebRTC Summit in Santa Clara
A presentation for Kamailio World 2017 in Berlin: how Open Standards and Open Source affect national public radio broadcasting. My personal views and opinions, plus some information about Project IrisBroadcast.
How can you best develop applications using the Session Initiation Protocol? At this presentation at the Communications Developer Conference on September 16, 2008, Voxeo CTO RJ Auburn explained building blocks used to build voice applications, discussed VoiceXML, CCXML and JSR 289 SIP Servlets and then gave a demonstration of voice mashups with Twitter using first CCXML and then SIP Servlets (in Java).
WebRTC gives us a way to do real-time, peer-to-peer communication on the web. In this talk, we'll go over the current state of WebRTC (both the awesome parts and the parts which need to be improved) as well as what could come in the future. Mostly though, we'll take a look at how to combine WebRTC with other web technologies to create great experiences on the front-end for real-time, p2p web apps.
SIP and DNS - federation, failover, load balancing and more (Olle E Johansson)
SIP uses DNS to find a server for a specific URI, such as sip:alice@example.com. With DNS, a SIP service can provide failover, load balancing and much more. SIP without DNS is a broken solution. SIP and DNS rock!
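The lookup described here is standardized in RFC 3263: the client resolves SRV records for the domain and tries targets in priority order, using weight to balance load within a priority group. A minimal sketch in Python (the record data is hypothetical, and the weight handling is simplified to deterministic ordering rather than RFC 2782's weighted randomization):

```python
# Sketch of how a SIP client might order DNS SRV results for failover.
# Real code would first query e.g. _sip._udp.example.com via a resolver
# library; the records below are made up for illustration.
from dataclasses import dataclass

@dataclass
class SrvRecord:
    priority: int   # lower value = try first
    weight: int     # load-balancing share within a priority group
    port: int
    target: str

def failover_order(records):
    """Lowest priority first; within a priority, higher weight first
    (a simplification of RFC 2782's weighted random selection)."""
    return sorted(records, key=lambda r: (r.priority, -r.weight))

records = [
    SrvRecord(20, 0,  5060, "backup.example.com."),
    SrvRecord(10, 60, 5060, "sip1.example.com."),
    SrvRecord(10, 40, 5060, "sip2.example.com."),
]

for r in failover_order(records):
    print(r.target)  # sip1, then sip2, then backup
```

Because the client walks this list on failure, the backup host only receives traffic when both priority-10 servers are unreachable, which is exactly the failover behavior the abstract refers to.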
Rome 2017: Building advanced voice assistants and chat bots (Cisco DevNet)
If it takes minutes to code a simple bot, building professional bots is quite a challenge. Soon you realize you need serious programming and API architecture experience, but also bot-specific skills. In this session, we'll first show the code of advanced chat and voice interactions, then explore the challenges faced when building advanced bots (context storage, NLP approaches, bot metadata, OAuth scopes), and discuss interesting opportunities from the latest industry trends (bot platforms, serverless, microservices). This talk is about showing the code and sharing lessons learned.
Alberto Gonzalez Trastoy was among the speakers at Agora’s Real-Time Engagement 2020 Conference. His presentation was about what makes building a live video application more complicated than a regular web app. Isn’t WebRTC supposed to handle everything for you? Alberto describes some of the unexpected nuances and challenges a web developer may encounter when building real-time engagement and communications applications, including networking, interoperability, scalability and security. He also discusses other complexities in building WebRTC applications and offers tools and alternatives to solve them.
2015 update: SIP and IPv6 issues - staying Happy in SIP (Olle E Johansson)
What's the state of SIP and IPv6?
- An update I gave at the Netnod spring Meeting 2015.
Nothing much is happening, despite the fact that we have proven real issues with dual stacks in SIP.
Bart Salaets is a Solutions Architect at F5 Networks, focusing specifically on service providers in the EMEA region. Prior to this, he held IP consulting and technical leadership positions at Juniper Networks, Redback Networks and Alcatel-Lucent, giving him more than 15 years of experience in both fixed and mobile broadband IP network design. Bart was born and still lives in Belgium, and holds a Master's degree in Electrical Engineering from the Catholic University of Leuven and an MBA from Flanders Business School in Antwerp, Belgium.
Topic of Presentation: Optimising TCP in today’s changing network environment
Language: English
Abstract: Juggling performance across wired, wireless and Wi-Fi networks is hard, as each of these paths has very different characteristics when it comes to TCP. Tuning the TCP stack for the varying degrees of packet loss, latency and congestion on the different connection types is a challenge. This session will cover tuning several aspects of your network and the underlying TCP stack to deliver an optimized application experience for all users. Topics will include:
Choosing the correct Congestion Control algorithm
Optimizing TCP with techniques like TCP buffering and adjusting TCP window sizes
Rate-based pacing to help multiple request/responses over a single connection
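The window-sizing point in this list ultimately comes down to covering the path's bandwidth-delay product (BDP). A quick sketch of the arithmetic, with illustrative link figures rather than numbers from the talk:

```python
# To keep a TCP connection's pipe full, the advertised window (and the
# socket buffers backing it) must cover the bandwidth-delay product:
# the number of bytes that can be "in flight" on the path at once.

def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> int:
    """Bytes in flight needed to saturate the path."""
    return int(bandwidth_bits_per_s * rtt_s / 8)

# A 100 Mbit/s path with a 60 ms round-trip time (e.g. a mobile link)
# needs roughly a 750 kB window; a legacy 64 KiB window would cap
# throughput at under a tenth of the link rate.
print(bdp_bytes(100e6, 0.060))  # 750000
```

On Linux, the corresponding knobs are the net.ipv4.tcp_rmem and net.ipv4.tcp_wmem sysctls, which bound how far the kernel's buffer autotuning can grow.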
The WAN Automation Engine (WAE) is a software platform that provides multivendor and multilayer visibility and analysis for service provider and large enterprise networks. It plays a critical role in answering key questions of network resource availability, and when appropriate can automate and simplify Traffic Engineering mechanisms such as RSVP-TE and Segment Routing. This session will focus on use-cases and APIs for developers.
Watch the DevNet 2035 replay from the Cisco Live On-Demand Library at: https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=92720&backBtn=true
Check out more and register for Cisco DevNet: http://ow.ly/jCNV3030OfS
Designing tourism experiences - promotion and positioning campaigns p... (ID4you)
Presentation of social media campaigns and strategies for the tourism sector, focusing on the tourism experiences that social networks and smartphones make possible for users across different travel destinations.
Talk delivered for the Ministerio de Turismo de la Nación (Argentina's Ministry of Tourism).
Corporate social networks, communication networks, a new communication platform on the web, a single platform for managing your online communications.
Forex Is a Fraud - Luis Gonzalez Espino (frogshole6)
Luis Gonzalez Espino, Forex fraud. Among the AUI's recommendations: always use secure connections and never make purchases on unidentified pages.
TADS Developer Summit WebRTC Dan Burnett (Alan Quayle)
Dan Burnett, editor of the WebRTC specification and author of The WebRTC Book, provides an excellent tutorial on WebRTC at TADS, 21-22 Nov 2013 in Bangkok.
Short introduction to WebRTC at the Amsterdam WebRTC Meetup, March 26, 2014 (Bart Uelen)
Introduction to WebRTC presented by Bart Uelen at the first Amsterdam WebRTC Meetup on Wednesday, March 26, 2014 in the Westergasfabriek in Amsterdam. Let's get together and talk about how to use WebRTC browser-to-browser technology!
The presentation also includes the wrap-up slides.
WebRTC enables real-time communication through the web, while SIP is a protocol commonly used for initiating and maintaining real-time communication sessions, particularly in telephony networks.
Bridging WebRTC with SIP is essential in many industries, such as remote healthcare, education, and customer support, where current modern video solutions must communicate with telephony infrastructure at scale. The integration of WebRTC-based video conferencing with legacy SIP-based systems enables seamless communication across platforms and devices. In this presentation, we will talk about lessons learned and explore different approaches to bridging WebRTC and SIP, discussing their advantages and disadvantages.
WebRTC enables plugin-free, real-time communication between web browsers, providing effective peer-to-peer audio/video media communication by means of technologies such as WebSockets, HTML5 and JavaScript, protocols such as SRTP and SCTP, and NAT traversal via the ICE framework.
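To make the ICE piece concrete: each peer gathers transport candidates and exchanges them as SDP attributes, and the remote side must parse those attributes before connectivity checks can run. A minimal parser following the RFC 5245 grammar (the sample candidate line itself is made up):

```python
# Minimal parser for an ICE candidate attribute as it appears in SDP
# (RFC 5245): "candidate:<foundation> <component> <transport>
# <priority> <address> <port> typ <type> ...".

def parse_candidate(line: str) -> dict:
    # Strip the optional "a=" SDP prefix and the "candidate:" label.
    body = line.removeprefix("a=").removeprefix("candidate:")
    fields = body.split()
    return {
        "foundation": fields[0],
        "component": int(fields[1]),  # 1 = RTP, 2 = RTCP
        "transport": fields[2].lower(),
        "priority": int(fields[3]),
        "address": fields[4],
        "port": int(fields[5]),
        "type": fields[7],            # fields[6] is the literal "typ"
    }

c = parse_candidate(
    "a=candidate:842163049 1 udp 1677729535 192.0.2.15 41794 typ srflx")
print(c["type"], c["address"], c["port"])
```

A "srflx" (server-reflexive) candidate like this one is the public address a STUN server observed for the peer, which is how ICE punches through the NATs mentioned above.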
MobileTea Boston presentation on getting started with WebRTC. Includes:
*References on major WebRTC deployments
*WebRTC use cases
*What WebRTC is
*Intro to the WebRTC APIs
*How to start developing with WebRTC
*WebRTC scaling challenges
*Chad's favorite WebRTC resources
WebRTC Workshop - What is (and isn't) WebRTC (Oracle)
A brief presentation on WebRTC and standards delivered in Istanbul at TAD Summit, in a dedicated WebRTC workshop. Topics include the current status of the WebRTC standard and a look at WebRTC-supporting browsers on both desktop and mobile devices.
Boost JBoss AS7 with HTML5 WebRTC for Real Time Communications (telestax)
WebRTC, for Web Real-Time Communications, is a free, open project to enable rich, high-quality real-time communications applications to be developed in the browser via simple JavaScript APIs and HTML5. Major browsers already support it natively, or soon will. This talk will present an overview of WebRTC and how it is already revolutionizing the Web and changing the telco industry. A couple of emblematic use cases will also be explored to show the potential of WebRTC in different enterprise markets, and a live demo of a 1-to-1 WebRTC video conference will be performed, followed by a detailed explanation of how it was achieved and what JBoss AS7 additions were required to make it work.
Architecting your WebRTC application for scalability, Arin Sime (Alan Quayle)
TADSummit 2022 8/9 Nov Aveiro Portugal
Architecting your WebRTC application for scalability
Arin Sime, CEO/Founder at WebRTC.ventures and AgilityFeat, & Alberto González Trastoy, CTO at WebRTC.ventures | Software/Telecom Engineer.
There are many ways to architect your live video application with WebRTC. Open source and CPaaS media servers are one consideration, but far from the only decision you’ll need to make.
In this session we will give an update on the most popular media servers to consider, as well as go deeper into scalability with topics such as deployment using Kubernetes/Docker, persistence when using multiple SFU/MCU servers, and optimizations available in WebRTC for better performance.
With the publication of the WebRTC specification as a Candidate Recommendation, the work has hit a new milestone. In this session Dan will talk about what this means for WebRTC 1.0, including feature stability and testing, and also what is being considered as work beyond version 1.0.
Tsahi is going to make sure you've all got the basic fundamentals of WebRTC under your belt. It's a 101 tutorial, a baseline; you may have heard it before, but we want no one left behind. Already an expert? Then consider this a 20-minute nap time!
In this session, we cover the basics of what WebRTC is, what network components participate in a WebRTC service and where to find the right resources to learn more about WebRTC.
FreeSWITCH, FreeSWITCH Everywhere, and Not A Phone In Sight (Mojo Lingo)
That smartphone in your pocket has already replaced your watch, your camera, several volumes of books, whatever music device you may carry, and even in many ways your desktop computer. As technology continues to gobble up and replace legacy devices with ever smaller hardware and ever more capable software, why are we still stuck with a DTMF keypad? As the task of communicating continues to move onto the web and into apps, what will happen to the PSTN?
Today big carriers like AT&T are making plans to finally shut down the copper networks entirely. Let’s talk about the role of FreeSWITCH in this future world, and how can it enable the next generation of communications applications.
In the film “Her” the protagonist falls in love with his computer, an artificial intelligence operating system. While most of us already love Asterisk, things really get interesting when we give Asterisk a voice, and the ability to listen to our instructions.
Fortunately for us, Asterisk has impressive capabilities for adding speech recognition and text-to-speech to our calls. This talk will cover many facets of speech applications with Asterisk. We will look at the various commercial and open source speech engines available, as well as how to integrate them into Asterisk. We will look at ways prompts and grammars can be designed to give the caller the best possible experience. We will hear samples of the right and hilariously wrong ways speech can be used. We will cover the various types of speech recognition that exist today (grammar-driven, transcription, hotword and voice biometrics) and how each should be applied.
Finally, we’ll show how these pieces come together to make it possible to build something that (for a brief moment) passes as intelligent. Maybe.
Tipping the Scales: Measuring and Scaling Asterisk (Mojo Lingo)
In this presentation delivered at AstriCon 2014, we look at answering the question: "Does Asterisk scale?!" The answer is nuanced. The presentation includes terminology and tools, as well as some notes on methodology. In the final few slides, we look at several ways Asterisk is employed (B2BUA, transcoding, conferencing, recording) and the impact each feature has on scaling Asterisk. Using the tools and methodologies presented here, I encourage everyone to test their own voice applications and answer the question for themselves: Does It Scale?!
Opening presentation given at AdhearsionConf 2013. This talks about a vision for the future of the Adhearsion project as well as the future of real-time communications applications.
An overview of the technology options for adding speech to web applications. It covers the HTML5 Speech Input API for speech recognition, using the Audio tag with 3rd party APIs for text-to-speech, and an overview of WebRTC application possibilities.
Presented at the Atlanta Ruby Users Group meeting on November 13, 2013.
An overview of the current state of WebRTC - what it is and how it works. Also included are several example applications showing why WebRTC matters and how it may be deployed in the future.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
WebRTC Overview by Dan Burnett
1. Introduction to WebRTC
Dan Burnett, Chief Scientist, Tropo; Director of Standards, Voxeo
Alan Johnston, Distinguished Engineer, Avaya
2. WebRTC Tutorial Topics
• What is WebRTC?
• How to Use WebRTC
• WebRTC Peer-to-Peer Media
• WebRTC Protocols and IETF Standards
• WebRTC W3C API Overview
• Pseudo Code Walkthrough
• Practical bits
AdhearsionConf 2013
4. WebRTC is "Voice & Video in the browser"
• Access to camera and microphone without a plugin
– No proprietary plugin required!
• Audio/video direct from browser to browser
• Why does it matter?
– Media can stay local
– Mobile devices eventually dropping voice channel anyway
– Games
5. The Browser RTC Function
• WebRTC adds a new Real-Time Communication (RTC) Function built in to browsers
– No download
– No Flash or other plugins
• Contains
– Audio and video codecs
– Ability to negotiate peer-to-peer connections
– Echo cancellation, packet loss concealment
• In Chrome & Firefox today, Internet Explorer sometime and Safari eventually
[Diagram: JavaScript/HTML/CSS in the Web Browser uses the RTC APIs (and other APIs); the Browser RTC Function sits on Native OS Services, exchanges on-the-wire protocols (media or data) peer-to-peer, and talks to Web and Signaling Servers over HTTP or WebSockets (signaling)]
6. Benefits of WebRTC
For the Developer
• Streamlined development – one platform
• Simple APIs – detailed knowledge of RTC protocols not needed
• NAT traversal only uses expensive relays when there is no other choice
• Advanced voice and video codecs without licensing
For the User
• No download or install – easy to use
• All communication encrypted – private
• Reliable session establishment – "just works"
• Excellent voice and video quality
• Many more choices for real-time communication
7. WebRTC Support of Multiple Media
• Multiple sources of audio and video are assumed and supported
• All media, voice and video, and feedback messages are multiplexed over the same transport address
[Diagram: Browser M on Mobile (microphone audio, application sharing video, front camera video, rear camera video) exchanging media with Browser L on Laptop (webcam video, stereo audio)]
8. WebRTC Triangle
• Both browsers run the same web application from the web server
• A Peer Connection is established between them with the help of the web server
[Diagram: Web Server (Application) at the apex; Browser L and Browser M, each running the HTML5 application from the Web Server, joined by a Peer Connection carrying audio, video, and/or data]
9. WebRTC Trapezoid
• Similar to the SIP Trapezoid
• Web Servers communicate using SIP, Jingle, or a proprietary protocol
• Could become important in the future
[Diagram: Web Server A (Application A) and Web Server B (Application B) linked by SIP or Jingle; Browser M (running Application A) and Browser T (running Application B) joined by a Peer Connection carrying audio and/or video]
10. WebRTC and SIP
• SIP (Session Initiation Protocol) is a signaling protocol used by service providers and enterprises for real-time communication
• The Peer Connection appears as a standard RTP session, described by SDP
• The SIP Endpoint must support the RTCWEB media extensions
[Diagram: Browser M signals via the Web Server to a SIP Server and on to a SIP Client; a Peer Connection (audio and/or video) runs directly between Browser M and the SIP Client]
11. WebRTC and Jingle
• Jingle is a signaling extension to XMPP (Extensible Messaging and Presence Protocol, aka Jabber)
• Peer Connection SDP can be mapped to Jingle
• The Jingle Endpoint must support the RTCWEB media extensions
[Diagram: Browser M signals via the Web Server to an XMPP Server and on to a Jingle Client; a Peer Connection (audio and/or video) runs directly between Browser M and the Jingle Client]
12. WebRTC and PSTN
• The Peer Connection terminates on a PSTN Gateway
• Audio only
• Encryption ends at the Gateway
[Diagram: Browser M connects through the Web Server; a Peer Connection (audio) runs to a PSTN Gateway, which connects to an ordinary phone]
13. WebRTC with SIP
• The browser runs a SIP User Agent by running JavaScript from the Web Server
• The SRTP media connection uses WebRTC APIs
• Details in [draft-ietf-sipcore-sip-websocket], which defines SIP transport over WebSockets
[Diagram: Browser M and Browser T, each running a JavaScript SIP UA fetched over HTTP as HTML5/CSS/JavaScript, register with a SIP Proxy/Registrar Server over WebSocket (SIP); SRTP media flows directly between the browsers]
14. WebRTC Signaling Approaches
• Signaling is required for the exchange of candidate transport addresses, codec information, and media keying information
• Many options – the choice is up to the web developer
16. WebRTC usage in brief
• Obtain Local Media (get more media as needed); once all media is added…
• Set Up Peer Connection; once the Peer Connection is established…
• Attach Media or Data (attach more media or data as needed); once ready for the call…
• Exchange Offer/Answer
17. WebRTC usage in brief: Obtain Local Media
• getUserMedia()
– Audio and/or video
– Constraints
– User permissions
• The browser must ask before allowing a page to access the microphone or camera
• MediaStream
• MediaStreamTrack
– Capabilities
– States (settings)
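The constraints argument taken by getUserMedia() is a plain JavaScript object. As a minimal sketch of building one (the buildConstraints helper is invented for illustration and is not part of any API; the "mandatory"/"videoFacingModeEnum" keys follow the draft-era syntax used in this deck's pseudo code):

```javascript
// Illustrative helper that assembles a constraints object of the kind
// passed to getUserMedia(). Not a browser API; names are made up here.
function buildConstraints({ audio = false, video = false, facing = null } = {}) {
  const constraints = { audio, video };
  if (video && facing) {
    // "mandatory" constraints must be satisfied or the call fails
    constraints.video = { mandatory: { videoFacingModeEnum: facing } };
  }
  return constraints;
}

// Browser-only usage (sketch):
//   navigator.getUserMedia(buildConstraints({ audio: true }), onStream, onError);
```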
18. WebRTC usage in brief: Set Up Peer Connection
• RTCPeerConnection
– Direct media between two peers
– ICE processing
– SDP processing
– DTMF support
– Data channels
– Identity verification
– Statistics reporting
19. WebRTC usage in brief: Attach Media or Data
• addStream()
– Doesn't change media state!
• removeStream()
– Ditto!
• createDataChannel()
– Depends on transport
20. WebRTC usage in brief: Exchange Session Descriptions
• createOffer(), createAnswer()
• setLocalDescription(), setRemoteDescription()
• Applying the SDP answer makes the magic happen
21. WebRTC usage – a bit more detail
• Set Up Signaling Channel
• Obtain Local Media (get more media as needed)
• Set Up Peer Connection
• Attach Media or Data (attach more media or data as needed)
• Exchange Session Descriptions
22. SDP offer/answer
• Session Descriptions
– The Session Description Protocol was created for use by SIP in setting up voice (and video) calls
– Describes real-time media at a low level of detail
• Which IP addresses and ports to use
• Which codecs to use
• Offer/answer model (JSEP)
– One side sends an SDP offer listing what it wants to send and what it can receive
– The other side replies with an SDP answer listing what it will receive and send
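The port and codec facts an offer carries sit in its "m=" lines. A toy sketch of reading them (this is not a conforming SDP parser, and the sample payload types below are illustrative):

```javascript
// Extract media kind, port, and RTP payload types from each "m=" line of an
// SDP blob, e.g. "m=audio 49170 RTP/AVP 111 0". Illustration only.
function parseMediaLines(sdp) {
  return sdp
    .split(/\r?\n/)
    .filter((line) => line.startsWith("m="))
    .map((line) => {
      const parts = line.slice(2).split(" ");
      // parts: [kind, port, transport, ...payload types]
      return { kind: parts[0], port: Number(parts[1]), payloadTypes: parts.slice(3) };
    });
}
```

An answer is read the same way; comparing the two m= sections shows what the sides agreed to send and receive.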
24. Media Flows in WebRTC
[Diagram: Web Server on the Internet; Browsers M and D behind a Home WiFi Router, Browser T behind a Router, Browser L behind a Coffee Shop WiFi Router]
25. Media without WebRTC
[Same network diagram as the previous slide, with media routed through servers on the Internet rather than directly between browsers]
26. Peer-to-Peer Media with WebRTC
[Same network diagram, with media flowing directly between the browsers]
27. NAT Complicates Peer-to-Peer Media
Most browsers are behind NATs on the Internet, which complicates the establishment of peer-to-peer media sessions.
[Diagram: Browsers M and D behind Home WiFi with NAT, Browser T behind a Router with NAT, Browser L behind Coffee Shop WiFi with NAT]
28. What is a NAT?
• Network Address Translator (NAT)
• Used to map an inside address (usually a private IP address) to an outside address (usually a public IP address) at Layer 3
• Network Address and Port Translation (NAPT) also changes the transport port number (Layer 4)
– These are often just called NATs as well
• One reason for NAT is the IP address shortage
29. NAT Example
• "Outside" public IP address: 203.0.113.4
• "Inside" private IP addresses: 192.168.x.x
[Diagram: Home WiFi with NAT; Browser M at 192.168.0.5 and Browser T at 192.168.0.6 share the public address 203.0.113.4 on the Internet]
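The NAPT behavior behind this example can be sketched as a mapping table: each inside (private address, port) pair is assigned its own port on the NAT's single public address. This is a toy model for illustration only (the Napt class and its port-allocation scheme are invented, not how any particular router works):

```javascript
// Toy NAPT: stable per-flow mapping from inside (ip, port) to a fresh
// outside port on one public address. Illustration only.
class Napt {
  constructor(publicIp) {
    this.publicIp = publicIp;
    this.nextPort = 40000; // arbitrary starting point for allocated ports
    this.mappings = new Map(); // "insideIp:insidePort" -> outside port
  }
  translate(insideIp, insidePort) {
    const key = `${insideIp}:${insidePort}`;
    if (!this.mappings.has(key)) {
      this.mappings.set(key, this.nextPort++);
    }
    return { ip: this.publicIp, port: this.mappings.get(key) };
  }
}
```

With the slide's addresses, both 192.168.0.5 and 192.168.0.6 appear to the Internet as 203.0.113.4, distinguished only by the NAT's allocated ports.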
30. NATs and Applications
• NATs are compatible with client/server protocols such as web, email, etc.
• However, NATs generally block peer-to-peer communication
• Typical NAT traversal for VoIP and video services today uses a media relay whenever the client is behind a NAT
– Often done with an SBC – Session Border Controller
– This is a major expense and complication in existing VoIP and video systems
• WebRTC has a built-in NAT traversal strategy: Interactive Connectivity Establishment (ICE)
31. Peer-to-Peer Media Through NAT
ICE connectivity checks can often establish a direct peer-to-peer session between browsers behind different NATs.
[Same NAT network diagram]
32. ICE Connectivity Checks
• Connectivity through NAT can be achieved using ICE connectivity checks
• Browsers exchange a list of candidates
– Local: read from network interfaces
– Reflexive: obtained using a STUN Server
– Relayed: obtained from a TURN Server (media relay)
• Browsers attempt to send STUN packets to the candidate list received from the other browser
• Checks are performed by both sides at the same time
• If one STUN packet gets through, a response is sent and this connection is used for communication
– A TURN relay will be the last resort (lowest priority)
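The "lowest priority" ordering comes from ICE's candidate priority formula (RFC 5245 section 4.1.2.1), which weights candidate type most heavily. A sketch using the RFC's recommended type preferences (the helper names iceCandidatePriority and sortByPriority are illustrative, not browser APIs):

```javascript
// RFC 5245 recommended type preferences: host candidates outrank
// server-reflexive, which outrank relayed (TURN) candidates.
const TYPE_PREFERENCE = { host: 126, prflx: 110, srflx: 100, relay: 0 };

// priority = 2^24 * type-pref + 2^8 * local-pref + (256 - component-id)
function iceCandidatePriority(type, localPreference, componentId) {
  return (
    (1 << 24) * TYPE_PREFERENCE[type] +
    (1 << 8) * localPreference +
    (256 - componentId)
  );
}

// Candidate pairs are checked roughly in descending priority order, which is
// why a TURN relay is only used when nothing else connects.
function sortByPriority(candidates) {
  return [...candidates].sort(
    (a, b) =>
      iceCandidatePriority(b.type, b.localPref, b.componentId) -
      iceCandidatePriority(a.type, a.localPref, a.componentId)
  );
}
```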
33. P2P Media Can Stay Local to NAT
If both browsers are behind the same NAT, connectivity checks can often establish a connection that never leaves the NAT.
[Same NAT network diagram]
34. ICE Servers
ICE uses STUN and TURN servers in the public Internet to help with NAT traversal.
[Diagram: STUN Server at 198.51.100.9 and TURN Server at 198.51.100.2 on the Internet; Browser M at 192.168.0.5 behind Home WiFi with NAT at 203.0.113.4; Browsers D, T, and L behind their own NATs]
35. Browser Queries STUN Server
The browser sends a STUN test packet to the STUN server to learn its public IP address (the address of the NAT).
[Same ICE servers diagram]
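On the wire, that test packet is a STUN Binding Request: a fixed 20-byte header (RFC 5389) with a type, a length, a magic cookie, and a 12-byte transaction ID. A sketch of building one with Node's Buffer, purely for illustration (the browser never exposes this level; ICE does it internally):

```javascript
// Build the 20-byte STUN Binding Request header from RFC 5389.
function stunBindingRequest(transactionId) {
  if (transactionId.length !== 12) throw new Error("transaction id must be 12 bytes");
  const msg = Buffer.alloc(20);
  msg.writeUInt16BE(0x0001, 0);     // message type: Binding Request
  msg.writeUInt16BE(0x0000, 2);     // message length: no attributes
  msg.writeUInt32BE(0x2112a442, 4); // fixed magic cookie
  transactionId.copy(msg, 8);       // 12-byte transaction ID
  return msg;
}
```

The server's Binding Response carries back the source address it saw, which is the NAT's public address: the reflexive candidate.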
36. TURN Server Can Relay Media
In some cases, connectivity checks fail, and a TURN server acting as a media relay on the public Internet must be used.
[Same ICE servers diagram, with media relayed through the TURN Server]
38. WebRTC: A Joint Standards Effort
• The Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) are working together on WebRTC
• IETF
– Protocols – "bits on the wire"
– The main protocols are already RFCs, but many extensions are in progress
– The RTCWEB (Real-Time Communications on the Web) Working Group is the main focus, but other WGs are involved as well
– http://www.ietf.org
• W3C
– APIs – used by JavaScript code in HTML5
– http://www.w3.org
39. WebRTC Protocols
• Application layer: HTTP, WebSocket, SDP, SRTP, and ICE with STUN and TURN
• Transport layer: TLS over TCP; DTLS and SCTP over UDP
• Network layer: IP
SIP is not shown as it is optional
41. Codecs
• Mandatory to Implement (MTI) audio codecs are settled on Opus (RFC 6716) and G.711 (finally!)
• Video is not yet decided!
43. Two primary API sections
• Handling local media
– Media Capture and Streams (getUserMedia) specification
• Transmitting media
– WebRTC (Peer Connection) specification
44. Local Media Handling
• In this example
– Captured 4 local media streams
– Created 3 media streams from them
– Sent the streams over the Peer Connection
[Diagram: sources in Browser M (microphone audio, application sharing video, front camera video, rear camera video) captured into MediaStreams, then combined into created MediaStreams: a Presentation Stream ("Audio" + "Presentation" tracks), a Presenter Stream ("Audio" + "Presenter" tracks), and a Demonstration Stream ("Audio" + "Demonstration" tracks)]
45. Local Media Handling
• Sources
– Encoded together
– Can't be manipulated individually
[Same local media diagram]
46. Local Media Handling
• Tracks (MediaStreamTrack)
– Tied to a source
– Exist primarily as part of Streams; single media type
– Globally unique ids; optionally browser-labeled
[Same local media diagram]
47. Local Media Handling
• Captured MediaStream
– Returned from getUserMedia()
– Permission check required to obtain
[Same local media diagram]
48. Local Media Handling
• MediaStream
– All contained tracks are synchronized
– Can be created, transmitted, etc.
[Same local media diagram]
49. Local Media Handling
• Settings
– Current values of source properties (height, width, etc.)
– Exposed on MediaStreamTrack
• Capabilities
– Allowed values for source properties
– Exposed on MediaStreamTrack
• Constraints
– Requested ranges for track properties
– Used in getUserMedia(), applyConstraints()
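The relationship between the three notions can be illustrated by clamping a requested constraint range against a source's capability range to get the setting that would actually apply. The applyRange helper below is invented for illustration; it is not part of the Media Capture API:

```javascript
// Intersect a requested range with a capability range; null means the
// mandatory constraint cannot be satisfied by this source. Illustration only.
function applyRange(capability, requested) {
  const min = Math.max(capability.min, requested.min ?? capability.min);
  const max = Math.min(capability.max, requested.max ?? capability.max);
  if (min > max) return null; // unsatisfiable constraint
  return { min, max };
}
```

For example, requesting height between 480 and 4320 from a camera capable of 120 to 1080 yields an applicable range of 480 to 1080.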
50. Transmitting media
• Signaling channel
– Non-standard
– Must exist to set up the Peer Connection
• Peer Connection
– Links together two peers
– Add/remove media streams: addStream(), removeStream()
– Handlers for ICE or media change
– Data Channel support
51. Peer Connection
• "Links" together two peers
– Via new RTCPeerConnection()
– Generates Session Description offers/answers: createOffer(), createAnswer()
– From SDP answers, initiates media: setLocalDescription(), setRemoteDescription()
– Offers/answers MUST be relayed by application code!
– ICE candidates can also be relayed and added by the app: addIceCandidate()
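The relay requirement can be sketched with stubs: offers and answers do not travel by themselves, so the application ferries them over whatever signaling channel it chose. FakePeer below stands in for RTCPeerConnection; only the relay pattern itself is the point:

```javascript
// Stub peer: just enough shape to show the offer/answer relay.
class FakePeer {
  constructor(name) { this.name = name; this.remote = null; }
  createOffer()  { return { type: "offer",  from: this.name }; }
  createAnswer() { return { type: "answer", from: this.name }; }
  setRemoteDescription(desc) { this.remote = desc; }
}

// The "channel" is anything the app picked (WebSocket, HTTP polling, ...).
// Application code receives each description and hands it to the other peer.
function connect(caller, callee, channel) {
  channel.send = (desc) => {
    if (desc.type === "offer") {
      callee.setRemoteDescription(desc);
      channel.send(callee.createAnswer()); // app relays the answer back
    } else {
      caller.setRemoteDescription(desc);
    }
  };
  channel.send(caller.createOffer());
}
```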
52. Peer Connection
• Handlers for signaling, ICE, or media change
– onsignalingstatechange
– onicecandidate, oniceconnectionstatechange
– onaddstream, onremovestream
– onnegotiationneeded
– A few others
53. Peer Connection
• "Extra" APIs
– Data
– DTMF
– Statistics
– Identity
• Grouped separately in the WebRTC spec
– but really part of the RTCPeerConnection definition
– all are mandatory to implement
54. Data Channel API
• RTCDataChannel createDataChannel()
• Configurable with
– ordered
– maxRetransmits, maxRetransmitTime
– negotiated
– id
• Provides RTCDataChannel with
– send()
– onopen, onerror, onclose, onmessage
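A sketch of how those options interact, assuming the draft-era rules that a channel has a single retransmission policy (so at most one of maxRetransmits / maxRetransmitTime) and that a pre-negotiated channel needs an explicit id. The validateDataChannelInit helper is invented for illustration; it is not a browser function:

```javascript
// Validate and default a createDataChannel() options object. Illustration only.
function validateDataChannelInit(init = {}) {
  const { ordered = true, maxRetransmits, maxRetransmitTime, negotiated = false, id } = init;
  if (maxRetransmits !== undefined && maxRetransmitTime !== undefined) {
    throw new Error("set at most one retransmission limit");
  }
  if (negotiated && id === undefined) {
    throw new Error("a negotiated channel needs an explicit id");
  }
  return { ordered, maxRetransmits, maxRetransmitTime, negotiated, id };
}

// Sketch: an unreliable, unordered channel for game-state updates would use
//   { ordered: false, maxRetransmits: 0 }
```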
55. DTMF API
• RTCDTMFSender createDTMFSender()
– Associates the track input parameter with this RTCPeerConnection
• RTCDTMFSender provides
– boolean canInsertDTMF()
– insertDTMF()
– ontonechange
– (other stuff)
56. Statistics API
• getStats()
– Callback returns statistics for a given track
• Statistics available (local/remote) are:
– Bytes/packets transmitted
– Bytes/packets received
• May be useful for congestion-based adjustments
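The congestion-oriented use suggested above amounts to diffing two stats snapshots taken some interval apart. A sketch with plain objects standing in for getStats() results (statsDelta and the bytesSent/packetsSent field names here are illustrative):

```javascript
// Turn two cumulative-counter snapshots into per-second rates.
function statsDelta(prev, curr, intervalSeconds) {
  return {
    bytesPerSecond: (curr.bytesSent - prev.bytesSent) / intervalSeconds,
    packetsPerSecond: (curr.packetsSent - prev.packetsSent) / intervalSeconds,
  };
}
```

An application could poll like this periodically and, for example, reduce video resolution when the observed send rate stalls.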
57. Identity API
• setIdentityProvider(), getIdentityAssertion()
• Used to verify identity via a third party, e.g., Facebook Connect
• Both methods are optional
• The onidentity handler is called after any verification attempt
• RTCPeerConnection.peerIdentity holds any verified identity assertion
59. Pseudo Code
• Close to real code, but . . .
• No HTML, no signaling channel, not asynchronous, and the API is still in flux
• Don't expect this to work anywhere
60. Back to the first diagram
• Mobile browser "calls" laptop browser
• Each sends media to the other
[Diagram: Browser M on Mobile (microphone audio, application sharing video, front camera video, rear camera video) and Browser L on Laptop (webcam video, stereo audio)]
61. Mobile browser code outline
• We will look next at each of these . . . except for creating the signaling channel

var signalingChannel = createSignalingChannel();
var pc;
var configuration =
  {"iceServers":[{"url":"stun:198.51.100.9"},
                 {"url":"turn:198.51.100.2",
                  "credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;

function s(sdp) {}   // stub success callback
function e(error) {} // stub error callback

getMedia();
createPC();
attachMedia();
call();

function getMedia() {
  // get local audio (microphone)
  navigator.getUserMedia({"audio": true }, function (stream) {
    microphone = stream;
  }, e);
  // get local video (application sharing)
  ///// This is outside the scope of this specification.
  ///// Assume that 'application' has been set to this stream.
  constraint =
    {"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
  navigator.getUserMedia(constraint, function (stream) {
    front = stream;
  }, e);
  constraint =
    {"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
  navigator.getUserMedia(constraint, function (stream) {
    rear = stream;
  }, e);
}

function createPC() {
  pc = new RTCPeerConnection(configuration);
  pc.onicecandidate = function (evt) {
    signalingChannel.send(
      JSON.stringify({ "candidate": evt.candidate }));
  };
  pc.onaddstream =
    function (evt) {handleIncomingStream(evt.stream);};
}

function attachMedia() {
  presentation =
    new MediaStream(
      [microphone.getAudioTracks()[0],      // Audio
       application.getVideoTracks()[0]]);   // Presentation
  presenter =
    new MediaStream(
      [microphone.getAudioTracks()[0],      // Audio
       front.getVideoTracks()[0]]);         // Presenter
  demonstration =
    new MediaStream(
      [microphone.getAudioTracks()[0],      // Audio
       rear.getVideoTracks()[0]]);          // Demonstration
  pc.addStream(presentation);
  pc.addStream(presenter);
  pc.addStream(demonstration);
  signalingChannel.send(
    JSON.stringify({ "presentation": presentation.id,
                     "presenter": presenter.id,
                     "demonstration": demonstration.id }));
}

function call() {
  pc.createOffer(gotDescription, e);
  function gotDescription(desc) {
    pc.setLocalDescription(desc, s, e);
    signalingChannel.send(JSON.stringify({ "sdp": desc }));
  }
}

function handleIncomingStream(st) {
  if (st.getVideoTracks().length == 1) {
    av_stream = st;
    show_av(av_stream);
  } else if (st.getAudioTracks().length == 2) {
    stereo = st;
  } else {
    mono = st;
  }
}

function show_av(st) {
  display.src = URL.createObjectURL(
    new MediaStream(st.getVideoTracks()[0]));
  left.src = URL.createObjectURL(
    new MediaStream(st.getAudioTracks()[0]));
  right.src = URL.createObjectURL(
    new MediaStream(st.getAudioTracks()[1]));
}

signalingChannel.onmessage = function (msg) {
  var signal = JSON.parse(msg.data);
  if (signal.sdp) {
    pc.setRemoteDescription(
      new RTCSessionDescription(signal.sdp), s, e);
  } else {
    pc.addIceCandidate(
      new RTCIceCandidate(signal.candidate));
  }
};
62. Mobile browser produces . . .
• At least 3 calls to getUserMedia()
• Three calls to new MediaStream()
• App sends stream ids, then streams
[Same local media diagram: sources captured into MediaStreams, then combined into the Presentation, Presenter, and Demonstration streams]
63. function getMedia() [1]
• Get audio
• (Get window video – out of scope)

navigator.getUserMedia({"audio": true }, function (stream) {
  microphone = stream;
}, e);

// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
64. function getMedia() [2]
• Get front-facing camera
• Get rear-facing camera

constraint =
  {"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
  front = stream;
}, e);

constraint =
  {"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
  rear = stream;
}, e);
65. Mobile browser code outline
var signalingChannel =
createSignalingChannel();
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
• We will look next at each of these
• . . . except for creating the signaling channel
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
65
66. function createPC()
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
pc = new RTCPeerConnection(configuration);
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
• Create RTCPeerConnection
• Set handlers
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
66
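The onicecandidate handler above simply serializes each candidate onto the signaling channel. A minimal sketch of that message envelope, using an in-memory stub for the channel (the `sent` array and `onIceCandidate` helper are illustrative, not part of the deck's code):

```javascript
// Stub signaling channel: collects sent messages in an array
// instead of transmitting them to a peer.
const sent = [];
const signalingChannel = { send: (m) => sent.push(m) };

// Same shape as the deck's pc.onicecandidate handler: wrap the
// candidate in a JSON envelope and send it.
function onIceCandidate(evt) {
  signalingChannel.send(JSON.stringify({ "candidate": evt.candidate }));
}

// Simulate the browser firing a candidate event:
onIceCandidate({ candidate: { candidate: "candidate:0 1 UDP 2122252543 192.0.2.1 54321 typ host" } });

console.log(JSON.parse(sent[0]).candidate.candidate);
```

The receiver can JSON.parse the message and distinguish candidates from SDP by which key is present, which is exactly what the deck's onmessage handler does.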
67. Mobile browser consumes . . .
[Diagram: incoming MediaStreams and Tracks routed to Sinks — from the selected Audio & Video Stream, the "Video" track goes to the Display and the "Left" and "Right" tracks go to the left and right headphones; the Stereo Stream carries "Left" and "Right" tracks, the Mono Stream a "Mono" track]
• Receives three media streams
• Chooses one
• Sends tracks to output channels
AdhearsionConf
2013
67
68. Function handleIncomingStream()
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
• If incoming stream has video track, set to av_stream and display it
• If it has two audio tracks, must be stereo
• Otherwise, must be the mono stream
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
68
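The branching in handleIncomingStream() can be exercised outside a browser with plain stub objects. This sketch (the names `classifyStream` and `stubStream` are illustrative, not from the deck) captures just the track-counting logic:

```javascript
// Stand-in for the deck's handleIncomingStream() branching,
// using plain objects instead of real MediaStream instances.
function classifyStream(st) {
  if (st.getVideoTracks().length === 1) {
    return "av";        // one video track: the audio/video stream
  } else if (st.getAudioTracks().length === 2) {
    return "stereo";    // two audio tracks: must be the stereo stream
  } else {
    return "mono";      // otherwise: the mono stream
  }
}

// Stub factory: fakes only the two accessors the logic needs.
function stubStream(audioTracks, videoTracks) {
  return {
    getAudioTracks: () => audioTracks,
    getVideoTracks: () => videoTracks,
  };
}

console.log(classifyStream(stubStream(["a"], ["v"])));   // "av"
console.log(classifyStream(stubStream(["l", "r"], []))); // "stereo"
console.log(classifyStream(stubStream(["m"], [])));      // "mono"
```

Note the video check runs first, so the AV stream (one video track plus its audio) is never misclassified as stereo.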
69. Function show_av(st)
display.srcObject =
new MediaStream([st.getVideoTracks()[0]]);
left.srcObject =
new MediaStream([st.getAudioTracks()[0]]);
right.srcObject =
new MediaStream([st.getAudioTracks()[1]]);
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
• Using new srcObject property on media,
• Set new stream as source
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
69
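The routing in show_av() — one received track per output element — can be sketched with plain objects standing in for the media elements and stream (the `routeAV` helper and stub shapes are illustrative, not the deck's code, which assigns real MediaStream objects to srcObject):

```javascript
// Stand-in for show_av(): route the video track to the display
// and the two audio tracks to the left and right outputs.
// Tracks are wrapped in arrays the way they would be wrapped in
// single-track MediaStreams in the real code.
function routeAV(st, display, left, right) {
  display.srcObject = [st.getVideoTracks()[0]]; // video to the screen
  left.srcObject = [st.getAudioTracks()[0]];    // first audio track left
  right.srcObject = [st.getAudioTracks()[1]];   // second audio track right
}

// Stub AV stream with one video track and stereo audio tracks:
const avStream = {
  getVideoTracks: () => ["cam"],
  getAudioTracks: () => ["leftAudio", "rightAudio"],
};

const display = {}, left = {}, right = {};
routeAV(avStream, display, left, right);

console.log(display.srcObject[0]); // "cam"
console.log(right.srcObject[0]);   // "rightAudio"
```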
70. Mobile browser code outline
var signalingChannel =
createSignalingChannel();
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
• We will look next at each of these
• . . . except for creating the signaling channel
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
70
71. function attachMedia() [1]
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],     // Audio
application.getVideoTracks()[0]]);   // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],     // Audio
front.getVideoTracks()[0]]);         // Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],     // Audio
rear.getVideoTracks()[0]]);          // Demonstration
. . .
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
• Create 3 new streams, all with same audio but different video
AdhearsionConf
2013
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
71
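The pattern above — several MediaStreams sharing the same audio track but carrying different video — can be checked with a plain-object stand-in for MediaStream (the `makeStream` factory and track objects are illustrative, not the deck's code):

```javascript
// Minimal MediaStream stand-in: holds tracks and exposes the
// two accessors the deck's code relies on.
function makeStream(tracks) {
  return {
    getAudioTracks: () => tracks.filter((t) => t.kind === "audio"),
    getVideoTracks: () => tracks.filter((t) => t.kind === "video"),
  };
}

const micTrack = { kind: "audio", label: "microphone" };
const frontTrack = { kind: "video", label: "front camera" };
const rearTrack = { kind: "video", label: "rear camera" };

// Same audio track object shared by both streams, different video:
const presenter = makeStream([micTrack, frontTrack]);
const demonstration = makeStream([micTrack, rearTrack]);

console.log(presenter.getAudioTracks()[0] === demonstration.getAudioTracks()[0]); // true
console.log(demonstration.getVideoTracks()[0].label); // "rear camera"
```

Sharing one track object across streams mirrors what the real API does: a MediaStreamTrack can belong to several MediaStreams at once.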
72. function attachMedia() [2]
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
• Attach all 3 streams to Peer Connection
• Send stream ids to peer (before streams!)
}
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
72
73. Mobile browser code outline
var signalingChannel =
createSignalingChannel();
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
• We will look next at each of these
• . . . except for creating the signaling channel
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
73
74. function call()
pc.createOffer(gotDescription, e);
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
• Ask browser to create SDP offer
• Set offer as local description
• Send offer to peer
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
AdhearsionConf
2013
74
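The offer path wraps the SDP description in the same JSON envelope used for candidates. A sketch of that round trip with an in-memory queue standing in for the signaling channel (the `queue` stub and `sendOffer` helper are illustrative):

```javascript
// Stub signaling channel: an in-memory queue instead of a network.
const signalingChannel = {
  queue: [],
  send(msg) { this.queue.push(msg); },
};

// Same envelope as the deck's gotDescription callback: the SDP
// description goes out under the "sdp" key.
function sendOffer(desc) {
  signalingChannel.send(JSON.stringify({ "sdp": desc }));
}

sendOffer({ type: "offer", sdp: "v=0\r\n..." });

// The receiver parses the message and finds the description intact:
const received = JSON.parse(signalingChannel.queue[0]);
console.log(received.sdp.type); // "offer"
```

Because the envelope keys differ ("sdp" vs. "candidate"), one onmessage handler can route both kinds of message, as the next slide shows.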
75. How do we get the SDP answer?
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var microphone, application, front, rear;
var presentation, presenter, demonstration;
var remote_av, stereo, mono;
var display, left, right;
function s(sdp) {} // stub success callback
function e(error) {} // stub error callback
var signalingChannel = createSignalingChannel();
getMedia();
createPC();
attachMedia();
call();
function getMedia() {
// get local audio (microphone)
navigator.getUserMedia({"audio": true }, function (stream) {
microphone = stream;
}, e);
// get local video (application sharing)
///// This is outside the scope of this specification.
///// Assume that 'application' has been set to this stream.
//
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "front"}}};
navigator.getUserMedia(constraint, function (stream) {
front = stream;
}, e);
constraint =
{"video": {"mandatory": {"videoFacingModeEnum": "rear"}}};
navigator.getUserMedia(constraint, function (stream) {
rear = stream;
}, e);
}
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function attachMedia() {
presentation =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
application.getVideoTracks()[0]]); // Presentation
presenter =
new MediaStream(
[microphone.getAudioTracks()[0],
// Audio
front.getVideoTracks()[0]]);
// Presenter
demonstration =
new MediaStream(
[microphone.getAudioTracks()[0],
rear.getVideoTracks()[0]]);
// Audio
// Demonstration
pc.addStream(presentation);
pc.addStream(presenter);
pc.addStream(demonstration);
• Signaling channel provides message
• If SDP, set as remote description
• If ICE candidate, tell the browser
AdhearsionConf
2013
}
signalingChannel.send(
JSON.stringify({ "presentation": presentation.id,
"presenter": presenter.id,
"demonstration": demonstration.id
}));
function call() {
pc.createOffer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
if (st.getVideoTracks().length == 1) {
av_stream = st;
show_av(av_stream);
} else if (st.getAudioTracks().length == 2) {
stereo = st;
} else {
mono = st;
}
}
function show_av(st) {
display.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
left.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
right.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[1]));
}
signalingChannel.onmessage = function (msg) {
var signal = JSON.parse(msg.data);
if (signal.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(signal.sdp), s, e);
} else {
pc.addIceCandidate(
new RTCIceCandidate(signal.candidate));
}
};
75
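The receiver's dispatch — SDP if the "sdp" key is present, otherwise an ICE candidate — can be tested with a recording stub in place of the real RTCPeerConnection (the `pc` stub and `onSignal` helper are illustrative):

```javascript
// Stub peer connection: records what the dispatch tells it
// instead of doing real SDP/ICE processing.
const pc = {
  remoteDescription: null,
  candidates: [],
  setRemoteDescription(desc) { this.remoteDescription = desc; },
  addIceCandidate(cand) { this.candidates.push(cand); },
};

// Same branching as the deck's signalingChannel.onmessage handler.
function onSignal(data) {
  const signal = JSON.parse(data);
  if (signal.sdp) {
    pc.setRemoteDescription(signal.sdp);
  } else {
    pc.addIceCandidate(signal.candidate);
  }
}

onSignal(JSON.stringify({ "sdp": { type: "answer", sdp: "v=0\r\n..." } }));
onSignal(JSON.stringify({ "candidate": { candidate: "candidate:1 1 UDP ..." } }));

console.log(pc.remoteDescription.type); // "answer"
console.log(pc.candidates.length);      // 1
```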
76. And now the laptop browser . . .
• Watch for the following
– We set up media *after* receiving the offer
– but the signaling channel still must exist first!
– Also, need to save incoming stream ids
AdhearsionConf
2013
76
77. Signaling channel message is trigger
signalingChannel.onmessage = function (msg) {
if (!pc) {
prepareForIncomingCall();
}
var sgnl = JSON.parse(msg.data);
var pc;
var configuration =
{"iceServers":[{"url":"stun:198.51.100.9"},
{"url":"turn:198.51.100.2",
"credential":"myPassword"}]};
var webcam, left, right;
var av, stereo, mono;
var incoming;
var speaker, win1, win2, win3;
function s(sdp) {} // stub success callback
function e(error) {}
//
stub error callback
var signalingChannel = createSignalingChannel();
function prepareForIncomingCall() {
createPC();
getMedia();
}
attachMedia();
function createPC() {
pc = new RTCPeerConnection(configuration);
pc.onicecandidate = function (evt) {
signalingChannel.send(
JSON.stringify({ "candidate": evt.candidate }));
};
. . .
pc.onaddstream =
function (evt) {handleIncomingStream(evt.stream);};
}
function getMedia() {
navigator.getUserMedia({"video": true }, function (stream) {
webcam = stream;
}, e);
constraint =
{"audio": {"mandatory": {"audioDirectionEnum": "left"}}};
navigator.getUserMedia(constraint, function (stream) {
left = stream;
}, e);
constraint =
{"audio": {"mandatory": {"audioDirectionEnum": "right"}}};
navigator.getUserMedia(constraint, function (stream) {
right = stream;
}, e);
}
function attachMedia() {
av = new MediaStream(
[webcam.getVideoTracks()[0],
left.getAudioTracks()[0],
right.getAudioTracks()[0]]);
stereo = new MediaStream(
[left.getAudioTracks()[0],
right.getAudioTracks()[0]]);
mono = left;
};
// Video
// Left audio
// Right audio
// Left audio
// Right audio
// Treat the left audio as the mono stream
pc.addStream(av);
pc.addStream(stereo);
pc.addStream(mono);
}
function answer() {
pc.createAnswer(gotDescription, e);
function gotDescription(desc) {
pc.setLocalDescription(desc, s, e);
signalingChannel.send(JSON.stringify({ "sdp": desc }));
}
}
function handleIncomingStream(st) {
• Set
up
PC
and
media
if
not
already
done
if (st.id === incoming.presentation) {
speaker.src = URL.createObjectURL(
new MediaStream(st.getAudioTracks()[0]));
win1.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
} else if (st.id === incoming.presenter) {
win2.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
} else {
win3.src = URL.createObjectURL(
new MediaStream(st.getVideoTracks()[0]));
}
}
signalingChannel.onmessage = function (msg) {
if (!pc) {
prepareForIncomingCall();
}
var sgnl = JSON.parse(msg.data);
if (sgnl.sdp) {
pc.setRemoteDescription(
new RTCSessionDescription(sgnl.sdp), s, e);
answer();
} else if (sgnl.candidate) {
pc.addIceCandidate(new RTCIceCandidate(sgnl.candidate));
} else {
incoming = sgnl;
}
};
AdhearsionConf
2013
77
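The "set up PC and media if not already done" guard is a lazy-initialization pattern: the first signaling message triggers the setup, and every later message finds it already done. The sketch below factors that out; it is illustrative only, and `makeLazyInit`/`ensureReady` are hypothetical names, not part of the slides' code:

```javascript
// Hypothetical helper illustrating the lazy-initialization guard used by
// the onmessage handler: run the setup function at most once, on demand.
function makeLazyInit(setup) {
  var done = false;
  return function ensureReady() {
    if (!done) {    // mirrors the `if (!pc)` check in the handler
      done = true;
      setup();
    }
  };
}

// Usage sketch: every incoming signaling message calls ensureReady() first,
// so the PeerConnection and media exist before the message is processed.
var setupCalls = 0;
var ensureReady = makeLazyInit(function () { setupCalls += 1; });
ensureReady();
ensureReady();
// setupCalls is now 1: setup ran only on the first message
```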
78. Signaling channel message is trigger

signalingChannel.onmessage = function (msg) {
  . . .
  if (sgnl.sdp) {
    pc.setRemoteDescription(
      new RTCSessionDescription(sgnl.sdp), s, e);
    answer();
  } else if (sgnl.candidate) {
    pc.addIceCandidate(new RTCIceCandidate(sgnl.candidate));
  } else {
    incoming = sgnl;
  }
};
• If SDP, *also* answer
• But if neither SDP nor ICE candidate, must be set of incoming stream ids, so save
79. Function prepareForIncomingCall()

function prepareForIncomingCall() {
  createPC();
  getMedia();
}
• No surprises here
• Media obtained is a little different
• But attached the same way
80. Function answer()

function answer() {
  pc.createAnswer(gotDescription, e);
  function gotDescription(desc) {
    pc.setLocalDescription(desc, s, e);
    signalingChannel.send(JSON.stringify({ "sdp": desc }));
  }
}
• createAnswer() automatically uses the value of remoteDescription when generating the new SDP
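That bullet is really an ordering constraint: the remote description must be stored before createAnswer() runs, and the local description is set from its result. The sketch below makes the ordering explicit against a stand-in object; `FakePC` and `answerFlow` are illustrative test doubles, not the real WebRTC API:

```javascript
// Illustrative ordering sketch: setRemoteDescription before createAnswer,
// createAnswer before setLocalDescription. FakePC records the call order.
function FakePC() { this.calls = []; this.remote = null; }
FakePC.prototype.setRemoteDescription = function (sdp) {
  this.calls.push("setRemote");
  this.remote = sdp;
};
FakePC.prototype.createAnswer = function () {
  this.calls.push("createAnswer");
  if (this.remote === null) { throw new Error("no remote description yet"); }
  return "answer-sdp";
};
FakePC.prototype.setLocalDescription = function (sdp) {
  this.calls.push("setLocal");
};

function answerFlow(pc, remoteSdp, send) {
  pc.setRemoteDescription(remoteSdp);      // store the incoming offer
  var desc = pc.createAnswer();            // uses the stored remote SDP
  pc.setLocalDescription(desc);
  send(JSON.stringify({ "sdp": desc }));   // answer goes back over signaling
}
```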
81. Laptop browser consumes . . .

[Diagram: tracks → MediaStreams → sinks. Three incoming MediaStreams arrive at browser L: the Presentation Stream ("Audio" track → speaker, "Presentation" track → display), the Presenter Stream ("Audio" track, "Presenter" track → display), and the Demonstration Stream ("Audio" track, "Demonstration" track → display), with all video streams selected.]

• Three input streams
• All have same # of audio and video tracks
• Need stream ids to distinguish
82. Function handleIncomingStream()

if (st.id === incoming.presentation) {
  speaker.srcObject =
    new MediaStream([st.getAudioTracks()[0]]);
  win1.srcObject =
    new MediaStream([st.getVideoTracks()[0]]);
} else if (st.id === incoming.presenter) {
  win2.srcObject =
    new MediaStream([st.getVideoTracks()[0]]);
} else {
  win3.srcObject =
    new MediaStream([st.getVideoTracks()[0]]);
}
• Use ids to distinguish streams
• Extract one audio and all video tracks
• Assign to element sources
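The routing decision above, which window gets which stream, depends only on the stream id and the ids saved from the signaling channel, so it can be expressed as a pure lookup. A sketch for illustration; `routeStream` is a hypothetical name, not part of the slides' code:

```javascript
// Hypothetical helper mirroring handleIncomingStream()'s routing: map a
// stream id to the display element that should show it, using saved ids.
function routeStream(streamId, incoming) {
  if (streamId === incoming.presentation) {
    return "win1";       // presentation video (its audio goes to the speaker)
  } else if (streamId === incoming.presenter) {
    return "win2";       // presenter video
  }
  return "win3";         // anything else is the demonstration video
}
```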
83. Laptop browser produces . . .

[Diagram: sources → captured MediaStreams → tracks → created MediaStreams, in browser L. The WebCam yields the "Video" track; the Left and Right Microphones yield the "Left" and "Right" audio tracks. These combine into the Audio & Video Stream (video, left, right), the Stereo Stream ("Left" and "Right" tracks), and the Mono Stream ("Mono" track).]

• Three calls to getUserMedia()
• Three calls to new MediaStream()
• No stream ids needed
84. Function getMedia() [1]

navigator.getUserMedia({"video": true}, function (stream) {
  webcam = stream;
}, e);
• Request webcam video
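The three getUserMedia() calls in getMedia() complete independently, so nothing in the code above guarantees that webcam, left, and right all exist by the time attachMedia() runs. One way to sequence this is a small countdown barrier; the sketch below is illustrative only, and `makeBarrier` is a hypothetical helper, not part of the slides' code:

```javascript
// Hypothetical countdown barrier: invoke `done` only after `n` independent
// success callbacks have fired -- e.g. the three getUserMedia() requests.
function makeBarrier(n, done) {
  var remaining = n;
  return function arrived() {
    remaining -= 1;
    if (remaining === 0) {
      done();             // all media is in hand; safe to call attachMedia()
    }
  };
}

// Usage sketch: each getUserMedia success callback would store its stream
// and then call arrived(); attachMedia() runs once, after the third one.
var attached = 0;
var arrived = makeBarrier(3, function () { attached += 1; });
arrived();
arrived();
arrived();
// attached is now 1
```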