The slides for my "Welcome" presentation at JanusCon '19, with an overview of the history of Janus (how it changed in its first five years) and of some possible future directions for the project. Unfortunately, my talk was not recorded, so some slides may look a bit "cryptic" without some vocal context.
Daniel Stenberg explains HTTP/3 and QUIC at GOTO 10, January 22, 2019. This is the slideset, see https://daniel.haxx.se/blog/2019/01/23/http-3-talk-on-video/ for the video.
HTTP/3 is the designated name for the next version of the protocol, currently under development within the QUIC working group in the IETF.
HTTP/3 is designed to improve in areas where HTTP/2 still has shortcomings, primarily by changing the transport layer: it is the first major protocol to step away from TCP, using QUIC instead.
The talk covers why the new protocols are deemed necessary, how they work, how they change what is sent over the network, and what some of the coming deployment challenges will be.
As you will see in this film, there are a lot of questions from an interested and educated audience.
Daniel Stenberg is the founder and lead developer of the curl project. He has worked on HTTP implementations for over twenty years. He has been involved in the HTTPbis working group in the IETF for ten years, and he worked on HTTP in Firefox for years before he left Mozilla. He participates in the QUIC working group and is the author of the widely read documents "HTTP2 explained" and "HTTP/3 explained".
This presentation features a walk through the Linux kernel networking stack covering the essentials and recent developments a developer needs to know. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses through various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as segmentation offloading, TCP small queues, and low latency polling. We will cover APIs exposed by the kernel that go beyond use of write()/read() on sockets and will look into how they are implemented on the kernel side.
Deploy ultra low latency at a massive scale with sub-three-second end-to-end latency for audiences as big as you can assemble. Shorten the first and last mile with distribution of datacenters, POPs and nodes across the globe.
Leverage innovative technologies to dramatically reduce time-to-first-frame and provide consistent, low-latency user experience across devices and apps.
Provide intelligent load-balancing and scaling to immediately provide the streaming resources needed to deliver reliable, consistent, ultra low latency viewing experiences to audiences of any size, everywhere.
Enable unprecedented visibility, insight and control throughout the entire streaming workflow, from ingest to playback—allowing you to anticipate, tune and optimize your workflow.
Slides for the presentation I did remotely at Open Source World, to talk about audio-only WebRTC applications, and what we've done in Janus so far to cover their requirements.
SOSCON 2019.10.17
What are the methods for packet processing on Linux, and how fast is each of them? In this presentation, we will learn how to handle packets on Linux (user space, socket filter, netfilter, tc), and compare their performance by analyzing where in the network stack each kind of processing is done (its hook point). We will also discuss packet processing using XDP, an in-kernel fast path recently added to the Linux kernel. eXpress Data Path (XDP) is a high-performance, programmable network data path within the Linux kernel. XDP sits at the lowest software-accessible point in the network stack, where the driver receives the packet. By using the eBPF infrastructure at this hook point, the network stack can be extended without modifying the kernel.
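Whatever hook point a packet is intercepted at, processing starts with parsing the headers the driver handed over. The idea can be illustrated in userspace with a minimal Python sketch (illustrative only: real XDP programs are eBPF bytecode written in restricted C, and `parse_eth_ipv4` plus the hand-built test frame are invented for this example):

```python
import struct

def parse_eth_ipv4(frame: bytes):
    """Parse an Ethernet II frame carrying IPv4; return (src_ip, dst_ip, proto)."""
    # Ethernet header: dst MAC (6 bytes), src MAC (6 bytes), EtherType (2 bytes)
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:           # not IPv4 (e.g. 0x86DD would be IPv6)
        return None
    ip = frame[14:]
    ver_ihl, = struct.unpack("!B", ip[0:1])
    ihl = (ver_ihl & 0x0F) * 4        # IP header length; payload starts at ip[ihl:]
    proto = ip[9]                     # 6 = TCP, 17 = UDP, ...
    src = ".".join(str(b) for b in ip[12:16])
    dst = ".".join(str(b) for b in ip[16:20])
    return src, dst, proto

# Hand-built minimal frame: zeroed MACs, EtherType 0x0800, then an IPv4 header
ipv4 = (bytes([0x45, 0x00]) + b"\x00\x14"     # version/IHL, TOS, total length
        + b"\x00" * 5                          # id, flags/fragment, TTL
        + bytes([6]) + b"\x00\x00"             # protocol = TCP, checksum
        + bytes([10, 0, 0, 1])                 # source 10.0.0.1
        + bytes([10, 0, 0, 2]))                # destination 10.0.0.2
frame = b"\x00" * 12 + b"\x08\x00" + ipv4
print(parse_eth_ipv4(frame))  # ('10.0.0.1', '10.0.0.2', 6)
```

An XDP program does the same field arithmetic on the raw buffer, just before the stack ever sees the packet, which is where its speed comes from.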
Daniel T. Lee (Hoyeon Lee)
@danieltimlee
Daniel T. Lee currently works as a Software Engineer at Kosslab and contributes to the Linux kernel BPF project. He is interested in cloud, Linux networking, and tracing technologies, and likes to analyze the kernel's internals using BPF.
Cloud computing and OpenStack basic introduction. This presentation was given on November 13, 2014 at Universitat Politecnica de Catalunya. Barcelona, Spain.
Dissecting our Legacy: The Strangler Fig Pattern with Debezium, Apache Kafka ... (Hosted by Confluent)
There is no denying that much development effort has to be spent on existing applications - legacy, that is - which typically exhibit a monolithic design based on traditional tech stacks. Affected companies thus strive to move towards distributed architectures and modern technologies.
This talk introduces you to the strangler fig pattern, which aids a smooth and step-wise migration of monolithic applications into separate services. The practical part shows how to apply this pattern to extract parts of a fictional monolith into its own service by featuring:
* Apache Kafka, the de-facto standard for event streaming
* MongoDB and its official connector for Apache Kafka
* plus Debezium, a distributed open-source change-data-capture platform
After this talk, you will have a better understanding of, and a concrete blueprint for, how to extract functionality from your monoliths, thereby gradually evolving into a (micro)service architecture and an en vogue tech stack.
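The routing half of the strangler fig pattern fits in a few lines: a facade sends already-extracted endpoints to the new service and everything else to the legacy monolith. A toy Python sketch (the `/orders` prefix and function names are invented for illustration; in practice the routing lives in a proxy or API gateway, with Debezium and Kafka keeping the two sides' data in sync):

```python
def legacy_monolith(path: str) -> str:
    # Stand-in for the existing monolithic application
    return f"monolith handled {path}"

def orders_service(path: str) -> str:
    # Stand-in for a service already extracted from the monolith
    return f"orders-service handled {path}"

# Endpoints that have been "strangled" out so far; grows migration by migration
EXTRACTED_PREFIXES = ("/orders",)

def facade(path: str) -> str:
    """Route extracted endpoints to the new service, the rest to the monolith."""
    if path.startswith(EXTRACTED_PREFIXES):
        return orders_service(path)
    return legacy_monolith(path)

print(facade("/orders/42"))    # orders-service handled /orders/42
print(facade("/customers/7"))  # monolith handled /customers/7
```

The monolith keeps shrinking as prefixes move into `EXTRACTED_PREFIXES`, until it can be retired entirely.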
Ranch, Caesar or Olive Oil? Different dressings for your SIP salad with Janus! An overview of the different plugins in Janus (existing and WIP) that help with SIP needs, presented at the OpenSIPS Summit 2017 in Amsterdam.
Exactly-Once Semantics Revisited: Distributed Transactions across Flink and K... (Hosted by Confluent)
Apache Flink’s Exactly-Once Semantics (EOS) integration for writing to Apache Kafka has several pitfalls, due mostly to the fact that the Kafka transaction protocol was not originally designed with distributed transactions in mind. The integration uses Java reflection hacks as a workaround, and the solution can still result in data loss under certain scenarios. Can we do better?
In this session, you’ll see how the Flink and Kafka communities are uniting to tackle these long-standing technical debts. We’ll introduce the basics of how Flink achieves EOS with external systems and explore the common hurdles that are encountered when implementing distributed transactions. Then we’ll dive into the details of the proposed changes to both the Kafka transaction protocol and Flink transaction coordination that seek to provide a more robust integration.
By the end of the talk, you’ll know the unique challenges of EOS with Flink and Kafka and the improvements you can expect across both projects.
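The core difficulty can be illustrated with a toy: after a crash, a retrying producer may resend a write, so an exactly-once sink must deduplicate by producer identity and sequence number. A minimal Python sketch of that idea (illustrative only, not the actual Kafka transaction protocol; all names are invented):

```python
class IdempotentSink:
    """Toy sink that drops replayed writes, keyed by (producer_id, sequence)."""

    def __init__(self):
        self.seen = set()    # (producer_id, seq) pairs already applied
        self.values = []     # the effectively-once output

    def write(self, producer_id: str, seq: int, value: str) -> bool:
        key = (producer_id, seq)
        if key in self.seen:         # replay after a retry or failover: drop it
            return False
        self.seen.add(key)
        self.values.append(value)
        return True

sink = IdempotentSink()
sink.write("flink-job-1", 0, "a")
sink.write("flink-job-1", 1, "b")
sink.write("flink-job-1", 1, "b")    # retried after a crash: deduplicated
print(sink.values)  # ['a', 'b']
```

Kafka's idempotent producer applies this same dedup per partition; the transaction protocol extends it so that writes across several partitions commit or abort together, which is exactly where the Flink integration gets complicated.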
SIP transfer with Janus/WebRTC @ OpenSIPS 2022 (Lorenzo Miniero)
These are the slides I presented at the OpenSIPS Summit 2022, where I talked about support for SIP call transfer and multiple lines in Janus, to make those features available to SIP-unaware WebRTC endpoints easily. The presentation also included a few details on a practical interaction with OpenSIPS instances.
Slides for the "Bandwidth Estimation in the Janus WebRTC Server" presentation I made at the new RTC.ON event in Krakow. It covers my journey in BWE, starting from the existing options, up to the decision to start from scratch and build a new approach, with a Janus-based testbed for simulcast subscribers.
Increasingly, organizations are relying on Kafka for mission critical use-cases where high availability and fast recovery times are essential. In particular, enterprise operators need the ability to quickly migrate applications between clusters in order to maintain business continuity during outages. In many cases, out-of-order or missing records are entirely unacceptable. MirrorMaker is a popular tool for replicating topics between clusters, but it has proven inadequate for these enterprise multi-cluster environments. Here we present MirrorMaker 2.0, an upcoming all-new replication engine designed specifically to provide disaster recovery and high availability for Kafka. We describe various replication topologies and recovery strategies using MirrorMaker 2.0 and associated tooling.
Let's dive under the hood of Java network applications. We will take a deep look at classic sockets and NIO, with live coding examples. Then we discuss the performance problems of sockets and find out how NIO can help us handle 10,000+ connections in a single thread. Finally, we learn how to build a high-load application server using Netty.
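The selector-based, single-threaded model the talk demonstrates with Java NIO can be sketched with Python's standard-library `selectors` module, which wraps the same readiness-notification mechanism (epoll/kqueue). A minimal echo server plus demo client (illustrative, loopback-only; `serve_once` is a demo driver, not how a real server loop would terminate):

```python
import selectors
import socket

# One thread multiplexes many connections: the Selector tells us which
# sockets are ready, so no thread ever blocks on a single connection.
sel = selectors.DefaultSelector()
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))        # ephemeral port for the demo
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)
port = server.getsockname()[1]

def serve_once():
    """Process events until one client round-trip completes (demo driver)."""
    done = False
    while not done:
        for key, _ in sel.select(timeout=1):
            if key.fileobj is server:
                conn, _ = server.accept()        # new connection is ready
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                data = key.fileobj.recv(4096)    # readable without blocking
                if data:
                    key.fileobj.sendall(data)    # echo back
                else:
                    sel.unregister(key.fileobj)  # peer closed
                    key.fileobj.close()
                done = True

# Demo: one client connects and gets its bytes echoed back
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
serve_once()
print(client.recv(4096))  # b'ping'
```

Java NIO's `Selector.select()` / `SelectionKey` loop has exactly this shape; Netty then layers pipelines and buffer management on top of it.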
https://github.com/kslisenko/java-networking
Cassandra Data Modeling - Practical Considerations @ Netflix (nkorla1share)
The Cassandra community has consistently requested that we cover C* schema design concepts. This presentation goes in depth on the following topics:
- Schema design
- Best Practices
- Capacity Planning
- Real World Examples
Slides for the presentation I made at ClueCon 21 on the experimental RED support in WebRTC, and how we've started tinkering with it in Janus. The presentation also gives a more general overview of audio features in WebRTC.
Big Data, Data Lake, Fast Data - Data Serialization Formats (Guido Schmutz)
The concept of the "Data Lake" is on everyone's mind today. The idea of storing all the data that accumulates in a company in a central location and making it available sounds very attractive at first. But a data lake can quickly turn from a clear, beautiful mountain lake into a murky pond, especially if it is carelessly filled with all the source data formats common in today's enterprises, such as XML, JSON, CSV or unstructured text. Who, after some time, still has an overview of which data exist, in which formats, and how they have evolved across versions? Anyone who wants to draw from the data lake has to ask the same questions over and over again: what information is provided, what data types does it have, and how has the content changed over time?
Data serialization frameworks such as Apache Avro and Google Protocol Buffers (Protobuf), which enable platform-independent data modeling and data storage, can help. This talk discusses the possibilities of Avro and Protobuf, shows how they can be used in the context of a data lake, and what advantages can be achieved. Support for Avro and Protobuf in Big Data and Fast Data platforms is also covered.
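The key benefit both frameworks provide, schema evolution, can be illustrated without the libraries themselves: a reader schema with defaults can still decode records written under an older schema. A toy Python sketch of Avro-style resolution semantics (`READER_SCHEMA` and the field names are invented for this example; real deployments would use the avro or protobuf libraries and a schema registry):

```python
# Reader schema: "country" was added in v2 with a default, so records
# written with the v1 schema (which lacks it) remain readable.
READER_SCHEMA = {
    "fields": {
        "id": {"type": int},                         # present since v1
        "name": {"type": str},                       # present since v1
        "country": {"type": str, "default": "n/a"},  # added in v2
    }
}

def read_record(raw: dict, schema: dict) -> dict:
    """Decode a record against the reader schema, filling in defaults."""
    out = {}
    for field, spec in schema["fields"].items():
        if field in raw:
            out[field] = raw[field]
        elif "default" in spec:
            out[field] = spec["default"]   # schema evolution at work
        else:
            raise ValueError(f"missing required field {field}")
    return out

old = {"id": 1, "name": "Ada"}                     # written with v1 schema
new = {"id": 2, "name": "Grace", "country": "US"}  # written with v2 schema
print(read_record(old, READER_SCHEMA))  # {'id': 1, 'name': 'Ada', 'country': 'n/a'}
print(read_record(new, READER_SCHEMA))
```

This is exactly what keeps years of heterogeneous files in a data lake decodable: the schema travels with (or is registered for) the data, and evolution rules bridge old and new versions.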
(BDT318) How Netflix Handles Up To 8 Million Events Per Second (Amazon Web Services)
In this session, Netflix provides an overview of Keystone, their new data pipeline. The session covers how Netflix migrated from Suro to Keystone, including the reasons behind the transition and the challenge of zero loss while processing over 400 billion events daily. It details how they deploy, operate, and scale Kafka, Samza, Docker, and Apache Mesos in AWS to manage 8 million events and 17 GB per second at peak.
Software Heritage: Building the Universal Software Archive, OW2con'16, Paris (OW2)
The goal of the Software Heritage project is to collect, preserve, and share all publicly available software in source code form. Forever.
By doing so Software Heritage will serve the needs of: Society, by preserving our collective technological heritage; Industry, by building the largest software provenance open database; Science, by assembling the largest curated archive for software research; and Education, by creating the ultimate anthology for programming curricula.
Although still in Beta, Software Heritage has already archived more than 2.5 billion unique source code files and 600 million unique commits, spanning more than 20 million projects from major software development hubs, GNU/Linux distributions, and upstream software collections.
Software Heritage is developed transparently as a collaborative project and all its own source code is available as Free/Open Source Software. Currently incubated by Inria, the project will graduate soon to an independent charitable, nonprofit organization.
Software Heritage: Archiving the Free Software Commons for Fun & Profit (Speck&Tech)
ABSTRACT: The ambition of the Software Heritage project is to collect, preserve, and share the entire body of free software that is published on the Internet in source code form, together with its development history. Since its public announcement in 2016, the project has assembled the largest collection of freely available software source code, with about 5 billion unique source code files and 1 billion commits coming from more than 80 million projects.
Initially focused on the collection and preservation goals - which were at the time urgent, due to the recurrent disappearances of development forges - Software Heritage has since rolled out several mechanisms to peruse its archive, making progress on the sharing goal.
In this talk, we will review the status of the Software Heritage project, emphasizing how users and developers can, today, benefit from the availability of a great public library of source code.
BIO: Stefano Zacchiroli is Associate Professor of Computer Science at University Paris Diderot, on leave at Inria. His research interests span formal methods, software preservation, and Free/Open Source Software engineering. He is co-founder and current CTO of the Software Heritage project. He has been an official member of the Debian Project since 2001, and was elected Debian Project Leader for 3 consecutive terms over the period 2010-2013. He is a former Board Director of the Open Source Initiative (OSI) and recipient of the 2015 O'Reilly Open Source Award.
Blockchain and Dapps Meetup Introduction (Hu Kenneth)
We are blockchain enthusiasts who run this meetup on a regular basis, and we hope to continue to share everything about blockchain and Dapps with you.
There are many resources in our group, which you can find by joining our Telegram group; please let me know if you are having trouble locating us on the app.
Mockito - how a mocking library built a real community, August Penguin 2017 (Allon Mureinik)
Mockito is one of the best known mocking frameworks for Java, but its greatest feature has to be its awesome community. In this session, I shared the story of how I turned from a Mockito user into a Mockito contributor, and how great open source projects don't just wait for their communities to magically form, but actively encourage them.
Crab - A Python Framework for Building Recommendation Systems (Marcel Caraciolo)
Keynote introducing the Crab framework: a Python toolkit for building recommendation engines. It is an open source project, an alternative to Mahout Taste for Python developers.
Presented at XII Python User Group Pernambuco, 07-05-2011 at CIN/UFPE.
Software Heritage: let's build together the universal archive of our software... (Codemotion)
Free/Open Source Software is now everywhere, but the risk of losing forever some of it is growing. Shutdowns of once popular forges are early warnings that we should not underestimate. How many million lines of code would we lose if development hubs that are hype today were to disappear 20 years from now? This talk will present Software Heritage, whose aim is to collect, preserve, and share all publicly available source code. Forever. Software Heritage has already archived 3 billion distinct source code files and 650 million commits, spanning more than 25 million development projects.
Minou Minou! Chat(bot)s continue their invasion of the Internet (Maxime Pawlak)
- An introduction to chatbots
- How they work
- Why they are the future of interfaces
- Deploying your own chatbot in 15 minutes with DialogFlow
Talk given at GDG Toulouse on 12/04/2018.
Slides source: https://github.com/maximepawlakfr/talk-minou-minou/
GDG meetup: https://www.meetup.com/GDG-Toulouse/events/247981837/
Applying the precepts of the Art of War will give you perspective on the problem of security in WordPress environments, and an understanding of how to act effectively when something bad happens to your website or e-commerce store.
Presenting the way a WordPress site is usually hacked, the layer-based model of security, and some examples I gathered during my years at Sucuri and GoDaddy Security, I'll try to make you aware of this problem, give some examples of what could happen and how, and provide some countermeasures to prevent it wherever possible.
Similar to "Welcome to JanusCon! -- Past, Present and Future of Janus"
WebRTC and SIP not just audio and video @ OpenSIPS 2024 (Lorenzo Miniero)
Slides for my "WebRTC-to-SIP and back: it's not all about audio and video" presentation at the OpenSIPS Summit 2024.
They describe my prototype efforts to add gatewaying support for a few SIP application protocols (T.140 for real-time text and MSRP) to Janus via data channels, with the related implementation challenges and the interesting opportunities they open.
Slides for my "Am I sober or am I trunk? A Janus story" presentation at Kamailio World 2024.
They describe my prototype efforts to add an option to create a trunk between a Janus instance and a SIP server, with the related implementation challenges and the interesting opportunities it opens.
Getting AV1/SVC to work in the Janus WebRTC Server (Lorenzo Miniero)
Slides for the "Getting AV1/SVC to work in the Janus WebRTC Server" presentation I made at the Real-Time Communications devroom of FOSDEM 2024 in Brussels. It describes in detail how AV1 is used in real-time communications (e.g., RTP packetization rules) and how the Dependency Descriptor extension allows SVC to be used in a server, sharing my experience integrating it in the Janus WebRTC Server.
Slides for the "WebRTC broadcasting: standardization, challenges and opportunities" presentation I made at TADSummit 2023 in Paris. It presents the problems traditional broadcasting has with new scenarios that would benefit from a much lower latency solution, and how WebRTC can help. It also introduces the standard WHIP and WHEP protocols for ingestion and egress, with a few details on how a WebRTC stream could be scaled to a very wide audience using something like SOLEIL (Streaming Of Large scale Events over Internet cLouds).
The challenges of hybrid meetings @ CommCon 2023 (Lorenzo Miniero)
Slides for "The challenges of hybrid meetings" presentation I made at CommCon 2023. It covers how we provided remote participation services to live events before the pandemic, how we had to refactor everything for virtual-only events, and what had to be changed again to accommodate audiences that may be evenly split between local and remote participants, with IETF meetings as a practical test case.
Real-Time Text and WebRTC @ Kamailio World 2023 - Lorenzo Miniero
Slides for my "Bringing real-time text to WebRTC for NG Emergency Services" presentation at Kamailio World 2023.
They describe my prototype efforts to get SIP-based T.140 Real-Time Text to work with WebRTC endpoints via data channels, thanks to Janus acting as a gateway for the purpose.
Slides I presented in the Open Media devroom at FOSDEM 2023, where I gave an intro on how to capture, record and produce music using just open source software on Linux. It's a very high-level overview of the available software for different tasks, and of how they can be used together via JACK and/or PipeWire.
These are the slides for the presentation I shared at the virtual edition of IIT-RTC 2022. I talked about how cascading/scalability worked with Janus 0.x, and what steps we've taken to do the same for 1.x (multistream) as well. In particular, the focus is on the new integrated cascading support in the VideoRoom plugin.
Slides for the talk I made at the virtual edition of FOSDEM 2022, on how to use WHIP for WebRTC broadcasting ingestion, and how the distribution process could be done via WebRTC as well, e.g., via Janus (and the SOLEIL architecture).
Slides for the talk I made at IIT-RTC 2021 about WHIP (WebRTC-HTTP ingestion protocol) and how it can help foster adoption of WebRTC in traditional broadcasting tools. The slides also cover my open source implementations of WHIP server (based on Janus) and WHIP client (based on GStreamer), and interoperability tests with other implementations.
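At its core, the WHIP exchange discussed in these slides is a single HTTP POST carrying an SDP offer, answered with a 201 response containing the SDP answer and a Location header identifying the session resource. Below is a minimal, hypothetical sketch of building that request in Python; the endpoint URL and bearer token are made-up placeholders, not the actual API of the Janus-based WHIP server mentioned above.

```python
# Minimal sketch of the HTTP side of a WHIP ingestion request.
# The endpoint URL and bearer token are hypothetical placeholders.
import urllib.request


def build_whip_request(endpoint: str, sdp_offer: str, token: str = "") -> urllib.request.Request:
    """Build the POST that starts a WHIP session: SDP offer in the body,
    SDP answer expected in the 201 response (plus a Location header)."""
    headers = {"Content-Type": "application/sdp"}  # content type mandated by WHIP
    if token:
        headers["Authorization"] = f"Bearer {token}"  # optional bearer-token auth
    return urllib.request.Request(
        endpoint,
        data=sdp_offer.encode("utf-8"),
        headers=headers,
        method="POST",
    )


# Example request (not sent anywhere here):
req = build_whip_request("https://whip.example.com/endpoint/abc", "v=0\r\n...", "secret")
```

A real client would pass this request to `urllib.request.urlopen()`, read the SDP answer from the response body, and later send a DELETE to the URL from the Location header to tear the session down.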
These slides cover a workshop called "Having fun with Janus and WebRTC" at the virtual edition of OpenSIPS 2021. The workshop guided viewers through how they could use different features in Janus to build a WebRTC Social TV application, including how to write a new plugin in JavaScript to build a virtual remote.
Just a few slides to talk about the first efforts on JamRTC, a native application based on GStreamer to do live jam sessions using WebRTC and Janus as an SFU. Mostly an overview of the initial architecture, with questions at the end to figure out if the approach is right or not, how to minimize latency, etc.
Scaling WebRTC deployments with multicast @ IETF 110 MBONED - Lorenzo Miniero
An overview of how multicast can be used to scale WebRTC deployments, with focus on the Virtual Event Platform used to provide remote participation support to IETF meetings, given during the MBONED WG session at IETF 110.
Slides for the 60 minutes "part 2" Janus workshop I presented at the virtual edition of ClueCon 2021. This time the slides covered Janus's ability to bridge WebRTC and non-WebRTC applications to do interesting things, especially with the help of plain RTP and RTP forwarders. Check the conference recordings to see the actual demos in action.
Virtual IETF meetings with WebRTC @ IETF 109 MOPS - Lorenzo Miniero
An overview of how the Janus WebRTC Server was used to serve virtual IETF meetings at scale, with focus on how audio and video streams were handled in different ways, given during the MOPS WG session at IETF 109. Some considerations on specific enhancements made between IETF 108 and 109 are provided as well.
Slides for the presentation on how you can get SFUs and MCUs to actually be friends, which I presented at the virtual edition of IIT-RTC 2020. The slides cover some of the pros and cons of both, and some use cases where you may actually want to use both. At the end, a few words are spent on how to use browsers as an MCU instead, which might make using them with SFUs even easier.
Slides for the presentation on Insertable Streams and E2EE in WebRTC I presented at the virtual edition of ClueCon 2020. After an introduction on the past and recent E2EE efforts, the slides also present some efforts to integrate the technology in the Janus WebRTC server as well.
Slides for the 60 minutes workshop I presented at the virtual edition of ClueCon 2020 (ClueCon Deconstructed). The many slides cover different aspects in Janus, ranging from configuration, to plugins, how to write your own plugin, core features, recording, monitoring, and so on. Unfortunately I didn't have enough time to talk about everything, but slides should be easy to follow anyway.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Kubernetes & AI - Beauty and the Beast !?! @ KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already got working for real.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
46. Fuzzing the hell out of Janus!
https://webrtchacks.com/fuzzing-janus/ (February 2019)
47. Improving simulcast at the IETF hackathon
https://www.meetecho.com/blog/simulcast-janus-ssrc/ (March 2019)
48. Scripting has never been easier!
https://github.com/meetecho/janus-gateway/pull/1647 (v0.7.3, August 2019)
49. A ton of scenarios done today with Janus!
• SIP and RTSP gatewaying
• WebRTC-based call/contact centers
• Conferencing & collaboration
• E-learning & webinars
• Cloud platforms
• Media production
• Broadcasting & Gaming
• Identity verification
• Internet of Things
• Augmented/Virtual Reality
• ...and more!