Transforming media workloads into Visual Cloud services, and retaining the flexibility to migrate those services to new hardware platforms, poses real challenges. Learn how Intel and partners are solving these challenges with highly optimized, cloud-native media processing, media analytics, and graphics/rendering components that quickly and easily deliver end-to-end visual cloud services built on scalable open source software. Two visual cloud services, one for media delivery and one for media analytics, will be demonstrated to showcase how to enable faster time to market for innovative “new media” services.
WPEWebKit, the WebKit port for embedded platforms (Linaro Connect San Diego 2019) - Igalia
By Philippe Normand.
WPEWebKit[1] is a WebKit flavor (also known as a port) specially crafted for embedded platforms and use cases. In this talk I will present WPEWebKit's architecture, with special emphasis on its multimedia backend, which is based on GStreamer[2] and implements support for the MSE[3], EME[4], and MediaCapabilities specifications. I will also present a case study on how to successfully integrate WPEWebKit on i.MX6 and i.MX8M platforms, either with the Cog[5] standalone reference web-app container or within existing Qt5 applications using the WPEQt QML plugin.
[1] https://wpewebkit.org
[2] https://gstreamer.freedesktop.org
[3] https://www.w3.org/TR/media-source/
[4] https://www.w3.org/TR/encrypted-media/
[5] https://github.com/Igalia/cog
Linaro Connect San Diego 2019
September 23-27, 2019
https://connect.linaro.org/resources/san19/
Reaching the multimedia web from embedded platforms with WPEWebKit - Igalia
Nowadays the Web is one of the primary ways multimedia content is consumed and real-time communication happens (through WebRTC). In this talk Philippe will present the WPEWebKit web engine, which has been deployed on a wide range of embedded platforms, and show how you can add it to your own Linux-based embedded device. WPEWebKit is the official WebKit upstream port for embedded platforms.
For multimedia playback and real-time communication it heavily relies on the
GStreamer multimedia framework. Philippe will give an overview of the W3C
specifications supported by WPEWebKit. WPEWebKit products have been deployed in
various embedded environments and hardware platforms. Philippe will focus on
i.MX platforms, outlining the steps required to enable WPEWebKit in Yocto-based
BSPs. WPEWebKit can also be used in innovative server-side ways, such as dynamic HTML/JS/CSS-powered video overlaying. Philippe will present this use case, detailing how live video streams can be augmented with overlays. GstWPE is a GStreamer plugin embedding a WPEWebKit WebView, allowing a live audio/video representation of any Web page to be injected into a GStreamer pipeline. Both GPU-based hardware-accelerated and software rasteriser runtimes are supported.
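As a sketch of the GstWPE use case, assuming PyGObject and a GStreamer build that ships the gstwpe plugin are installed, a pipeline embedding a web page as a live video source could look like this (the URL is a placeholder):

```python
# Sketch: inject a live rendering of a web page into a GStreamer pipeline
# using the wpesrc element provided by GstWPE. Assumes PyGObject and a
# GStreamer build with the gstwpe plugin are available at runtime.
PIPELINE = (
    "wpesrc location=https://wpewebkit.org draw-background=0 "
    "! videoconvert ! autovideosink"
)

def run(pipeline_description: str = PIPELINE) -> None:
    """Parse and start the pipeline (requires PyGObject + GStreamer)."""
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    Gst.init(None)
    pipeline = Gst.parse_launch(pipeline_description)
    pipeline.set_state(Gst.State.PLAYING)
```

The same pipeline description can also be tried with `gst-launch-1.0` for quick experiments.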
Embedded Linux Conference - North America (ELC-NA 2021)
September 27-30, 2021
Hyatt Regency Seattle | Seattle, Washington + Virtual
https://events.linuxfoundation.org/embedded-linux-conference-north-america/
Watch the webinar here: https://codefresh.io/docker-based-pipelines-with-codefresh/
Most people think that Docker adoption means deploying Docker images. In this webinar, we will see an alternative way of adopting Docker in a continuous integration pipeline: packaging all build tools inside Docker containers. This makes it very easy to use different tool versions in the same build and puts an end to version conflicts on build machines. We will use Codefresh as the CI/CD solution, as it fully supports pipelines where each build step runs in its own container image.
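A pipeline of the kind described might be sketched in Codefresh YAML roughly as follows; the step names, image and commands are illustrative assumptions, not taken from the webinar:

```yaml
version: "1.0"
steps:
  unit_tests:
    title: Run tests inside a pinned toolchain container
    image: node:18        # any tool image; nothing installed on the build machine
    commands:
      - npm ci
      - npm test
  build_image:
    title: Build the application image
    type: build
    image_name: my-org/my-app
    tag: ${{CF_SHORT_REVISION}}
```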
Sign up for a FREE Codefresh account (120 builds/month) at Codefresh.io/codefresh-signup
The CNCF ecosystem is large and diverse, and it continues to grow. CNCF would like to ensure cross-project interoperability and cross-cloud deployments of all cloud native technologies, and to show the daily status of builds and deployments on a status dashboard. Cross Cloud CI addresses this need.
This year OWASP Juice Shop saw several significant enhancements and extensions that you will learn all about in this talk: 2x NoSQL injection and 2x typosquatting challenges! Customization and re-branding of the shop to your own corporate look & feel! The Juice Shop CTF extension makes setting up hacking events fast & easy! The free "Pwning the OWASP Juice Shop" eBook now surpasses 150 pages of in-depth information, hints and solutions for all challenges, and more! At AppSec EU the project was promoted into OWASP's "Lab Projects" maturity stage! You can now 3D-print your own Juice Shop merchandise! And much, much more - actually more than can be demonstrated in this 15-minute session, so best install the Juice Shop afterwards and explore its capabilities yourself!
Serverless is currently the talk of the town and is enjoying increasing popularity. What does serverless actually mean, what are its characteristics and when do you prefer the use of serverless technologies to a container-based solution?
With the Fn project, Oracle now has an open source serverless platform that can run in the cloud, in your own data center or on a developer's local computer. This distinguishes the solution from other serverless platforms on the market. The Fn project is developed by the same team that previously implemented IronFunctions. The framework is based on Docker and currently does not require a managed runtime environment. But is it still serverless?
The session explains basic serverless concepts, benefits and deployment scenarios of the platform-independent Fn Serverless framework.
OpenCL source code is split into “host source code” (a C file) and “kernel (device) source code” (a .cl file).
However, an Android APK cannot bundle the kernel (device) source code as separate CL files.
For this case, I introduce the "OpenCL CL files header Generator". It converts CL files into const char* definitions in a single C header file.
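The idea behind such a generator can be sketched in a few lines of Python; the function name and include-guard macro below are illustrative, not the actual tool's output format:

```python
# Minimal sketch of a CL-to-header generator: embed OpenCL kernel source
# into the APK by converting each .cl file's contents into a const char*
# definition in a single C header.
def cl_to_header(sources: dict) -> str:
    """Turn {identifier: kernel_source} into one C header string."""
    lines = ["#ifndef CL_KERNELS_H", "#define CL_KERNELS_H", ""]
    for name, src in sources.items():
        # Escape backslashes and quotes so the source is a valid C literal.
        escaped = src.replace("\\", "\\\\").replace('"', '\\"')
        body = "\\n\"\n    \"".join(escaped.splitlines())
        lines.append(f'static const char *{name} =\n    "{body}\\n";')
        lines.append("")
    lines.append("#endif  /* CL_KERNELS_H */")
    return "\n".join(lines)
```

The generated header can then be compiled into the APK's native library, with the kernel strings passed to `clCreateProgramWithSource()`.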
OWASP Juice Shop: Achieving sustainability for open source projects - Björn Kimminich
OWASP Juice Shop is a "shooting star" among broken web applications. To make sure it does not end as a "one-hit wonder", the project embraces principles and techniques that enhance its sustainability, e.g. Clean Code, TDD, CI/CD, Quality Metrics and Mutation Testing.
In this session you will see how
- even a horrible language such as JavaScript can be written in a maintainable manner
- a complete and reliable test suite eliminates the "fear of change"
- automation is a key to increased productivity - even for small open source projects
- free-for-open-source SaaS tools can improve your development process
Where is light, there is shadow! You will also learn
- about some limitations in the automation processes
- the pain of keeping JavaScript dependencies up to date
- why some 3rd party services had to be dropped
If the Internet gods are with us, we will even perform a production release of OWASP Juice Shop during the session!
PuppetConf 2016: Using Puppet with Kubernetes and OpenShift - Diane Mueller, Daniel Dreier (Puppet)
Here are the slides from Diane Mueller and Daniel Dreier's PuppetConf 2016 presentation called Using Puppet with Kubernetes and OpenShift. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
2017 - budapest.mobile meetup @ Budapest
I have been an Android developer for 5 years now. At the beginning of this year I stepped out of my comfort zone and tried out React Native.
After releasing 4 different kinds of RN projects, I would like to share my experience and give you some tips so you won't feel like you are fighting for your life when using RN on Android or iOS. So what does my "survival kit" contain? A lot of info on project structure and setup, libraries, tools for debugging, best practices, fastlane, push notifications and much more.
This is a “How to Build” manual for OpenCV with OpenCL on Android.
If you only want to use OpenCL with OpenCV, please see
http://github.com/noritsuna/OpenCVwithOpenCL4AndroidNDKSample
Continuous Code Quality with the Sonar Ecosystem @GeeCON 2017 in Prague - Roman Pickl
Continuous Code Quality with the Sonar Ecosystem
SonarQube is the leading platform for static code analysis and continuous code quality. In this talk we will look into all three lines of defense of the Sonar ecosystem and how they can help to find bugs before they enter your codebase (or at least before they go into production). After this talk, you’ll have a good overview of the Sonar ecosystem as well as actionable starting points for increasing your code quality. Furthermore, we will share lessons learned from using SonarQube for more than 4 years and pointers to additional resources.
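To make the starting point concrete, a minimal `sonar-project.properties` for a scanner run might look like this; the project key, paths and server URL are placeholders:

```properties
# Minimal SonarQube scanner configuration (values are placeholders)
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.sources=src
sonar.tests=test
sonar.host.url=http://localhost:9000
```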
Roman Pickl
As Chief Technical Officer, Roman is in charge of the technical development at Fluidtime. He has comprehensive experience in project management, the technical coordination of national and international mobility projects and the optimisation of business and development processes. Roman Pickl studied business management and commercial information technology at the Vienna University of Economics and Business and the University of Technology, Sydney, as well as software engineering at the University of Applied Sciences Technikum Wien. There he specialised in the fields of entrepreneurship & innovation management, project & process management and information management as well as software evolution and mobile computing.
Chicago Docker Meetup Presentation - Mediafly
Bryan Murphy's presentation from the 2nd Chicago Docker meetup on March 12, 2014 at Mediafly HQ. In his presentation, Bryan explains how Mediafly currently uses Docker in production.
When to use Serverless? When to use Kubernetes? - Niklas Heidloff
Slides of a session that I have given/will give at various developer conferences in H1 2018.
Niklas Heidloff
http://twitter.com/nheidloff
http://heidloff.net
Summary Article
http://heidloff.net/article/when-to-use-serverless-kubernetes
OpenWhisk
https://openwhisk.apache.org
https://github.com/ibm-functions/composer
https://github.com/nheidloff/openwhisk-debug-nodejs
Kubernetes
https://kubernetes.io
https://istio.io
IBM Cloud
http://ibm.biz/nheidloff
Abstract
There is a lot of debate about whether to use Serverless or Kubernetes to build cloud-native apps. Both have their advantages and unique capabilities which developers should take into consideration when planning new projects. We will shed some light on ease of use, maturity, types of scenarios, developer productivity and debugging, supported languages, DevOps and monitoring, performance, community and pricing. Cloud-native architectures shift complexity from within an application to the orchestration of microservices. Both Kubernetes and Serverless have their strengths, which we will discuss. Besides the core development topics, developers should also understand operational aspects, such as how complicated it is to maintain your own systems versus using managed platforms.
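Since OpenWhisk is linked above, a minimal Python action gives a taste of the serverless programming model under discussion; the greeting logic is purely illustrative:

```python
# Minimal Apache OpenWhisk Python action: the platform invokes main() with
# the invocation parameters and expects a JSON-serializable dict back.
def main(args):
    name = args.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}
```

Deployed with the `wsk` CLI, the platform handles scaling and per-invocation billing; there is no server or pod for the developer to manage.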
DCC Labs provides DVB compliant middleware and other embedded software for Set-Top Boxes and digital TV devices. We specialize in small footprint, optimised performance applications running under Linux, OS20, OS21 and similar operating systems.
apidays LIVE Paris 2021 - APIGEE, different ways for integrating with CI/CD pipelines - apidays
apidays LIVE Paris 2021 - APIs and the Future of Software
December 7, 8 & 9, 2021
APIGEE, different ways for integrating with CI/CD pipelines
Nejmeddine Ben Ouarred, Head Of API Practice at Sfeir
Avoid the Vendor Lock-in Trap (with App Deployment) - Peter Bittner
There is no such thing as "marriage" in business. When you're not happy with the service or pricing, you move on. But at what price? Switching a technology is hard; switching a platform is harder! Simply follow a set of principles and techniques to ensure your freedom and agility.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses... - Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
In this video from the Blue Waters 2018 Symposium, Maxim Belkin presents a tutorial on Containers: Shifter and Singularity on Blue Waters.
Container solutions are a great way to seamlessly execute code on a variety of platforms. Not only are they used to abstract away the software stack of the underlying operating system, they also enable reproducible computational research. In this mini-tutorial, I will review the process of working with Shifter and Singularity on Blue Waters.
Watch the video: https://wp.me/p3RLHQ-iXO
Learn more: https://bluewaters.ncsa.illinois.edu/blue-waters-symposium-2018
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Open Network Automation Platform (ONAP) is a leading Linux Foundation Networking open source project that provides fully automated orchestration and lifecycle management of NFV, SDN, analytics and edge computing services. While ONAP can be used for any network service, it is particularly beneficial for 5G and edge computing use cases. In this talk you will learn:
* What is ONAP
* What use cases does ONAP support
* What are the 5G/edge computing workload automation requirements
* How does ONAP support these requirements
* How can you get involved
Easing the Path to Network Transformation - Network Transformation Experience Kits - Liz Warner
Network transformation takes many forms: open platforms, virtualized infrastructure, containers and cloud native practices—and often a mix of any of these. Regardless of choice, the path to transformation typically requires new tools and new skills. Network Transformation Experience Kits provide a library of best-practice architecture and development guidelines addressing Industry needs in automation, interfaces standardization, security, resources management and more. These Experience Kits offer developers, technical leads, and other audiences a variety of materials needed to enable adoption of the new technologies and service-enabling capabilities needed for next-generation, open, agile and efficient networks. In this presentation, we will focus on containers technology to augment ease of use with high performance.
CNTT is a Converged NFVi Telco Taskforce set up to standardize Reference Implementations (RIs) and compliance, so that service providers can use common models, architectures and hence deployments for their workloads. Given the NFVi proliferation of the past, the limited set of RIs chosen are one based on OpenStack and another on Kubernetes. Tier 1 service providers and the participating Tier 2 & 3 providers in the CNTT community welcome it. The incumbent vendors and emerging players in disaggregated workloads are watching to see how they can participate without a downside to their bottom line. This is interesting, and early adopters will gain as they can plan to align their products and solutions in this space. The evolution of infrastructure demands these changes, as you will see in the next topic, Airship.
Airship is a set of tools and software that lets you elevate your infrastructure, especially if you are a Communication Service Provider. We bring you, from the Airship community, everything that has happened as part of rebuilding the data center infrastructure for the next generation. With multi-layer, multi-node, multi-cluster, multi-tenant, multi-cloud and hybrid-cloud deployments, the only way forward is cloud native, and that requires Open Infrastructure. This is followed by where (ONFV/ONAP/TUG) and how Airship will migrate from 1.0 to 2.0 to enable scaling functions anywhere. Thus Airship was born, and it is now moving toward overlay cluster deployments for workloads. It can be engineered to support any-over-any (OoK, KoK, AoA) deployments, and we need plugins and drivers for K8s to deliver all the features. This will be an engineering challenge as well as a mindset change, and it will take a few years to get to next-gen workloads beyond CNTT NFVi/VNFs & CNFs.
Your Path to Edge Computing - Akraino Edge Stack Update - Liz Warner
The Akraino community was proud to announce the availability of its release 1 on June 6th. The community has experienced extremely rapid growth over the past year, in terms of both membership and community activity. Before Akraino, developers had to download multiple open source software packages and integrate/test on deployable hardware, which prolonged innovation and increased cost. The Akraino community came up with a brilliant way to solve this integration challenge with the Blueprint model. An Akraino Blueprint is not just a diagram; it’s real code that brings everything together so users can download and deploy the edge stack in their own environment to address a specific edge use case. Learn more about the Akraino Edge Stack. In this talk, we will share details about R1 blueprints and their use, R2 goals, and how to engage and contribute to the Akraino Community.
Introduction to Tungsten Fabric and the vRouter - Liz Warner
Tungsten Fabric is an open source network virtualization solution for providing connectivity and security for virtual, containerized or bare-metal workloads. Savannah will cover the overall architecture of Tungsten Fabric and the DPDK vRouter, which performs packet forwarding and enforces network and security policies.
Introducing a connected vehicle blueprint, an Akraino (Linux Foundation) project. The presentation covers general background, application use cases, the network/technical/deployment architecture and the future plan.
ONAP and the K8s Ecosystem: A Converged Edge Application & Network Function P... - Liz Warner
The edge computing industry is increasingly using cloud technologies for seamless migration of workloads across edges and clouds. For seamless workload mobility, K8s is a key requirement for all CSPs. Also, K8s can be a good workload orchestrator for all deployment types (VMs, containers and functions). This panel will discuss existing work and novel ways of realizing a converged network function & edge computing application platform across distributed clouds using the extensibility of the K8s ecosystem. This work is currently happening in ONAP as part of the Edge Automation effort, and we see it as impactful to other open source efforts such as Akraino, the K8s Edge WG, etc.
Networks need to incorporate innovative and high-performance packet processing entities to meet the demands of meteoric rise in data coupled with advances in compute capacity and innovative apps. A fully programmable forwarding plane enables network owners to build the network they want and evolve it as the needs change. P4 is a domain specific language for networking and it empowers network builders to craft the functionality they need in a high-level programming language and execute it at line-rate on a variety of devices including the Barefoot Tofino series of Ethernet switches. This talk will give an overview of P4 and go over a couple of use-cases.
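To give a flavor of the language itself, here is a small illustrative P4_16-style fragment; the reduced parser signature is a simplification for illustration, not a complete, compilable program:

```
/* An Ethernet header type and a parser state: the kind of
   building block a P4 program is composed of. */
header ethernet_t {
    bit<48> dstAddr;
    bit<48> srcAddr;
    bit<16> etherType;
}

struct headers {
    ethernet_t ethernet;
}

parser MyParser(packet_in pkt, out headers hdr) {
    state start {
        pkt.extract(hdr.ethernet);
        transition accept;
    }
}
```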
Enabling the Deployment of Edge Services with the Open Network Edge Services Toolkit - Liz Warner
The Open Network Edge Services Toolkit (OpenNESS) is an open-source software toolkit that enables orchestration and management of edge services on a diverse range of platforms. This talk will present the problem statement that OpenNESS aims to solve, the use cases in which OpenNESS can be deployed, and a top-level description of its architecture.
Unleashing the Power of Fabric: Orchestrating New Performance Features for SR-IOV - Liz Warner
There are a lot of SR-IOV features that are not yet exposed to the cloud to make the best use of the underlying Ethernet fabric, and due to a lack of tooling in the kernel and OS these features could not be used by Virtual Network Function workloads. This presentation will explain the new NIC card features that SR-IOV workloads can use to get the best out of the fabric. We will also discuss the changes required in kernel-level drivers to expose those features so that cloud workloads can leverage them through OS APIs for orchestration. We will also demo one of the hardware features and go over its implementation details, including a development and test pipeline using Zuul v3.
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra... - Liz Warner
The transformation of network softwarization towards 5G inherently requires satisfying requirements across a broad scope of verticals while maintaining the Quality of Service (QoS) and Quality of Experience (QoE) criteria required by various network slice constraints. This session with a hands-on lab introduces the 3 key elements of service assurance (monitoring, presentation and provisioning layers) and various cloud-native open source frameworks such as collectd, InfluxDB, Grafana, Prometheus, Kafka and the Platform for Network Data Analytics (PNDA).
Closed-Loop Platform Automation by Tong Zhong and Emma Collins - Liz Warner
Closed-loop automation would dramatically help with the network transformation which is central to our business. Building a general analytics workflow to support various use cases (such as power management, fault prediction, networking slicing, etc.) is a critical component in the overall platform.
Closed-Loop Network Automation for Optimal Resource Allocation via Reinforcement Learning - Liz Warner
In this talk, we present a closed-loop automation approach that dynamically adjusts LLC cache allocation (via Intel RDT) between high-priority VNFs and best-effort workloads using reinforcement learning. The results demonstrated improved server utilization while maintaining the required service level agreement for high-priority VNFs.
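The closed-loop idea can be sketched as a toy epsilon-greedy bandit; the action set, reward model and SLA threshold below are purely illustrative stand-ins for real RDT programming (e.g. via resctrl) and live telemetry:

```python
# Toy sketch: an epsilon-greedy bandit picks how many LLC cache ways to
# dedicate to high-priority VNFs, rewarding choices that keep a (simulated)
# SLA while leaving ways free for best-effort work.
import random

ACTIONS = [4, 6, 8, 10]  # candidate cache-way allocations for the VNFs

def reward(ways: int) -> float:
    # Hypothetical model: the SLA is met from 8 ways up; every way taken
    # from best-effort work costs a small penalty.
    return (1.0 if ways >= 8 else 0.0) - 0.05 * ways

def train(steps: int = 2000, eps: float = 0.1, seed: int = 0) -> int:
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if rng.random() < eps:            # explore
            a = rng.choice(ACTIONS)
        else:                             # exploit current best estimate
            a = max(ACTIONS, key=lambda x: value[x])
        count[a] += 1
        value[a] += (reward(a) - value[a]) / count[a]  # running mean
    return max(ACTIONS, key=lambda x: value[x])
```

Under this toy reward the loop settles on 8 ways: the smallest allocation that meets the simulated SLA.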
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
Advanced Flow Concepts Every Developer Should KnowPeter Caitens
Tim Combridge from Sensible Giraffe and Salesforce Ben presents some important tips that all developers should know when dealing with Flows in Salesforce.
Cyaniclab : Software Development Agency Portfolio.pdfCyanic lab
CyanicLab, an offshore custom software development company based in Sweden,India, Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
6. Accelerating Services Innovation for Visual Cloud (Open Visual Cloud Project)
FOR MORE INFORMATION VISIT: https://01.org/openvisualcloud
Supported frameworks and toolkits: FFmpeg, GStreamer, TensorFlow*, MXNet*, Caffe*, OpenCL*; Intel® OpenVINO™ Toolkit; Intel® Rendering Framework
* Targeted for open source in 2H’2019
7. SOFTWARE: CONVERGE THE WORKLOADS
*Other names and brands may be claimed as the property of others.
1. Proven dataplane acceleration technologies in network platforms
2. Integration of analytics, media, and networking SW technologies to ease developer adoption and programmability
3. Leveraging and contributing to industry-standard interfaces and open source software
USE RICH AND FLEXIBLE SOFTWARE FRAMEWORKS FOR FASTER CUSTOMER SOLUTION READINESS & DEPLOYMENTS
[Diagram: application workload convergence on network platforms. From top to bottom: Services (IoT verticals, comms, cloud, enterprises); developer edge frameworks (e.g. AWS*, Azure*, Baidu*, Alibaba*); application & service orchestration/virtualization; workloads such as Open Visual Cloud, Network Edge SW, RAN SW (e.g. ADK, FlexRAN), and Network Functions (e.g. CDN, EPC); industry-standard interfaces for an efficient, programmable, scalable data plane (e.g. DPDK, Open vSwitch); all running on Intel® Xeon™, Atom™, and Core™ processors, Intel® FPGA, Intel® Movidius™ VPU, Intel® Ethernet Controller, and Intel® Optane™ DC Persistent Memory.]
10. Prepare a Use Case Focused Software Stack
Build in the cloud or locally:
git clone https://github.com/OpenVisualCloud/Dockerfiles
cd Dockerfiles
mkdir build
cd build
cmake ..
cd Xeon/centos-7.6/ffmpeg
make
Supported images by use case:
- Media: ffmpeg, gst, nginx
- Analytics: ffmpeg, gst
- Graphics: ospray, ospray-mpi
Platforms: Intel® Xeon, Intel® Xeon E3, Intel® VCA2. Operating systems: Ubuntu* 16.04, Ubuntu 18.04, CentOS* 7.4, CentOS 7.5, CentOS 7.6.
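The build steps above target one cell of this matrix (platform × OS × image). As a rough sketch, the directory you build in can be composed from those three choices; the helper below is purely illustrative (not part of the OVC repo), assuming the `<platform>/<os>/<image>` layout of the cmake-generated build tree shown above.

```shell
#!/bin/sh
# Hypothetical helper: compose the Dockerfile build directory from the
# platform/OS/image table above. The <platform>/<os>/<image> layout is an
# assumption mirroring the example path Xeon/centos-7.6/ffmpeg.
image_dir() {
  # $1 = platform (e.g. Xeon), $2 = OS (e.g. centos-7.6), $3 = image (e.g. ffmpeg)
  printf '%s/%s/%s\n' "$1" "$2" "$3"
}

# Reproduce the path used in the build steps above.
dir=$(image_dir Xeon centos-7.6 ffmpeg)
echo "cd build/$dir && make"
```

Swapping any argument (say, `gst` for `ffmpeg`) points the same `make` step at a different image in the matrix.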
11. Integrate Powerful Ingredients – Scalable Video Technology (SVT)
• SVT introduces novel, codec-agnostic architectural features and algorithms for developing optimized encoders.
• It was developed to increase the scalability of the core encoder and improve its tradeoffs between performance and visual quality.
• Main architectural features:
  o Human Visual System (HVS)-optimized classification
  o Resource-adaptive scalability
  o Three-dimensional parallelism
https://01.org/svt
13. Encode with SVT-HEVC
Exercise 2: Encode with SVT-HEVC
Use the SVT encoder app:
cd home
SvtHevcEncApp -i travel6.yuv -w 1920 -h 1080 -b travel_hevc.ivf
Use FFmpeg:
cd home
ffmpeg -i travel6.mp4 -c:v libsvt_hevc -y travel6_hevc.mp4
ffprobe -v error -show_streams travel6_hevc.mp4
14. Encode with SVT-AV1
Exercise 3: Encode with SVT-AV1
Use the SVT encoder app:
cd home
SvtAv1EncApp -i travel6.yuv -w 1920 -h 1080 -b travel6_av1.ivf
Use FFmpeg:
cd home
ffmpeg -i travel6.mp4 -c:v libsvt_av1 -y travel6_av1.mp4
ffprobe -v error -show_streams travel6_av1.mp4
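The two FFmpeg exercises differ only in which SVT codec plugin is selected, so the command line can be built by one small helper. The sketch below is illustrative (the helper is ours, not part of the exercises); file names follow the slides.

```shell
#!/bin/sh
# Illustrative helper: build the FFmpeg command for either SVT exercise.
svt_cmd() {
  # $1 = codec suffix (hevc or av1), $2 = input file, $3 = output file
  echo "ffmpeg -i $2 -c:v libsvt_$1 -y $3"
}

# The commands from Exercise 2 and Exercise 3:
svt_cmd hevc travel6.mp4 travel6_hevc.mp4
svt_cmd av1 travel6.mp4 travel6_av1.mp4
```

Running either printed command requires an FFmpeg build with the corresponding SVT plugin enabled.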
15. Review Question 1
What are the 5 major services in Open Visual Cloud?
Media Creation and Delivery
Media Analytics
Immersive Media
Cloud Gaming
Cloud Graphics
16. Review Question 2
What are the 4 core building blocks in the Open Visual Cloud?
Encode
Inference
Decode
Render
17. Review Question 3
Which codecs does Intel support today under the Scalable Video Technology (SVT) architecture?
AV1
HEVC
VP9
24. Call for Action
Try Open Visual Cloud at https://github.com/OpenVisualCloud.
Post your Open Visual Cloud demos and projects to Developer Mesh
and apply to be an Intel Innovator! https://devmesh.intel.com
Participate by submitting feedback, bugs, and feature requests.
Contribute to Open Visual Cloud development.
Learn more at https://01.org/openvisualcloud
Media is undergoing a rapid evolution. It's no longer about experiencing streamed content over the television in your living room. The content is becoming richer and much more interactive. It's delivered globally, often with an increasing amount of intelligence for personalization and relevance. We are moving from passive consumption of media to highly immersive and intelligent visual experiences. The visual experiences of tomorrow are no longer constrained by the definition of media as we understand it today. So let's look at what these visual experiences are.
We typically think of media as "media processing and delivery," where content is streamed, whether video on demand, live streaming, etc. While this is still a large part of the opportunity, media is rapidly becoming much more. It encompasses Media Analytics, where content is analyzed to deliver experiences that are much more intelligent, localized, personalized, and relevant (e.g. ad insertion). Immersive Media, where content is highly immersive and augmented, as if you are not just viewing the content but are part of the experience (e.g. live 360-degree VR streaming of a sporting event). Cloud Graphics, where compute- and graphically-rich content is made available remotely, whether within an enterprise for increased productivity (e.g. training, diagnostics, 3D modeling and simulations) or for delivery of lifelike ray-traced images (e.g. rendering of movies). Cloud Gaming, where end users can experience rich, interactive, and highly immersive games anytime, anywhere, and on any connected device; they are no longer bound to their PlayStations and desktops to play rich, highly interactive games. While media will remain the underpinning of Visual Cloud, the term "media" is no longer descriptive of what the industry needs to deliver. The industry needs a new term for these new experiences so that we have a common understanding of what this new era of media is. Intel, in collaboration with our key partners, has been using the term Visual Cloud to define this media of the future, and we are putting it out for the industry to adopt.
Now let's look at the building blocks that will deliver these services. Decode and encode have always been the foundational building blocks of media delivery, but going forward we need additional building blocks like render and inference. Hence we have four main building blocks here: encode, decode, render, and inference.
Deployment of the visual experiences of tomorrow requires four core building blocks to enable these five major services and unleash innovation via infinite use cases. Each use case is realized by which of these four core building blocks are selected and how they are sequenced. For streaming, all you require is to decode the content and then encode it for the target device. But content is no longer just delivered to you: it is increasingly personalized and localized to uniquely address the user. Whether the provider is recommending content based on your viewing habits or your user profile, it requires intelligence and analytics. This drives the need for inference as another core building block. Hence, for an analytics pipeline, you would decode the content, perform inference, take the necessary action, and then encode before sending the data along.
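The sequencing described above can be sketched as a simple mapping from service to building-block pipeline. The names below are our paraphrase for illustration, not an OVC API.

```shell
#!/bin/sh
# Illustrative mapping: which of the four core building blocks each service
# sequences, per the speaker notes above (names are ours, not an OVC API).
pipeline_for() {
  case "$1" in
    streaming) echo "decode encode" ;;            # stream: decode, re-encode for device
    analytics) echo "decode inference encode" ;;  # analyze: inference in the middle
    rendering) echo "render encode" ;;            # graphics: render, then encode to send
    *) echo "unknown service: $1" >&2; return 1 ;;
  esac
}

pipeline_for streaming
pipeline_for analytics
pipeline_for rendering
```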
Studios are increasingly relying on graphically rendered movies that break through the bounds of imagination and are realistic and immersive, which requires us to render the content and then encode it for delivery. The intent of Visual Cloud is to provide the right technology ingredients and interoperability across these four core building blocks.
It's not just about aggregating Intel assets to address this new Visual Cloud market opportunity. We are aggressively making investments to enable a platform that is targeted and optimized for visual workloads, and investing in technology leadership by establishing a common, scalable reference architecture for Visual Cloud.
Not only are we defining the HW/SW elements of the platform, we are also identifying gaps and aggressively working to bridge them. For example, we are ensuring a rich portfolio of software for the four core building blocks (encode, decode, inference, and render) that is interoperable, scales across the hardware offerings, and supports standard industry frameworks for scalability. For time to market, we are enabling Intel Select Solutions for Visual Cloud, which offer BKCs optimized for target use cases and delivered via ODMs and OEMs.
Finally, we are launching Open Visual Cloud, which builds on the industry-standards-based framework and optimized, interoperable software ingredients to release reference pipelines for key target use cases. So Intel is driving technology to deliver a scalable and optimized reference architecture for Visual Cloud, ensuring it is standards-based, runs best on Intel architecture, and is easy to commercialize. This ensures a cost-effective solution for service providers and developers to innovate and deploy visual services. The investments we make will enable you to focus your investments on rapid services deployment rather than platform development.
Let's zoom in to the software workloads here…
Intel is seeding the industry with a project we call Open Visual Cloud: a set of pre-defined reference pipelines for various target visual cloud services. Open Visual Cloud reference pipelines are based on existing Intel-optimized open source ingredients across the four core building blocks. Under decode we have interoperability with industry standards like x264 and x265, along with introducing AV1 under the SVT architecture. Inference is supported through OpenVINO (Open Visual Inferencing and Neural Network Optimization). For encode we are continuously adding more codecs under Scalable Video Technology for improved video quality. Ultimately all these ingredients will support open source industry frameworks like ffmpeg, TensorFlow, etc. Finally, in Open Visual Cloud, Intel is offering reference pipelines which show how all these blocks are interoperable and can be scaled for future requirements. Today we are offering two reference pipelines, CDN transcode (under media processing and delivery at the edge) and smart ad insertion (under media analytics services), and we intend to publish quarterly updates with new pipelines while continuing to optimize existing ones. The aim of Open Visual Cloud is to enable the ecosystem, including ISVs, next-wave service providers, and communications service providers, to accelerate the pace of their visual cloud services innovation.
Let's take it to the next level and look at the optimized software ingredients…
Open Visual Cloud is Intel's approach to supporting these powerful building blocks. SVT, sets of codecs, HW acceleration through Media SDK, and OpenVINO are the powerful engines within the OVC software stack. So let's look closely at each one of them. SVT is a codec-agnostic architecture and a foundational building block for encode and decode, designed to achieve the highest performance at better or equal VQ. The objective of the open source Scalable Video Technology (SVT) project is to provide flexible, high-performance software encoder core libraries for media and visual cloud developers. These libraries serve as a starting point for developers to build faster and higher-quality full-featured encoder products. SVT is designed for cloud-native scalability, and it provides outstanding tradeoffs between visual quality and performance for both VOD and live use cases. There are also HW acceleration libraries for decode: Intel Media SDK (Quick Sync Video) has traditionally been available on all integrated graphics platforms, and we have continued to evolve this product.
The OpenVINO toolkit offers software developers a single toolkit for applications wanting human-like vision capabilities. It does this by supporting deep learning through the Deep Learning Deployment Toolkit (an inference toolkit), computer vision with optimized functions for OpenCV and OpenVX, and hardware acceleration with heterogeneous support, all in a single toolkit. The aim of OpenVINO is to offer open source software that helps developers and data scientists speed up computer vision and deep learning workloads and enables easy, heterogeneous execution by supporting all HW plugins across Intel® platforms from edge to cloud. The Intel® Rendering Framework is a software-defined visualization (SDVis) approach for supporting big data use on platforms of all sizes, including cloud and high-performance computing (HPC) clusters. The framework provides SW-optimized ray tracing and rasterization.
All the software and building block ingredients we saw on the last slide are interoperable and well integrated with existing industry frameworks, ultimately leveraging and benefiting the ecosystem. FFmpeg and GStreamer are the high-level interfaces that OVC promotes to speed up development.
Under media, we are leveraging our existing media investment; we have upstreamed SVT-HEVC and HW-accelerated codecs into ffmpeg and continue to invest more. SVT architecture improvements have been upstreamed to x265 as well.
Under inference, our deep learning framework interoperability allows for reuse of different neural network models like TensorFlow, MXNet, Caffe, etc. We are upstreaming deep learning framework support to the ffmpeg and gstreamer interfaces, and also investing in making the underlying HW and SW libraries and plugins interoperable directly with neural networks.
The intent is for developers to benefit from Intel's upstreamed, optimized software and to work at the interface level when needed, which enables quicker time to market. Companies can build on this with confidence, as Intel will contribute both SW and HW plugins.
DCG instructor presents slide (3 minutes)
Open source initiatives are a key component of Intel helping both our partners and our end customers benefit from our broad hardware roadmap. We are active code contributors within multiple community projects, providing a platform foundation that hosts a broad set of edge computing applications within a performant, secured, and orchestrated environment.
The right side shows open source projects related to the lower layers of the stack: OSes and networking stacks. We contribute code to many projects, including base OS enhancements to accommodate our new platforms, and DPDK, OvS, FD.io, and Hyperscan for a high-performing dataplane. DPDK is a set of optimized software libraries and drivers that Intel invented back in 2010 to accelerate packet processing on general-purpose CPUs. We also contribute to virtual infrastructure managers such as OpenStack and Kubernetes, lifecycle management of services at ONAP, network controllers at OpenDaylight and Tungsten Fabric, as well as the emerging Akraino project for edge stack solutions.
In support of the deployments at the edge, we have investments across different OS variants including Yocto and Clear Linux. This, when combined with Xeon at the edge, enables many of the same tools and innovations from the datacenter to make their way to the edge.
The left side shows open source projects we are focused on and/or contributing to related to the upper layers of the stack. Some are AI/CV-related, some are virtualization/containerization-related (e.g. Kubernetes), and others are networking-related.
Above that are the workloads we are converging.
This could be in an industrial setting where PLCs are being consolidated: for example, the Schneider Electric/Advantech solar plant pilot, where thousands of heliostat controllers (which direct the solar panels toward the sun) were previously controlled by PLCs (100 heliostats per PLC). In the pilot, we virtualized and consolidated 200 PLCs (each an individual HW failure point with no failover) onto 6 Xeon servers.
Or it could be in a network/NGCO setting: a central office server can be running networking functions but given its proximity to the on-prem edge could conceivably also be used with OpenVINO for deep learning applications someday.
What role do you think our software offerings play in getting us the deal win?
Speaker notes:
RAN Speaker notes: Content for ASSP and Custom
Open Visual Cloud:
Will launch in Q2 ’19
Portability across CPUs, GPUs, and accelerators
End-to-end reference pipelines for easy commercialization
All of this software is available in open source in the GitHub repository. Under Open Visual Cloud we are releasing a set of building blocks and reference pipelines in this repository, along with Dockerfile support. You can use the Dockerfile(s) in the project directly or as a reference point for bare-metal installation. One thing I would like to emphasize here: with an increasing set of software and hardware, it gets more complex to maintain, install, and use all the software, so Intel is doing the hard work required to make this easy for developers through Dockerfiles.
Let's look inside these repositories to see what we support…
Building the OVC software stack is easy if your app is Docker-based; however, Docker is not required. The Docker instructions provide the exact steps if you want to install on bare metal.
We support multiple images for different software stacks, on various OSes including multiple versions of Ubuntu and CentOS:
FFMPEG: software stack optimized for media creation and delivery, based on FFmpeg. Included codecs: x265, VP8/9, AV1, and SVT-HEVC. The GPU images are accelerated with VAAPI and QSV.
GSTREAMER: software stack optimized for media creation and delivery, based on GStreamer.
DLDT+FFMPEG: software stack optimized for media analytics, based on the FFmpeg framework. Includes the inference engine and tracking plugins.
DLDT+GSTREAMER: software stack optimized for media analytics, based on GStreamer.
FFMPEG+GSTREAMER+DEV: the development image that can be used to compile C++ applications for all the above usages.
NGINX+RTMP: software stack optimized for web hosting and caching, developed for microservices and CDN. Based on FFmpeg; includes the NGINX web server and the RTMP module for RTMP, DASH, and HLS streaming.
OSPRAY: software stack optimized for ray tracing development. Based on Embree; includes the OSPRay ray tracing engine and examples.
As for the abbreviations: V means tested and verified by Intel, T means tested but some tests did not pass, and Compiled means the image exists but we have not tested it yet. The intention in publishing this is to be transparent. With that, let's jump to our first exercise, where I will show how simple it is to clone these Dockerfiles.
As we briefly touched on SVT previously, let's look at it more closely here. SVT is a codec-agnostic architecture that provides higher or equivalent VQ.
Scalable Video Technology (SVT) is software-based video coding technology that allows encoders to achieve the best possible tradeoffs, scaling their performance levels to the quality and latency requirements of the target applications through the multiple presets available, M0 to M12. The efficiency and scalability of SVT are enabled mainly through architectural and algorithmic features: three-dimensional parallelism, HVS-optimized classification, and resource-adaptive scalability.
SVT supports process-based parallelism, which involves splitting the encoding operation into a set of independent encoding processes, where partitioning/mode decisions and normative encoding/decoding are decoupled.
SVT also supports picture-based parallelism through the use of hierarchical GOP structures.
Most important, however, is SVT's segment-based parallelism, which involves splitting each picture into segments and processing multiple segments of a picture in parallel to achieve better utilization of the computational resources with no loss in video quality.
SVT is one of the powerful architectures under OVC, providing amazingly fast encoding on Intel Xeon platforms. This architectural solution is available for HEVC, VP9, and AV1. It is fully open source and plugs into both the ffmpeg and gstreamer frameworks.
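The preset range mentioned above (M0 to M12) can be swept mechanically to compare speed/quality tradeoffs. The sketch below only prints one encoder invocation per preset; the `-encMode` flag name is an assumption for illustration, so check the SVT-HEVC documentation for the exact option.

```shell
#!/bin/sh
# Sketch: enumerate the 13 SVT presets (M0..M12) and print one encoder
# command per preset. -encMode is assumed; verify against the SVT-HEVC docs.
: > svt_presets.txt
for p in $(seq 0 12); do
  echo "SvtHevcEncApp -i travel6.yuv -w 1920 -h 1080 -encMode $p -b out_m$p.ivf" >> svt_presets.txt
done
wc -l svt_presets.txt
```

Lower preset numbers generally favor quality and higher numbers favor speed, which is the tradeoff dial the notes describe.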
Performance of SVT.
Under OVC we have created reference pipelines, so let's look at the very first solution. This reference pipeline is architected to demonstrate FFmpeg RTMP streaming, FFmpeg 1:1 and 1:N transcoding, and an NGINX CDN caching service. Common benefits of the CDN pipeline are reduced latency and a faster end-user experience.

We will focus on the highlighted section here, where live video from the streaming server is pushed to the Transcode Server over the RTMP protocol. The Transcode Server receives the video stream over RTMP, decapsulates and demuxes the video, and transcodes it to other codecs/bitrates/resolutions in a 1:N manner, which means one input and N outputs: the first channel to 1080p at 60 fps, the second channel to 1280p at 60 fps, and so on. The transcoded video streams are muxed over RTMP and distributed to the CDN Edge Server over the CDN network, according to decisions from the CDN Manager, whose role is to schedule jobs and manage parallelized task execution. The CDN Edge Server receives the video streams from the Transcode Server, caches them, and pushes them to the various clients over RTMP. We use FFmpeg to decode, transcode, and stream over RTMP; the transcoding can be done with a software or hardware codec such as SVT, x264, x265, or QSV, all available through FFmpeg. Doing 1:N transcoding drastically improves performance by running pipelines in parallel in FFmpeg, eliminating the need to program at a lower level and write lines of code wherever possible.
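The 1:N transcode step can be sketched as a single FFmpeg command that decodes the input once and encodes several renditions in parallel; the stream URLs, resolutions, and bitrates below are illustrative assumptions:

```shell
# One RTMP input, N RTMP outputs at different resolutions/bitrates
ffmpeg -i rtmp://transcode-server/live/input \
  -map 0:v -map 0:a -c:v libx264 -s 1920x1080 -r 60 -b:v 6M -c:a aac \
      -f flv rtmp://cdn-edge/live/out_1080p60 \
  -map 0:v -map 0:a -c:v libx264 -s 1280x720 -r 60 -b:v 3M -c:a aac \
      -f flv rtmp://cdn-edge/live/out_720p60
```

Each additional `-map ... -f flv rtmp://...` group adds one more output rendition to the same run.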
NGINX is the web server; inside it, the RTMP module serves the HLS/DASH segments to the browser, coordinated by the CDN Manager.
The AD-Insertion sample demonstrates the ad-insertion use case. The server-side ad-insertion solution provides multiple benefits:
Improved ad-viewing experience: studies show that when the ad matches the content being watched, there is higher viewer engagement, measured by clicking the ad or not skipping it.
Regional ad customization: content providers operate nationwide; server-side ad insertion enables replacing the default ads with localized ads.
The Content Provider service serves the original content, with on-demand transcoding, through the DASH or HLS streaming protocol. The AD Insertion service analyzes the video stream and inserts ads, with transcoding if needed, at each ad-break slot. The client player is based on dash.js and hls.js. Three main blocks add analytics to the existing framework:
The AD Content service archives the AD videos and serves them upon request.
The AD Insertion service implements the logic of inserting ads during video playback.
The AD Decision service decides which ad to show in the next ad break and returns the ad URL. The decision is based on a combination of the user's ad preferences and the available cues, which result from analyzing the video content.
Let's look in depth at how the flow works.
The under-the-hood design shows where the OVC pipelines reside:
The client player starts video playback by requesting the video manifest file, which describes the DASH/HLS segments.
The AD Insertion service intercepts the request and retrieves the manifest. It also schedules two pipelines: one to analyze the video segments, and one to construct the ad segments. The analysis results are saved to the database for later use; the constructed ad segments are sent to the client player upon request.
The AD Insertion service keeps track of how many ad segments are served to the client player and reports the statistics to the AD Content service. If the user clicks on any portion of the playback screen, the AD Insertion service interprets the click as either an ad click or a question/answer click, and reports it to the AD Content service or the AD Decision service, respectively, for further action.
The analytics pipeline analyzes the video content. We provide two equivalent implementations, based on FFmpeg and GStreamer; they can run side by side or standalone, as you prefer.
The transcoding pipeline transcodes the ads to match the video quality (bitrate and resolution). It is based on FFmpeg.
Analytics is powered by OpenVINO.
Now let's look at how we can simply run this through the FFmpeg/GStreamer frameworks.
OVC provides FFmpeg inference plugins that let you perform end-to-end tasks such as object detection, face detection, and emotion detection from the command line, simply by calling the corresponding FFmpeg functions. The slide shows some examples. The first example shows face detection followed by emotion recognition, which is relevant for understanding the emotion of the video in order to insert appropriate ad content.
The message here is that we don't have to write programs to run complex analytics jobs; command-line options are available to run the workload. This reduces the number of hours spent developing a solution and optimizing performance, weights, and models: simply use one of the pre-trained models available through the industry-standard framework.
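As a sketch of such a command line, using the `detect` and `classify` filter names from the Open Visual Cloud FFmpeg video-analytics plugins; the model file names are illustrative (they follow OpenVINO Model Zoo naming) and the exact filter options may differ by release:

```shell
# Face detection followed by emotion classification via OVC analytics filters;
# model paths are illustrative assumptions
ffmpeg -i input.mp4 \
  -vf "detect=model=face-detection-adas-0001.xml,classify=model=emotions-recognition-retail-0003.xml" \
  -f null -
```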
OVC provides similar features with GStreamer as well. Application developers can use the plugins to construct complex pipelines for media-analytics use cases.
With support for two different frameworks, FFmpeg and GStreamer, we have made sure that the model support underneath is the same and the metadata format is identical, so switching between the two frameworks should require only minimal changes at the command-line level.
In the end, I would like to conclude with a few takeaways. Open Visual Cloud is a project that goes beyond traditional media delivery, offering reference solutions and pipelines for interesting use cases, with the intention of making the software interoperate well with standard industry frameworks, which in turn reduces development time by months. All of this software is open source today at github.com/OpenVisualCloud, with simple Docker images.
You can also participate by submitting feedback, bugs, and feature requests, and contribute to enhancing Open Visual Cloud.
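The GStreamer equivalent can be sketched with the `gvadetect`/`gvaclassify` inference elements from the GStreamer Video Analytics plugins; the model file names are illustrative assumptions:

```shell
# Face detection followed by emotion classification as a GStreamer pipeline;
# element names from the GStreamer Video Analytics plugins, model paths illustrative
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=face-detection-adas-0001.xml ! \
  gvaclassify model=emotions-recognition-retail-0003.xml ! \
  fakesink
```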