For many years it has been possible to access graphical applications via remote desktop software. In recent years cloud computing has become more prominent and is now a crucial computing paradigm.
Android has captured a large market share. The challenge addressed in this talk is to efficiently export Android graphics so as to support standard Android apps remotely.
More information can be found at: http://www.ascender.com/remote-graphics
We have developed technology that allows a wide range of graphical interfaces to be streamed efficiently over wide-area networks. It is an enabling technology for remote graphics, much as MPEG compression is an enabling technology for video streaming.
Checklist AR is an application that improves maintenance processes by augmenting the physical world with digital content. This application guides workers through a list of real-world checkpoints, providing relevant information at the right physical place. During this session, you will be able to learn more about the solution, the technologies that we used, the challenges we had to overcome, and some best practices when developing AR applications for Magic Leap. Additionally, you will be able to experience the Magic Leap One device and learn more about its possibilities.
Building Mixed Reality with the new capabilities in Unity (Windows Developer)
Unity has best-in-class support for targeting Windows Mixed Reality devices, whether an immersive headset or a HoloLens. Unity added support for Windows Mixed Reality last fall in 2017.2 and has since added a wealth of new capabilities. In this talk, we will touch on several of these features and capabilities, including IL2CPP, application-based holographic remoting, and best practices for input and stereo rendering options.
Global AI on Virtual Tour Oslo - Anomaly Detection using ML.Net on a drone telemetry (Bruno Capuano)
Slides used during the session "Anomaly Detection using ML.Net on a drone telemetry from Azure IoT" for the Global AI on Virtual Tour - Oslo in June 2021.
This talk was given at CESEC 2015, a Summer School on Critical Embedded Systems: http://cesec2015.sciencesconf.org/
These slides present the Arduino Designer: why and how we developed it with Eclipse Sirius:
http://www.eclipse.org/sirius/
They also present the Eclipse Sirius Animation feature:
http://www.eclipse.org/sirius/lab.html
Mobile Fest 2018. Enrique López Mañas. TensorFlow for Mobile Poets (MobileFest2018)
There is a lot of hype around ML and AI lately, and TensorFlow is Google's framework of choice. But as a mobile developer, you might have asked yourself: how can I benefit from it? In this talk, you will take your first steps into the fascinating world of ML for mobile. During this talk I will show how to set up TensorFlow for Android, and how to perform some initial operations with it:
- Classifying example
- Detection example
- Analyzing example
I will also show how to develop a mobile app that integrates a TensorFlow model (from zero to app), showcase demos on Android, iOS and Raspberry Pi, and present some real use cases.
Learn about the Windows 10 on ARM devices, how the magic of x86 emulation works, and finally how to submit and build apps for Windows 10 on ARM. We will be showing how to build ARM64 apps for desktop and UWP.
Develop Industrial Mixed Reality applications with Unity (Windows Developer)
Mixed Reality is the next computing platform, and with Unity, you can create stunning 3D experiences, impactful apps, and grow your business. We will show you how Unity is used outside of games in different industrial businesses, and how you can benefit from our rich toolset to create your apps. You will learn how easy it is to integrate existing 3D data like CAD, and target VR and AR at the same time. Unity is here to democratize development, enable you to succeed and help you to solve hard problems. Mixed Reality is already a significant market and will continue to grow.
Kubernetes-based connected vehicle platform #k8sjp_t1 #k8sjp (Kenta Suzuki)
KubeFest Tokyo 2020
https://k8sjp.github.io/kubefest-2020/
https://www.youtube.com/watch?v=2x7jQTBUT5w&feature=youtu.be&list=PLm3Ufa7bsgpyN_UGH7TnOfg-XynZHRlqL&fbclid=IwAR2dkSFwBKkGr97-2IqKyjZ3i7yQdD1CoQvh6s1zbbI7fr-V86seqwaQMzI
EclipseCon NA 2015 - Arduino Designer: the making of! (melbats)
Video : http://www.infoq.com/presentations/arduino-designer
What brings together a model, a microcontroller and a cat?
The Arduino Designer! Last year, we demoed this new modeler, which allows kids to easily write programs for Arduino platforms using a visual tool. The purpose of this new talk is to unveil the making of this modeler, by detailing how it is possible to quickly develop such a simple dedicated modeler thanks to Sirius.
We’ll start by explaining how to use Sirius to create graphical editors such as the ones provided by the Arduino Designer. Then we’ll see how to simplify the Eclipse UI to keep the minimum useful interactions for an RCP application dedicated to kids. Finally, we’ll discuss how to integrate the modeler with a code generator and how to combine it with the Arduino tools to build and upload software into the Arduino hardware platform.
Join this session, and discover the power of graphical designers, the simplicity of creating new ones and integrating them with existing tools!
Cross Platform Mobile Development with Visual Studio 2015 and C++ (Richard Thomson)
Utah Code Camp, Spring 2016. http://utahcodecamp.com In this presentation, I give an overview of using Visual Studio 2015 for cross-platform development in C++.
Hacking with the Raspberry Pi and Windows 10 IoT Core (Nick Landry)
Did you know that Windows 10 can run on a $35 Raspberry Pi 2 (or 3) single-board computer? Makers have taken the world by storm, creating countless gadgets and automated systems, connecting everything around them. This session is for makers – neophytes and veterans alike – who want to explore the capabilities of Windows 10 IoT Core to build hacks based on the Universal Windows Platform (UWP), basically attaching electronic sensors and outputs to their Windows 10 apps. We’ll learn about the tools, how to get started, what hardware you’ll need, and how to build your first Windows hardware project on the Raspberry Pi. Take your maker projects to the next level, and come learn valuable skills to prepare and extend your developer skills for the Internet of Things (IoT).
On January 11, 2018, Sony Corporation released aibo (https://aibo.sony.jp/). aibo, back on the market after 12 years, is built on the robotics framework ROS. In this presentation, we introduce examples of development in aibo from the point of view of ROS, covering an introduction to aibo, its architecture, embedded technology, real-time optimization, the robot development environment, simulation, and more.
Qualcomm® Snapdragon™ processors, a product of Qualcomm Technologies, Inc., boast a long list of technologies, from the CPU and GPU, to audio, video, display, networking and much more. In this session, you’ll learn how to take advantage of these features and technologies to create the best gaming experiences, including all the available tools.
Watch this presentation on YouTube: https://www.youtube.com/watch?v=NhbZK_5na7U&list=PLxeazpXYyqtNm2EnCbfSzy7aKOkHjiaSi&index=31
Learn more about developing mobile apps for devices powered by Snapdragon processors: https://developer.qualcomm.com/mobile-development/maximize-hardware/mobile-gaming-graphics-adreno
Cloud Graphical Rendering: A New Paradigm (Joel Isaacson)
Cloud rendering of modern graphics is typically performed via remote hardware rendering and pixel-based video compression techniques for image transmission. These solutions perform poorly, profligately expending both system and network resources. In response, Ascender Technologies developed novel enabling technology where the rendering of pixels is performed only on the local client, which makes for a much more affordable solution without expensive graphical hardware in the cloud. In addition, Ascender’s compression techniques reduce the networking overhead, typically by over an order of magnitude.
Raheel Khalid (Envrmnt by Verizon): Cloud XR Experience on 5G with Mobile Edge Networks (AugmentedWorldExpo)
A talk from the Inspire Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California, May 30 - June 1, 2018.
Raheel Khalid (Envrmnt by Verizon): Cloud XR Experience on 5G with Mobile Edge Networks
Augmented and virtual reality set a high baseline for low latency and high-fidelity graphics, which forces users into long install times and large downloads. In this session, we will explore how Envrmnt built a pipeline for streaming 3D mesh data, textures, animations and more. We'll review our platform for delivery, distribution and real-time updates that scales to hundreds of thousands of concurrent users and is accelerated by 5G and Verizon's Mobile Edge Cloud Computing Platform.
http://AugmentedWorldExpo.com
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/jumpstart-your-edge-ai-vision-application-with-new-development-kits-from-avnet-a-presentation-from-avnet/
Monica Houston, Technical Solutions Manager at Avnet, presents the “Jumpstart Your Edge AI Vision Application with New Development Kits from Avnet” tutorial at the May 2022 Embedded Vision Summit.
Choosing the right processing solution for your embedded vision application can make or break your next development effort. This presentation introduces three next-generation embedded vision platforms from Avnet that enable camera-based AI at the edge, featuring the latest edge AI technical advances in processors from NXP, Renesas and Xilinx.
Houston discusses the strengths and distinctive features of each solution, highlighting the applications each solution is best optimized for. She also explores the new family of production-ready camera modules featured with these kits and provides guidance on selecting the appropriate camera features for your embedded application.
Building Applications with the Microsoft Kinect SDK (DataLeader.io)
David Silverlight's PowerPoint presentation on the Kinect for Windows SDK, Feb. 29, 2012.
NUI = Natural User Interface: it's an invisible interface, the content is the interface, removing the proxy, direct manipulation, gestural interfaces
Kinect for Windows SDK:
1. Kinect explorer
2. Installing & using the Kinect sensor
3. Setting up your dev environment
4. Skeletal tracking fundamentals
5. Working with depth data
6. Audio fundamentals
7. Camera fundamentals
AAA 3D Graphics on the Web with ReactJS + BabylonJS + Unity3D by Denis Radin (DevClub_lv)
Building photorealistic 3D experiences on the Web is a challenge. Making it with React is even harder, but once you get there it pays off in many ways. This talk is about Evolution Gaming's approach to working with 3D graphics on the Web using ReactJS, with the goal of building the most sophisticated and expensive WebGL application ever created.
JS Fest 2019. Denis Radin. AAA 3D graphics on the Web with ReactJS, BabylonJS and Unity3D (JSFestUA)
Building a photorealistic 3D application for the Web is not easy. Doing it with React is even harder, but it pays off handsomely if you manage it. This talk is about how Evolution Gaming uses WebGL and ReactJS to build the most complex and expensive WebGL application ever developed.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/06/tensilica-processor-cores-enable-sensor-fusion-for-robust-perception-a-presentation-from-cadence/
Amol Borkar, Product Marketing Director at Cadence, presents the “Tensilica Processor Cores Enable Sensor Fusion for Robust Perception” tutorial at the May 2023 Embedded Vision Summit.
Until recently, the majority of sensor-based AI processing used vision and speech inputs. Recently, we have begun to see radar, LiDAR, event-based image sensors and other types of sensors used in new AI applications. And, increasingly, system developers are incorporating multiple, heterogeneous sensors in their designs and utilizing sensor fusion techniques to enable more robust machine perception.
In this presentation, Borkar explores some of the heterogeneous sensor combinations and sensor fusion approaches that are gaining adoption in applications such as driver assistance and mobile robots. He also shows how the Cadence Tensilica ConnX and Vision processor IP core families and their associated software tools and libraries support sensor fusion applications with high performance, efficiency and ease of development.
AWS re:Invent 2016: Powering the Next Generation of Virtual Reality with Verizon (Amazon Web Services)
In six months, Verizon has built a best-in-class Augmented Reality and Virtual Reality (AR/VR) platform that streams HD video and game experiences using Amazon EC2 GPU-accelerated instances and CloudFront. Verizon will share their reference architecture and configuration best practices that enabled them to develop a massively scalable VR architecture supporting 100K simultaneous HD video streams to customers around the globe.
The Intel NUC 12 Extreme Kit is a compact workstation that can handle compute... (Principled Technologies)
When we carried out representative tasks in 18 professional apps using the Intel Core i9-12900 processor-powered Intel NUC 12 Extreme Kit NUC12DCMi9, we experienced no crashes or issues.
Microsoft Windows Phone 8 offers native code support that enables development and porting of high-performance games. This training-lab webinar will give you an overview of Windows Phone 8 capabilities that support complex games development. It also will introduce available tools and frameworks that increase developer productivity and will demonstrate a hands-on approach to games development with the Windows Phone SDK 8. By leveraging frameworks such as the Microsoft Direct3D API and support for popular physics and rendering engines, you can now create games with native performance as well as use your own or third-party engines and middleware for games development for Windows Phone 8 users.
A review of factors affecting IoT system selection, for the MVP phase and later phases: computation, price, connectivity, open-source support, development SDKs.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the "Approve" button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- If the "Reject" button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. The Problem
- There are just too many pixels to simply transmit over a long-haul network.
- There are a number of techniques that have been attempted.
- They all entail some compromises:
  - Resolution
  - Accuracy
  - Frame Rate
  - Latency
Ascender Technologies Ltd - Remote Rendering
3. The Problem: Pixel Count 2008-2011
Copyright Romain Guy, Chet Haase, Google I/O 2011
6. Choosing How To Export Graphics
- Graphics can be exported from any of the four levels of the graphics stack:
  - Application level
  - Toolkit level
  - Rendering level
  - Pixel level
- We choose to export the rendering level.
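The idea of exporting at the rendering level can be sketched in a few lines: instead of shipping the pixels a frame produces, ship the drawing commands that produce them. The opcode names and wire format below are purely illustrative (not Ascender's actual protocol); the point is the size difference between a display list and the raw frame it renders.

```python
import struct

# Hypothetical opcodes for a tiny rendering-command vocabulary.
OPCODES = {"drawRect": 1, "drawText": 2, "drawLine": 3}

def encode_command(op, *args):
    """Pack one rendering command: opcode byte + 16-bit little-endian args."""
    payload = struct.pack("<B", OPCODES[op])
    for a in args:
        payload += struct.pack("<H", a)
    return payload

# A small display list for one frame of a simple GUI.
display_list = [
    ("drawRect", 0, 0, 2560, 1600),   # background
    ("drawRect", 40, 40, 400, 120),   # a button
    ("drawLine", 0, 200, 2560, 200),  # a divider
]

frame_bytes = b"".join(encode_command(op, *args) for op, *args in display_list)
pixel_bytes = 2560 * 1600 * 4  # raw RGBA for a Nexus 10-class screen

print(len(frame_bytes), "bytes of commands vs", pixel_bytes, "bytes of pixels")
```

Even this naive encoding is tens of bytes per frame where the pixel-level export would be megabytes, which is why the rendering level is the attractive export point.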
7. Exporting The Toolkit and App
- It is technically very complex. Android, to date, has 17 different toolkit API variants.
- Every application can extend the toolkit with custom widgets (subclasses of android.view.View).
- Clearly impossible.
8. Exporting The Toolkit and App
- It is technically very complex. Android, to date, has 17 different toolkit API variants.
- Every application can extend the toolkit with custom widgets (subclasses of android.view.View).
- Clearly impossible.
- This talk will show that effectively exporting graphics at the toolkit level, and even the application level, is in fact possible via the rendering API.
14. ICS Rendering Results
- Even with simple techniques the compression ratio is over four orders of magnitude (x10,000 reduction).
- The number of bytes per frame, for the GUI rendering, is typically 300 bytes, as opposed to 4-16 Mbytes for uncompressed frames.
- The compression encodes 2-4 rendering operations per byte (2-4 bits per rendering operation).
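The numbers on this slide can be checked with simple arithmetic. The sketch below assumes a Nexus 10-class 2560x1600 display at 4 bytes per pixel, per the editor's notes; the 300-byte compressed frame is the slide's own figure.

```python
# Back-of-the-envelope check of the slide's compression claim.
uncompressed_frame = 2560 * 1600 * 4   # bytes of raw RGBA per frame (~16.4 MB)
compressed_frame = 300                 # typical bytes per rendered GUI frame

ratio = uncompressed_frame / compressed_frame
print(f"compression ratio: {ratio:,.0f}x")

# 2-4 rendering operations per byte is the same as 2-4 bits per operation.
ops_per_byte = 3                       # midpoint of the slide's 2-4 range
print(f"~{compressed_frame * ops_per_byte} rendering ops in {compressed_frame} bytes")
```

The ratio comes out above 50,000x, consistent with "over four orders of magnitude".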
15. The Google Play Universe: API Coverage
16. Cloud Gaming
- Currently cloud gaming is done with pixel rendering performed on the remote server. The frames are H.264-encoded and sent over the network to the remote client.
- Our remote rendering technology does not need special hardware on the server side. The computational load on the server and network is minimized.
- Playing latency (lag) is minimal.
17. A Perfect Storm
- It seems that a technological cosmic alignment has happened:
  - Fast, low-power 64-bit ARM multiprocessors (Cortex-A50) with virtualization extensions.
  - Adoption of Android apps in a broad gamut of use cases, including the enterprise.
  - Ever-increasing adoption of cloud-based solutions.
  - The possibility of efficiently transporting Android graphics via a long-haul network.
Editor's Notes
For many years it has been possible to access graphical applications via remote desktop software. In recent years cloud computing has become more prominent and is now a crucial computing paradigm.
Android has captured a large market share. The challenge addressed in this talk is to efficiently export Android graphics so as to support standard Android apps remotely.
Current techniques to provide remote graphic access are pixel based.
An example of pixel-based remote Android graphics is: Amazon's test drive, which allows remote demos of Android apps before purchase. Pixel based solutions force compromise on all four performance properties:
● Resolution ● Accuracy ● Frame Rate ● Latency
Our techniques allow uncompromised performance coupled with very low network bandwidth.
This slide was presented at Google I/O in May 2011. It shows the increase of pixel count as opposed to memory bandwidth as a function of time. It was introduced to motivate the use of hardware rendering (OpenGL) as opposed to software rendering (Skia); Guy and Haase argue that the memory bus is just too slow to allow software rendering.
The argument is much more powerful when applied to network bandwidth which is orders of magnitude slower than the internal memory bus.
Here the original slide (the blue rectangle) is updated to current display resolutions. In just a year and a half the number of pixels (e.g. Nexus 10) has increased by a factor of four. Both the internal memory bandwidth and the network bandwidth are only slowly improving. This increase in pixel counts makes remote Android graphics more challenging.
Another change that makes remote Android graphics even more difficult is the 60 frames/sec standard that has been adopted since ICS (Ice Cream Sandwich).
Normally Android apps are installed and run locally on the device, so no network bandwidth is needed to view the graphics. Remote graphics is typically done by exporting pixels at the framebuffer level. For a 4 Mpixel device (e.g. Nexus 10) at 60 fps, a 1 Gbyte/sec network is needed. Even if a 100x codec (a 100-fold reduction of data volume) is used, a 10 Mbyte (80 Mbit)/sec network is needed.
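The bandwidth figures above can be checked with a quick calculation (assuming 4 bytes per RGBA pixel):

```python
# Framebuffer bandwidth needed for pixel-level remote graphics,
# using the Nexus 10 example from the notes (4 bytes/pixel assumed).
pixels = 2560 * 1600          # Nexus 10 resolution, ~4 Mpixels
bytes_per_pixel = 4           # RGBA8888
fps = 60                      # the ICS frame-rate standard

raw = pixels * bytes_per_pixel * fps   # uncompressed bytes/sec
print(raw / 1e9)              # ~0.98, i.e. roughly 1 Gbyte/sec
print(raw / 100 / 1e6)        # ~9.8 Mbytes/sec with a 100x codec
```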
As shown in the previous slide, the volume of network data needed to export the graphic stream increases as we approach the pixel level. The higher the layer exported, the more compact and efficient the graphical representation.
We use the rendering layer to export graphics. The volume of data needed is approximately 100 times less than the pixel level.
Exporting the app at the toolkit level would undoubtedly be more efficient, but a direct approach will not work. The toolkit is dynamically extensible, and there is no way to reference the same toolkit elements on both the server and client sides.
The data compression algorithm reduces the volume of data to less than the toolkit level. The rendering stream is scanned for sequences of commands that are reverse engineered into both application- and toolkit-level routines. These routines are entered into dictionaries shared by both the encoding (server) and decoding (client) ends. Long sequences of rendering commands are then sent from the server to the client by a simple reference to the dictionary entries.
More details can be found at:
http://www.ascender.com/remote-graphics
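As a rough illustration of the shared-dictionary scheme described above (the names and wire format here are invented for the sketch, not the actual protocol):

```python
# Sketch: repeated runs of rendering commands are replaced, on both
# the encoding (server) and decoding (client) side, by short
# references into a dictionary that the two ends share.

def encode(commands, dictionary, min_run=3):
    """Replace known command runs with ('REF', index) tokens."""
    out, i = [], 0
    while i < len(commands):
        for idx, seq in enumerate(dictionary):
            if len(seq) >= min_run and commands[i:i + len(seq)] == seq:
                out.append(("REF", idx))
                i += len(seq)
                break
        else:
            out.append(commands[i])   # no dictionary hit: send literally
            i += 1
    return out

def decode(tokens, dictionary):
    """Expand ('REF', index) tokens back into full command runs."""
    out = []
    for tok in tokens:
        if isinstance(tok, tuple) and tok[0] == "REF":
            out.extend(dictionary[tok[1]])
        else:
            out.append(tok)
    return out

# A run of rendering calls both ends have already entered into their
# dictionaries (e.g. a button's draw sequence):
dictionary = [["save", "clipRect", "drawRoundRect", "drawText", "restore"]]
frame = dictionary[0] + ["drawLine"] + dictionary[0]

wire = encode(frame, dictionary)
assert decode(wire, dictionary) == frame
print(len(wire), "tokens instead of", len(frame))  # 3 tokens instead of 11
```

The real system builds these dictionaries on the fly from the observed rendering stream; the sketch only shows why a dictionary reference is so much cheaper than resending the run.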
There are four natural targets for exporting graphics in the rendering layer in the ICS graphic stack. We tested the first three (①, ②, ③) by building prototypes. The fourth target, ④, is technically very similar to ①.
The right branch of the rendering stack (①, ②) has been part of Android since the Honeycomb version. It is usually called hardware rendering.
The left branch of the rendering stack (③, ④) was present in the Android graphic stack from its first release. It is usually called software rendering.
Android allows native OpenGL apps to be written using the NDK. We remotely accessed these applications by using our remote-enabled OpenGL (②) rendering layer.
Android allows native Skia (software rendering) apps to be written using the NDK. We remotely accessed these applications by using our remote-enabled Skia (③) rendering layer.
This slide illustrates the system architecture of the remote server and the local client. We send the graphic rendering from the server to the client over a purely simplex (one-way) connection. Thus, no round-trip delays are incurred in the graphics streaming.
User interactions will cause round trip latencies.
This slide illustrates an important feature of our remote graphics system. Since the rendered pixels are not needed on the remote side only the upper part of the rendering interface need be executed on the remote end.
Thus for hardware rendering (OpenGL) the lower level, which is dependent on a hardware GPU, is not needed. This greatly reduces the cost of running the graphic stack on the remote side.
For software rendering (Skia) the lower level, which actually does the computationally intensive pixel rendering, is not needed. This greatly reduces the computational needs on the remote side.
The compression ratio can be understood to be a product of two factors:
1) The rendering layer is about 100 times more efficient for remote graphics than the pixel layer.
2) The compression routines add an additional factor of about 100.
We can thus render remotely at 60 fps with a bandwidth of typically less than 20 Kbytes/sec, with no compromise of:
● Resolution ● Accuracy ● Frame Rate ● Latency
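A quick sanity check of the two factors and the resulting bandwidth (illustrative numbers taken from the slides):

```python
# The two factors described above multiply out to the overall ratio.
rendering_layer_gain = 100    # rendering stream vs. raw pixels
dictionary_gain = 100         # shared-dictionary compression
print(rendering_layer_gain * dictionary_gain)  # 10000: four orders

bytes_per_frame = 300         # typical GUI frame, per slide 14
fps = 60
print(bytes_per_frame * fps)  # 18000 bytes/sec, under 20 Kbytes/sec
```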
The reason that so many rendering APIs are supported relates to coverage. In the context of an Android app store, coverage is the percentage of apps that can be supported via remote rendering. You would like to remotely support a large percentage of unaltered apps as they currently exist in the app store.
The above Venn diagram illustrates the overlapping coverage of each rendering API. For example: to support Java ICS apps, which render to OpenGL ES 2.0, it is sufficient to support the yellow OpenGLRender API (①). To support a Java Froyo app the green Canvas (④) or blue Skia (③) API is sufficient. More sophisticated apps might need red OpenGL ES 2.0 API (②) support.
It is instructive to contrast our approach with the Nvidia Grid or OnLive cloud gaming systems. Both need expensive hardware and use a large amount of network bandwidth.
The enabling technologies that allow for remote Android graphics have many uses:
Cloud computing, remote app server
App library, subscription model
App demos
Remote enterprise applications
Set-top boxes
Cloud Gaming