For many years it has been possible to access graphical applications via remote desktop software. In recent years cloud computing has become more prominent and is now a crucial computing paradigm.
Android has captured a large market share. The challenge addressed in this talk is to export Android graphics efficiently so as to support standard Android apps remotely.
More information can be found at: http://www.ascender.com/remote-graphics
2. The Problem
● There are just too many pixels to simply transmit over a long-haul network.
● A number of techniques have been attempted.
● They all entail some compromises:
– Resolution
– Accuracy
– Frame Rate
– Latency
Ascender Technologies Ltd
Remote Rendering
3. The Problem: Pixel Count 2008-2011
Copyright Romain Guy, Chet Haase, Google I/O 2011
6. Choosing How To Export Graphics
● Graphics can be exported from any of the four levels of the graphics stack:
– Application level
– Toolkit level
– Rendering level
– Pixel level
● We choose to export the rendering level.
8. Exporting The Toolkit and App
● It is technically very complex. Android, to date, has 17 different toolkit API variants.
● Every application can extend the toolkit with custom widgets (subclasses of android.view.View).
● Clearly impossible.
● This talk will show that effectively exporting graphics at the toolkit level, and even the application level, is in fact possible via the rendering API.
14. ICS Rendering Results
● Even with simple techniques the compression ratio is over four orders of magnitude (a x10,000 reduction).
● The number of bytes per frame, for the GUI rendering, is typically 300 bytes, as opposed to 4-16 Mbytes for uncompressed frames.
● The compression encodes 2-4 rendering operations per byte (2-4 bits per rendering operation).
15. The Google Play Universe: API Coverage
16. Cloud Gaming
● Currently cloud gaming is done with pixel rendering performed on the remote server. The frames are H.264-encoded and sent over the network to the remote client.
● Our remote rendering technology needs no special hardware on the server side. The computational loads on both the server and the network are minimized.
● Playing latency (lag) is minimal.
17. A Perfect Storm
● It seems that a technological cosmic alignment has happened:
– Fast, low-power 64-bit ARM multi-processors (Cortex A50) with virtualization extensions.
– Adoption of Android apps in a broad gamut of use cases, including the enterprise.
– Ever-increasing adoption of cloud-based solutions.
– The possibility of efficiently transporting Android graphics over a long-haul network.
Editor's Notes
Current techniques to provide remote graphic access are pixel-based.
An example of pixel-based remote Android graphics is Amazon's test drive, which allows remote demos of Android apps before purchase. Pixel-based solutions force compromises on all four performance properties:
● Resolution ● Accuracy ● Frame Rate ● Latency
Our techniques allow uncompromised performance coupled with very low network bandwidth.
This slide was presented at Google I/O, May 2011. It shows the increase in pixel count, as opposed to memory bandwidth, as a function of time. It was introduced to motivate the use of hardware rendering (OpenGL) rather than software rendering (Skia): Guy and Haase argue that the memory bus is just too slow to allow software rendering.
The argument is much more powerful when applied to network bandwidth, which is orders of magnitude slower than the internal memory bus.
Here the original slide (the blue rectangle) is updated to current display resolutions. In just a year and a half the number of pixels (e.g. on the Nexus 10) has increased by a factor of four. Both internal memory bandwidth and network bandwidth are improving only slowly. This increase in pixel counts makes remote Android graphics more challenging.
Another change that makes remote Android graphics even more difficult is the 60 frame/sec standard that has been adopted since ICS (Ice Cream Sandwich).
Normally, Android apps are installed on the local device, and no network bandwidth is needed to view the graphics. Remote graphics is typically done by exporting pixels at the framebuffer level. For a 4 Mpixel device (e.g. the Nexus 10) at 60 fps, a 1 Gbyte/sec network is needed. Even with a 100x compression codec (a 100-fold reduction in data volume), a 10 Mbyte/sec (80 Mbit/sec) network is needed.
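The arithmetic in this note can be checked directly; a minimal sketch (assuming 4 bytes per RGBA pixel, which the note implies but does not state):

```python
# Uncompressed pixel-streaming bandwidth for a Nexus 10-class display.
pixels = 4_000_000        # ~4 Mpixel panel
bytes_per_pixel = 4       # 32-bit RGBA (assumption)
fps = 60                  # the ICS-era frame-rate standard

raw = pixels * bytes_per_pixel * fps   # bytes/sec, uncompressed
compressed = raw // 100                # with a 100x video codec

print(raw / 1e6)              # 960.0 -> roughly the 1 Gbyte/sec figure
print(compressed / 1e6)       # 9.6   -> the 10 Mbyte/sec figure
print(compressed * 8 / 1e6)   # 76.8  -> roughly the 80 Mbit/sec figure
```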
As shown in the previous slide, the volume of network data needed to export the graphic stream increases as we approach the pixel level. The higher the layer exported, the more compact and efficient the graphical representation.
We use the rendering layer to export graphics. The volume of data needed is approximately 100 times less than at the pixel level.
Exporting the app at the toolkit level would undoubtedly be more efficient, but a direct approach will not work: the toolkit is dynamically extensible, and there is no way for the server and client sides to reference the same toolkit elements.
The data compression algorithm reduces the volume of data to less than that of the toolkit level. The rendering stream is scanned for sequences of commands that are reverse-engineered into application- and toolkit-level routines. These routines are entered into dictionaries shared by the encoding (server) and decoding (client) ends. Long sequences of rendering commands are then sent from server to client as simple references to dictionary entries.
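A minimal sketch of the shared-dictionary idea. The frame-granularity learning and all names below are illustrative only, not the actual Ascender algorithm; the point is that both ends grow identical dictionaries, so a repeated command sequence later travels as a single small reference:

```python
# Shared-dictionary compression of a rendering-command stream (illustrative).
class Side:
    """One end of the link; server and client keep mirrored dictionaries."""
    def __init__(self):
        self.runs = {}      # tuple(commands) -> dictionary index
        self.by_index = []  # dictionary index -> tuple(commands)

    def learn(self, seq):
        seq = tuple(seq)
        if seq not in self.runs:
            self.runs[seq] = len(self.by_index)
            self.by_index.append(seq)

def encode_frame(server, commands):
    seq = tuple(commands)
    if seq in server.runs:              # seen before: send a tiny reference
        return ('ref', server.runs[seq])
    server.learn(seq)                   # learn it for next time
    return ('lit', seq)                 # first occurrence goes literally

def decode_frame(client, token):
    kind, val = token
    if kind == 'ref':
        return list(client.by_index[val])
    client.learn(val)                   # mirror the server's learning
    return list(val)
```

After the first literal transmission of a frame's command sequence, an identical frame costs only a reference token instead of the full sequence.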
More details can be found at: http://www.ascender.com/remote-graphics
There are four natural targets for exporting graphics in the rendering layer of the ICS graphic stack. We tested the first three (①, ②, ③) by building prototypes. The fourth target, ④, is technically very similar to ①.
The right branch of the rendering stack (①, ②) has been part of Android since the Honeycomb version. It is usually called hardware rendering.
The left branch of the rendering stack (③, ④) has been present in the Android graphic stack since its first release. It is usually called software rendering.
Android allows native OpenGL apps to be written using the NDK. We accessed these applications remotely using our remote-enabled OpenGL (②) rendering layer.
Android allows native Skia (software rendering) apps to be written using the NDK. We accessed these applications remotely using our remote-enabled Skia (③) rendering layer.
This slide illustrates the system architecture of the remote server and the local client. We send the graphic rendering from server to client over a purely simplex (one-way) connection; thus no round-trip delays are incurred in the graphics streaming.
User interactions, however, do incur round-trip latencies.
This slide illustrates an important feature of our remote graphics system: since the rendered pixels are not needed on the remote side, only the upper part of the rendering interface needs to be executed on the remote end.
Thus, for hardware rendering (OpenGL), the lower level, which depends on a hardware GPU, is not needed. This greatly reduces the cost of running the graphic stack on the remote side.
For software rendering (Skia), the lower level, which does the computationally intensive pixel rendering, is not needed. This greatly reduces the computational needs on the remote side.
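The upper/lower split can be pictured as a recording canvas: the server-side rendering interface only records operations, and rasterization happens solely on the client. A hypothetical sketch (class and method names are invented for illustration and are not the real Android/Skia interfaces):

```python
# Server side: the "upper half" of the rendering interface records draw
# operations instead of rasterizing them, so no GPU or pixel work is
# needed remotely.
class RecordingCanvas:
    def __init__(self):
        self.ops = []

    def draw_rect(self, x, y, w, h, color):
        self.ops.append(('rect', x, y, w, h, color))

    def draw_text(self, x, y, text):
        self.ops.append(('text', x, y, text))

    def flush(self):
        """Hand off the recorded frame; this op list crosses the network."""
        ops, self.ops = self.ops, []
        return ops

# Client side: the "lower half" replays the ops into actual pixels.
def rasterize(ops, framebuffer):
    for op in ops:
        if op[0] == 'rect':
            _, x, y, w, h, color = op
            for px in range(x, x + w):
                for py in range(y, y + h):
                    framebuffer[(px, py)] = color
        # ('text', ...) would go to the client's glyph renderer
```

Only the cheap recording step runs on the server; the pixel loop runs on the client's own hardware.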
The compression ratio can be understood to be a product of two factors:
1) The rendering layer is about 100 times more efficient for remote graphics than the pixel layer.
2) The compression routines add an additional factor of about 100.
We can thus render remotely at 60 fps with a bandwidth of typically less than 20 Kbytes/sec, with no compromise of:
● Resolution ● Accuracy ● Frame Rate ● Latency
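Both figures in this note follow from numbers already quoted in the talk; a quick check:

```python
# The two factors in the note above, multiplied out.
rendering_vs_pixel = 100       # rendering layer vs pixel layer
dictionary_compression = 100   # additional compression of the rendering stream
print(rendering_vs_pixel * dictionary_compression)  # 10000: the x10,000 ratio

# Bandwidth from the per-frame figure on slide 14.
bytes_per_frame = 300          # typical encoded GUI frame
fps = 60
print(bytes_per_frame * fps)   # 18000 bytes/sec, under the 20 Kbytes/sec claim
```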
The reason that so many rendering APIs are supported relates to coverage: in the context of an Android app store, coverage is the percentage of apps that can be supported via remote rendering. You would like to remotely support a large percentage of unaltered apps as they currently exist in the app store.
The above Venn diagram illustrates the overlapping coverage of each rendering API. For example: to support Java ICS apps, which render to OpenGL ES 2.0, it is sufficient to support the yellow OpenGLRender API (①). To support a Java Froyo app, the green Canvas (④) or blue Skia (③) API is sufficient. More sophisticated apps might need red OpenGL ES 2.0 API (②) support.
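The coverage logic of the Venn diagram can be expressed as a simple lookup. The app categories and sufficiency sets below are read off the notes above; the code itself is only an illustration:

```python
# Which remote-rendering API suffices for which kind of unaltered app
# (per the Venn-diagram discussion; illustrative mapping).
COVERAGE = {
    'java-ics-app':    {'OpenGLRender'},    # ICS Java apps render to GL ES 2.0
    'java-froyo-app':  {'Canvas', 'Skia'},  # either API is sufficient
    'native-gl-app':   {'OpenGL ES 2.0'},   # NDK OpenGL apps
    'native-skia-app': {'Skia'},            # NDK Skia apps
}

def is_covered(app_kind, supported_apis):
    """True if at least one sufficient API for this app kind is supported."""
    return bool(COVERAGE[app_kind] & set(supported_apis))
```

For instance, a remote-rendering stack that implements only Canvas still covers Java Froyo apps, but not native OpenGL apps.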
It is instructive to contrast our approach with the Nvidia GRID or OnLive cloud gaming systems: both need expensive hardware and use a large amount of network bandwidth.
The enabling technologies that allow for Remote Android Graphics have many uses:
Cloud computing, remote app server
App library, subscription model
App demos
Remote enterprise applications
Set-top boxes
Cloud Gaming