Canvas was invented by Apple to create features like Cover Flow in iTunes.
A comprehensive compatibility chart of supported canvas features in various browsers.
A canvas consists of the canvas HTML5 element. Apart from global attributes like “id”, the only HTML attributes of a canvas are “width” and “height”. The canvas element may contain other code as a fallback for browsers that don’t support the canvas tag, similar to an object element.
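Support can also be tested from script before drawing anything. A minimal sketch — the `doc` parameter stands in for the global `document` object so the function is easy to test in isolation; the DOM calls themselves are standard:

```javascript
// returns true if the browser can create a canvas with a 2D context
function canvasSupported(doc) {
  const el = doc.createElement('canvas');
  // older browsers create a generic element without a getContext method
  return !!(el.getContext && el.getContext('2d'));
}
```

If this returns false, the nested fallback content inside the canvas element is what the user sees anyway.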
Creating a wet floor effect with canvas is extremely powerful, as you can pull in photos from flickr and add effects on the fly, without Photoshop.
Importing an image into a canvas: first copy the image to the canvas. When you’re done, save the current state so you can return to it later.
Drawing the mirror image: restore the saved state, flip the image vertically with the scale() method, move it to the bottom of the original with the translate() method, then draw the mirror image.
Drawing the gradient for the wet-floor fading effect: restore the original image, flip it vertically, create a gradient with an RGBa opacity of 50-100%, and fill a rectangle with the gradient.
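Put together, the three steps above can be sketched as follows. This is a minimal, hypothetical implementation: `wetFloor` is an invented name, it takes a 2D context, a loaded image, and the image’s dimensions, and the gradient colour assumes a white page background.

```javascript
// draw an image at (0,0) plus a fading mirror image below it
function wetFloor(ctx, img, w, h) {
  ctx.drawImage(img, 0, 0, w, h);   // the original image
  ctx.save();                       // remember the untransformed state
  ctx.translate(0, h * 2);          // move the origin below the image
  ctx.scale(1, -1);                 // flip the y-axis vertically
  ctx.drawImage(img, 0, 0, w, h);   // the mirrored copy, occupying y = h..2h
  ctx.restore();                    // return to the saved state
  // fade the reflection out with a 50-100% opacity gradient
  const grad = ctx.createLinearGradient(0, h, 0, h * 2);
  grad.addColorStop(0, 'rgba(255, 255, 255, 0.5)');
  grad.addColorStop(1, 'rgba(255, 255, 255, 1)');
  ctx.fillStyle = grad;
  ctx.fillRect(0, h, w, h);         // cover the mirrored half
}
```

On a real page you would obtain the context with `canvas.getContext('2d')` and call `wetFloor` once the image’s load event has fired.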
The transform() method combines the other three (scale, rotate, translate) and can be used to slant an image, thus creating a pseudo perspective effect.
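What transform(a, b, c, d, e, f) does to a point can be written out directly; a slant is just a shear matrix. A small sketch of the underlying arithmetic (the function names are invented for illustration):

```javascript
// ctx.transform(a, b, c, d, e, f) maps a point (x, y) to
// (a*x + c*y + e, b*x + d*y + f)
function applyTransform(a, b, c, d, e, f, x, y) {
  return [a * x + c * y + e, b * x + d * y + f];
}

// a horizontal shear, i.e. ctx.transform(1, 0, k, 1, 0, 0)
function shear(k, x, y) {
  return applyTransform(1, 0, k, 1, 0, 0, x, y);
}
```

shear(0.5, 10, 20) yields [20, 20]: the point is pushed right in proportion to its height, which is what produces the slanted pseudo-perspective look.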
Another example of using transform (in CSS3) to create a 3D effect of a cube. Note that even HTML buttons can be mapped on such surfaces.
Plotr uses canvas transformations to create 3D graphs, showing the numeric values on hover. Video: http://www.flickr.com/photos/martin-kliehm/3668738661/in/set-72157620689437384/
Myles Eftos presented his font rendering at Web Jam 2007 in Australia. He wrote a script to translate SVG into canvas so that he could use the SVG paths defined in a True Type Font to render text. Video: http://www.flickr.com/photos/martin-kliehm/3669711754/in/set-72157620689437384/
Native canvas text support was added in Firefox 3.
Cufón does the same as Myles’ script, translating fonts into canvas. Of course the same accessibility issues arise with Cufón as with sIFR regarding scalability and the inability of a user style sheet to override the colour and background colour of the generated text.
Currently the biggest issue with canvas is its lack of accessibility. A canvas is just a flat bitmap without any DOM structure, which makes it fast (as you’ll see later), but at the same time this lack of structure makes it inaccessible to assistive technologies.
The canvas element is invisible to MSAA. Even the nested fallback image no longer appears. Video: http://www.flickr.com/photos/martin-kliehm/3668917051/in/set-72157620689437384/
At the same time canvas is a powerful tool for enhancing accessibility. We have known filters from Internet Explorer for ages; now the same functionality is available in canvas. Pixels can be changed to another colour one by one. Possible applications are enhancing the colour contrast, simulating colour blindness, or using filters with edge detection algorithms to identify objects in an image.
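A per-pixel contrast filter is only a few lines once you have the pixel data. A minimal sketch — `boostContrast` and the `factor` parameter are invented names, but the data layout (a flat RGBA array, four bytes per pixel) is the standard format returned by ctx.getImageData():

```javascript
// stretch the contrast of every pixel around the midpoint 128
function boostContrast(imageData, factor) {
  const d = imageData.data;       // flat RGBA array, 4 bytes per pixel
  for (let i = 0; i < d.length; i += 4) {
    for (let c = 0; c < 3; c++) { // red, green, blue; leave alpha alone
      d[i + c] = Math.min(255, Math.max(0, (d[i + c] - 128) * factor + 128));
    }
  }
  return imageData;
}
```

The filtered result goes back onto the canvas with ctx.putImageData(imageData, 0, 0).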
You have seen earlier that SVG paths can be translated to canvas paths. Now openstreetmap.org offers an export function to SVG, and I believe Yahoo! has made its paths public recently (although I can’t find the URL any more). The proof of concept by Ernest Delgado is slightly related, although he doesn’t render the maps in canvas but imports the slices from openstreetmap.org.
Canvas is faster for rendering objects than SVG, and it was built into Google Maps in November 2008. Video: http://www.flickr.com/photos/martin-kliehm/3669738142/in/set-72157620689437384/
A parallelogram is not equal to a trapezium – a slanted image is not the same as a perspective-correct 3D image. Since 3D isn’t natively built into canvas at the moment, people search for workarounds.
A solution has been proposed by Ernest Delgado on the YUI blog: slice an image into 1px wide sections.
Since a canvas works with references to the original image instance instead of creating hundreds of new images, slicing can be done without extra load.
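The slicing relies on the nine-argument form of drawImage(), which copies a source rectangle into a (differently sized) destination rectangle. A rough sketch with invented names — `skew` controls how far the “far” edge is displaced, and the shrink factor of 0.3 is arbitrary:

```javascript
// draw an image as 1px-wide vertical slices, each shifted and
// scaled a little more than the last, to fake perspective
function drawSliced(ctx, img, w, h, skew) {
  for (let x = 0; x < w; x++) {
    const offset = (x / w) * skew;           // vertical shift grows across the image
    const scale = 1 - (x / w) * 0.3;         // columns shrink toward the far edge
    ctx.drawImage(img, x, 0, 1, h,           // source: one 1px-wide column
                  x, offset, 1, h * scale);  // destination: shifted and scaled
  }
}
```

Because every call references the same image instance, a 500px-wide photo costs 500 drawImage calls but no extra image copies.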
There is a problem with the slices: it is a rough technique that results in little steps at the edges.
What we need is something like anti-aliasing.
This is called subpixel accuracy. Dividing a pixel into smaller subpixels enhances the smoothness. There are various algorithms to achieve this, and game developers have come up with a more performant bit-shifting technique for inverse square roots to avoid expensive division. Note the jittering in the lower left animation compared to the smooth rendering of the others. Video: http://www.flickr.com/photos/martin-kliehm/3669901222/in/set-72157620689437384/
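The bit-shifting trick mentioned here is the famous “fast inverse square root” from game development (best known from Quake III). A JavaScript transcription, using typed arrays over a shared buffer to reinterpret the float’s bits as an integer:

```javascript
// approximate 1/sqrt(x) without a division or a sqrt call
function fastInvSqrt(x) {
  const buf = new ArrayBuffer(4);
  const f = new Float32Array(buf);  // view the buffer as a 32-bit float...
  const i = new Uint32Array(buf);   // ...and as its raw bit pattern
  f[0] = x;
  i[0] = 0x5f3759df - (i[0] >> 1);  // the magic-constant bit-level guess
  const y = f[0];
  return y * (1.5 - 0.5 * x * y * y); // one Newton iteration refines it
}
```

One Newton iteration brings the initial bit-level guess to within roughly 0.2% of 1/Math.sqrt(x), which is plenty for smoothing animation steps.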
A faster solution for texture mapping is subdivision into triangles. In affine mapping the triangles are just slanted, a technique used in early games like Doom that restricted the world to vertical walls and horizontal floors and ceilings. However, perspective-correct rendering re-calculates the position of every pixel.
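The difference shows up in how a texture coordinate u is interpolated between two vertices at different depths z. Affine mapping interpolates u directly; perspective-correct mapping interpolates u/z and 1/z and divides at the end. A small numeric sketch (function names invented):

```javascript
// affine: plain linear interpolation of the texture coordinate
function affineU(u0, u1, t) {
  return u0 * (1 - t) + u1 * t;
}

// perspective-correct: interpolate u/z and 1/z, then divide
function perspectiveU(u0, z0, u1, z1, t) {
  const num = (u0 / z0) * (1 - t) + (u1 / z1) * t;
  const den = (1 / z0) * (1 - t) + (1 / z1) * t;
  return num / den;
}
```

Halfway between a near point (z = 1, u = 0) and a far point (z = 3, u = 1), affine interpolation gives u = 0.5 while the perspective-correct value is u = 0.25: the far half of the surface should cover fewer pixels, which is exactly what affine mapping gets wrong.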
Another accessibility problem in 3D worlds like games or Second Life is providing a structure for objects. In game development this is called a “scenegraph”: a nested list of objects and their child objects. Thus a room in Second Life could contain a list of persons who are in that room. A shopping mall in the future of 3D internet could be represented as a nested list of shops on different “floors”. I would leave it to the operating system to provide blind people with information about the current position and proximity of objects, just as the iPhone 3GS does.
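Such a scenegraph can be a plain nested structure. A hypothetical sketch (all names invented) with a helper that flattens it into the kind of ordered, depth-annotated list a screen reader could walk:

```javascript
// a shopping mall as a scenegraph: objects with child objects
const scene = {
  name: 'mall',
  children: [
    { name: 'floor 1', children: [{ name: 'bookshop', children: [] }] },
    { name: 'floor 2', children: [{ name: 'shoe shop', children: [] }] }
  ]
};

// walk the graph depth-first and record each object with its nesting depth
function flatten(node, depth = 0, out = []) {
  out.push({ name: node.name, depth });
  for (const child of node.children) {
    flatten(child, depth + 1, out);
  }
  return out;
}
```

flatten(scene) produces the mall, then each floor followed by its shops, with the depth field telling assistive technology how deeply each object is nested.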
Rotating planes pulling images from the Lost Boys photo group in flickr with applied triangle subdivision. An accessible structure would contain a nested list of three planes each having two sides with nine images. Video: http://www.flickr.com/photos/martin-kliehm/3669829424/in/set-72157620689437384/
Adaptive triangle subdivision takes the amount of distortion into account and further subdivides triangles into smaller triangles if necessary. Video: http://www.flickr.com/photos/martin-kliehm/3669837798/in/set-72157620689437384/
Opera has built a 3D canvas model into a special version of the browser ... Video: http://www.flickr.com/photos/martin-kliehm/3669836864/in/set-72157620689437384/
... so is Mozilla ...
... and Google is working on something similar. They cooperate in the Khronos Group for “3D acceleration on the web”. So it appears that the future of the web could be 3D. We must take every possible precaution to keep it accessible. Video: http://www.youtube.com/watch?v=uofWfXOzX-g
HTML5 video has been added as an input format for canvas. Real-time pixel filtering in videos like setting the RGBa alpha channel of green pixels to become transparent is possible now as it has been before for still images. Video: http://www.flickr.com/photos/martin-kliehm/3669844662/in/set-72157620689437384/
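The green-screen effect described here boils down to one pass over the frame’s pixels. A minimal sketch — the threshold values are arbitrary, and in a real page the frame comes from ctx.drawImage(video, 0, 0) followed by ctx.getImageData():

```javascript
// make predominantly green pixels fully transparent
function chromaKey(data) { // data: flat RGBA array from getImageData()
  for (let i = 0; i < data.length; i += 4) {
    const r = data[i], g = data[i + 1], b = data[i + 2];
    if (g > 100 && g > r * 1.4 && g > b * 1.4) {
      data[i + 3] = 0; // zero the alpha channel
    }
  }
  return data;
}
```

Run once per frame (e.g. from requestAnimationFrame or a timer) and written back with putImageData(), this keys out the green background of a playing video in real time.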
This is an example in SVG, but colour manipulation and edge detection are also possible in a canvas video now. Video: http://www.flickr.com/photos/martin-kliehm/3669889522/in/set-72157620689437384/
With edge and object detection new intuitive interfaces could be created allowing direct object manipulation. (This example is merely to show the possibilities, it hasn’t been created in a canvas – yet.) Video: http://www.youtube.com/watch?v=ib_g7F6WKAA
Although “face gestures” were an April Fool’s Day joke from Opera, with edge detection filtering face recognition controls would become possible. Video: http://www.youtube.com/watch?v=kkNxbyp6thM
So whatever the future will be, it will be exciting!