Making computers human literate WWW@20

As part of the web's 20th anniversary celebrations at CERN (13th March 2009), I was invited to speak on the "future of the web" panel.

Usage Rights

CC Attribution-NonCommercial-ShareAlike License

  • Stephen Fry recently noted that the challenge is not to make humans computer literate, but to make computers human literate.

    And when one considers the revolution we've seen over the last 20 years, I think we are making great progress towards that goal. Access to information has become democratized in a way never seen before.

    So what of the next 20 years? Obviously I don't know, but I will steal an idea from William Gibson and suggest that the future is already here, it's just not evenly distributed yet.
  • I work for the BBC and it’s a big place - we produce and publish an amazing volume and diversity of content, and I would like to suggest that in some ways it represents a microcosm of the wider web.

    We produce so much content that I suspect a traditional, centralised design-and-build approach would never work. It wouldn’t work from a UX point of view, nor from a coordination and governance point of view.

    You simply couldn’t sit down, gather requirements and build a left-hand-nav style website. The coordination effort alone would kill you.

    As a result the BBC has historically created a series of microsites, each coherent in its own right but not across the breadth of BBC content.

    Consider, for example: I can navigate around a Radio 4 site about the opening of the LHC... but...
  • I can’t carry on my journey to find everything the BBC knows about Brian Cox... but it’s nothing personal to Brian. You can’t...
  • ...find everything the BBC knows about lions or any other species...
  • ...or even one of our presenters, like Jeremy Clarkson.
  • But things are changing...

    It has been my honour to work on a few projects where we took a different approach: starting with the data and how people think about it, rather than starting with the web page or, worse, a Photoshop document.

    And when I say data I really mean starting with an understanding of the concepts and things people care about, and giving each of those things a URI.

    /programmes ensures every programme the BBC broadcasts has a web presence, has a URI - and that that URI can be dereferenced to return an HTML document, an RDF document, JSON, iCal or mobile views.

    /music (currently in beta) is built with MusicBrainz and gives us a page for every artist the BBC plays; in due course it will give us a page per track. The plan is then to integrate this with /programmes, so that from an episode page at /programmes you can click on an artist in a tracklisting and find out more about that artist, including other programmes that have played that artist - hopefully introducing people to new music and new programmes.

    And because it’s built with MusicBrainz and integrated with DBpedia, not only do we get a URI per artist, we also get links into the rest of the web and lovely web-scale identifiers that make it easier for others to integrate.

    I’m now working on a new project, BBC Earth, which is seeking to bring the BBC’s Natural History content online in a similar fashion: a page per species, habitat, behaviour and adaptation that the BBC cares about, all linked to the programme space and the wider web through DBpedia.

    And of course, as with programmes and music, the API is the website - the URIs can return RDF, JSON etc. as well as HTML. (There are a couple of illustrative sketches of this after these notes.)
  • Of course, what I’m talking about is Linked Data... even if we didn’t quite realise that when we started.

    But the idea that we should care about our URIs, care about having one URI per concept, and care about having machine representations for those resources instead of a separate API has helped us build a coherent, scalable, sane service - one that we hope is a bit more human literate.

    The semantic web project has helped the BBC start to move away from caring about the document and towards the ideas, concepts and things we as people care about.

    So you can find all things Brian Cox, Lion or Jeremy Clarkson.
  • It is my hope that the future of the web is human literate, and my belief that the way to achieve this is by following the principles of Linked Open Data: HTTP URIs for concepts and things that make sense to people, linked to related things and dereferenceable to the appropriate document.

    I say it is my hope because it has been my experience at the BBC that this approach scales, in a way no other can, to deliver coherent, usable services - human literate services.
  • Many thanks.
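
To make “the API is the website” idea concrete, here is a minimal sketch of dereferencing a single /programmes URI for different representations via HTTP content negotiation. The programme identifier (PID) below is hypothetical, and the exact media types served are an assumption based on the notes above, not a guaranteed description of the live BBC service.

    # Sketch: one URI, many representations, via HTTP content negotiation.
    # The PID is hypothetical; the media types offered are assumptions
    # based on the talk, not a guarantee of the live service.
    import urllib.request

    PROGRAMME_URI = "https://www.bbc.co.uk/programmes/b0000000"  # hypothetical PID

    for accept in ("text/html", "application/rdf+xml", "application/json"):
        request = urllib.request.Request(PROGRAMME_URI, headers={"Accept": accept})
        with urllib.request.urlopen(request) as response:
            # Same concept, same URI; only the representation varies.
            print(accept, "->", response.headers.get("Content-Type"))

And a companion sketch of the “links into the rest of the web” point: following owl:sameAs links from a /music artist description out to MusicBrainz and DBpedia. It assumes the rdflib library and an artist URI that serves RDF; the MusicBrainz GID shown is illustrative, not a real artist identifier.

    # Sketch: following sameAs links from a BBC /music artist into the wider
    # Linked Data web. Requires rdflib; the artist GID below is illustrative.
    from rdflib import Graph
    from rdflib.namespace import OWL

    graph = Graph()
    # BBC /music artist URIs are minted from MusicBrainz identifiers.
    graph.parse("https://www.bbc.co.uk/music/artists/00000000-0000-0000-0000-000000000000.rdf")

    for artist, _, same in graph.triples((None, OWL.sameAs, None)):
        print(artist, "is also", same)  # e.g. DBpedia / MusicBrainz URIs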

Making computers human literate WWW@20: Presentation Transcript

  • Making computers human literate. Tom Scott
  • The challenge is not to make humans computer literate, but computers human literate. Liverpool Street station crowd blur http://www.flickr.com/photos/victoriapeckham/164175205/
  • The BBC has historically created a series of microsites – each coherent in their own right but not across the breadth of BBC content. Radio 4 Big Bang http://www.bbc.co.uk/radio4/bigbang/
  • I can’t carry on my journey to find everything Brian Cox...
  • ...everything Lions...
  • ...or Jeremy Clarkson.
  • Things are changing: URIs, data and things instead of webpages and Photoshop.
  • We’re talking Linked Data. Linked Data cloud diagram http://www4.wiwiss.fu-berlin.de/bizer/pub/lod-datasets_2009-03-05_colored.png
  • Adopting LOD principles makes sense because you create coherent usable services – human literate services.
  • Tom Scott derivadow.com Colon Slash Slash http://www.flickr.com/photos/jeffsmallwood/299208539/