Following my invitation to speak at the WWW@20 celebrations - this is my attempt to squash the interesting bits into a somewhat coherent 15min presentation.
WWW@20: what does the history of the web tell us about its future?
1. What does the history of the web tell us about its future?
Wednesday, March 25, 2009
2. 20 years ago, while at CERN, Tim Berners-Lee
wrote a memo proposing a method of sharing
information
Information Management: A Proposal http://info.cern.ch/Proposal.html
4. The following year TimBL got a NeXT Cube and
wrote the first webserver and client
The first webserver : http://www.flickr.com/photos/tascott/3357832896/
5. So what did TimBL invent?
Colon Slash Slash http://www.flickr.com/photos/jeffsmallwood/299208539/
6. There really is no Web 2.0 – we’re still
implementing the original spec
Dan Brickley & TimBL
7. The original web was read-write
The first webserver was read write http://www.flickr.com/photos/tascott/3357826152/
8. One web and device independence
Online communities : http://xkcd.com/256/
9. People, things, concepts and data.
Sweet, sweet data
How it works: The Web http://flickr.com/photos/danbri/2415237566/
10. To understand the future of the web you need to
understand its history – a web of things not a web
of documents
11. Tom Scott
derivadow.com
Editor's Notes
20 years ago TimBL was working at CERN as a computer scientist, alongside a bunch of physicists. What he noticed was that, much as in the rest of the world, sharing information between research groups was incredibly difficult.
Everyone had their own document management solution, running on their own flavour of hardware over different protocols.
His solution was a lightweight method of linking up existing stuff over IP - a hypertext solution - which he dubbed the World Wide Web, and which he described in this memo.
If you’ve not read it you really should.
Then nothing happened for a year or so. Nothing happened for a bunch of reasons:
The ARPANET and then IP was popular in America; in Europe, less so - indeed, senior managers at CERN had recently sent a memo to all department heads reminding them that IP wasn’t a supported protocol. People were being told not to use it!
Also, because CERN was full of engineers, everyone thought they could build their own solution and do better than what had already been built - no one wanted to play together.
And also of course because CERN was there to do particle physics not information management.
But then TimBL got his hands on a NeXT Cube - officially he was evaluating the machine not building a web server - but, with the support of his manager, that’s what he did.
For those that don’t know, NeXT was Steve Jobs’s company during his time in the wilderness between his two stints at Apple, and its operating system went on to form the basis of OS X.
There then ensued a period of negotiation to get the idea out freely, for everyone to use - which happened in 1993. This coincided, more or less, with the University of Minnesota's decision to charge a license fee for Gopher.
The web then took off, especially in the US where IP was already popular.
The beauty of the proposal was its simplicity - it was designed to work on any platform and, importantly, with existing technology.
They knew that to make it work it had to be as easy as possible. He wanted people to do just one thing: give things identifiers - links - URIs - so information could be linked and discovered.
This, then, is the key invention: the URL.
To make this work the URL scheme was designed to work with existing protocols. Especially FTP and Gopher. That’s why there’s a colon in the URL - so that URLs can be given for stuff already using other protocols.
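You can see this if you pull a URL apart - the scheme before the colon names the protocol, and the rest is the same shape whatever that protocol is. A quick sketch using Python’s standard urllib.parse (the FTP and Gopher hosts below are made-up examples):

```python
from urllib.parse import urlparse

# The scheme before the colon says which protocol to use; the same
# URL syntax can identify web, FTP or Gopher resources alike.
for url in [
    "http://info.cern.ch/Proposal.html",
    "ftp://ftp.example.org/pub/readme.txt",  # hypothetical FTP host
    "gopher://gopher.example.org/1/",        # hypothetical Gopher host
]:
    parts = urlparse(url)
    print(parts.scheme, parts.netloc, parts.path)
```

The parser doesn’t care which protocol it is - only the scheme changes, which is exactly what made it possible to give URLs for stuff already published over other protocols.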
He also implemented a quick, tactical solution to get things up and running and demonstrate what he was talking about: HTML. HTML was just one of a number of supported document types - it wasn’t intended to be the document type. It took off because it was easy.
The plan was to implement a mark-up language that worked a bit like the NeXT app builder. But they didn’t get round to it before Mosaic came along, and then it was all too late. We’ve been left with something so ugly I doubt even its parents love it.
The curious thing, though, is that if you read the original memo, despite its simplicity it’s clear we’re still implementing it - we’re still working on the original spec.
It’s just that we have tended to forget what it said, or got sidetracked for a while with some other stuff.
For example the original web was read write.
Not only that, but it used style sheets and a WYSIWYG editing interface - no tags, no mark-up. Why would anyone want to edit the raw mark-up?
You can also see that the URL is hidden; you get to it via a property dialog.
This is because the whole point of the web is that it provides a level of abstraction, allowing you to forget about the infrastructure - the servers and the routing - and worry only about the document. Those who remember WarGames will recall that they phoned up the different computers directly: you needed that networking information (a phone number) to access the information; you had to know its location before you could use it. The beauty of the web and the URL is that the location shouldn’t matter to the end user.
URIs are there to provide persistent identifiers across the web - not a function of ownership, branding, look and feel or anything else.
The original team responsible for the Web described the IT ecosystem in which it was developed as a “zoo” - there were so many different bits of hardware and different operating systems out there.
The purpose of the web was to be ubiquitous - to work on any machine - open to everyone. It was designed to work no matter what machine or operating system you are running on the server or the client.
The solution: one identifier - one HTTP URI - dereferenced to the appropriate document based on the capabilities of that machine.
For example, people often think the original web didn’t support images - it did, just not inline images. But since most clients couldn’t deal with images, most didn’t display them.
We are, or should be, adopting the same approach with mobiles, IPTV and connected devices - one URI for a resource, letting each client request the document it needs. As Tim intended.
The technology is there to do this - we’re just not using it.
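As a sketch of what that dereferencing can look like: a toy content-negotiation function that picks a representation of one resource from the client’s Accept header. The resource bodies and media types here are illustrative assumptions, and a real implementation would also honour q-values when ranking alternatives:

```python
# One URI, several representations; the server chooses based on
# what the client says it can accept. Bodies are placeholders.
REPRESENTATIONS = {
    "text/html": "<html><body>The resource, as a page</body></html>",
    "application/rdf+xml": "<rdf:RDF><!-- the resource, as data --></rdf:RDF>",
}

def negotiate(accept_header: str) -> tuple[str, str]:
    """Return (media_type, body) for the first acceptable type listed.

    Simplified: takes types in the order given and strips q-values
    rather than ranking by them.
    """
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()  # drop ";q=..."
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    # Nothing matched: fall back to HTML.
    return "text/html", REPRESENTATIONS["text/html"]

media_type, body = negotiate("application/rdf+xml;q=0.9, text/html")
print(media_type)
```

The point is that the URI never changes - a desktop browser, a phone and a data client all dereference the same identifier and get a document suited to them.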
The original memo also talked about linking people, documents, things, concepts and data. But we are only now getting round to building it.
Through technologies such as OpenID and FOAF, we can give people identifiers on the web and describe their social graph - the relationships between people.
And RDF for data - a way of publishing data that machines can process, describing the nature of, and the relationships between, the different nodes of data.
The original memo assumed, and the original server supported, link typing, so that you can describe not only real-world stuff but also the nature of the relationships between those things. You know, like RDF and HTML5 let you do 20 years later.
This is all a good idea because it lets you treat the web like a giant database - making computers human-literate by linking up bits of data so that the tools, devices and apps connected to the web can do more of the work for you, making it easier to find the things that interest you.
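The “giant database” idea can be sketched as typed links stored as subject-predicate-object triples and queried by pattern, much as RDF stores do; the names and relations below are made up for the example:

```python
# Toy triple store: each typed link is (subject, predicate, object).
triples = [
    ("TimBL", "worksAt", "CERN"),
    ("TimBL", "invented", "WorldWideWeb"),
    ("WorldWideWeb", "runsOn", "NeXT Cube"),
    ("DanBri", "knows", "TimBL"),  # a FOAF-style "knows" relation
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(subject="TimBL"))    # everything asserted about TimBL
print(query(predicate="knows"))  # the social-graph edges
```

Because the links are typed, a machine can ask meaningful questions - “who knows whom”, “what runs on what” - instead of just following untyped hyperlinks between documents.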
The semantic web project - and in so many ways the original memo - is all about helping people access data in a standard fashion, adding another level of abstraction so that people can focus on the things that matter to them rather than the pages and documents about them.
So it seems to me that to understand the future of the web you first need to understand its origins.
Don’t think about HTML documents - think about the things and concepts that matter to people, and give each its own identifier, its own URI.
And then put in place the technology to dereference that URI to the document appropriate to the device - whether that be a desktop PC, a mobile device, an IPTV or a third-party app.