The program states that this is a “keynote” talk, but as this is the Hypertext “Unconference” it will be more like an “Unkeynote”. The title of the unkeynote is “Hypertext: Are we still not there yet?” Let’s first do a quick survey: Who knows what this title alludes to? Answer on the next slide.
The subtitle/slogan of ACM Hypertext ’98 in Pittsburgh was “Are we here yet?”, a slightly odd formulation, as I would phrase it as “Are we there yet?” Next question: what was the answer? A hint: if the answer was “yes”, that would suggest we should stop the ACM Hypertext conference series. But that did not happen… OK, so what was the answer really, in 1998 and in the years after? A hint towards the new answer: if the answer was “yes”, we should shift our attention to making hypertext (now ready) into ubiquitous technology, but if the answer was still “no”, we should first continue to develop hypertext technology and principles. So what did we answer, and given that the answer was initially “no”, when did it become “yes”? Answer: we kept saying “no” until HT’2006, and the number of attendees at ACM Hypertext hit rock bottom. In 2007 we turned things around by starting to discuss making hypertext ubiquitous: we started talking about “Web Science” and we created tracks on hypertext application areas.
Quiz: who invented the concept of hypertext? If you answered Vannevar Bush: who actually read the article “As We May Think” (The Atlantic Monthly, July 1945)? If you think “no” is a reasonable answer because you could not access such an old article, think again: everything is available on-line, and this article is no exception! What did Bush really suggest? The human mind operates by association. The associations are not permanent: they fade if not reused frequently. Selection by association can be mechanized and will then be more permanent. It is interesting that Bush recognized the principle of thinking by association, but considered it a good thing to make the associations more permanent by mechanizing them. We have since come to realize that “fading” is a good thing. In fact, even Google’s PageRank takes into account that frequently used associations make information more important, and we are beginning to use the number of times links are followed as a measure of their relevance.
Next thoughts: the essential feature of the memex is that selecting an item will immediately and automatically select another. So this is linking in some way, but the links are activated automatically, not manually by clicking on a link anchor. Further on in the article Bush does mention the action of “tapping a button” to go from one item to the next. The user is building a trail of many items. There may be annotations, but in essence the user is building a linear story out of fragments gathered from a large storage. Bush thought of a new profession: the trail blazers, people who build linear trails through the mass of the common record. Wait a minute, this is not hypertext! Quiz: how many times does the word “hypertext” appear in Bush’s article? Answer: 0. How many times does the word “web” appear in Bush’s article? Answer: 2, but once it refers to a spider’s web of wires, so that doesn’t count, and the other time to a web of trails carried by the cells of the brain, which is not a web of pages or concepts either, so that doesn’t count either…
If Bush did not invent “hypertext”, who did? Most of you probably have a guess, maybe even a good guess. Let’s go over some clues: First clue is not only the text “You think you know what hypertext means.” but also the fact that this was a handwritten presentation.
Second clue: he does not really seem to like the Web, now does he?
OK, so it is Ted Nelson, but what did he mean by it? We already see that he assigns very different roles to “Origins of Content” and “Re-Use of Content”.
Ted Nelson coined the term “hypertext”, but the “hyper” did not stand for the linking mechanism: he wanted to create a system that would include and connect everything that anyone would ever write. He named the system Xanadu, after the magic place of literary memory in the poem “Kubla Khan” by Samuel Taylor Coleridge. Ted Nelson’s basic ideas: you can link to information that is stored elsewhere and presented separately, and you can also transclude information into the current node or page. Transclusion is like quoting, but the text comes directly from the original source. For transclusion to work you need to be able to do fine-grained (byte-level) addressing. It is the person who creates the transclusion who decides what to transclude; it is not up to the author of a page to decide which positions a link or transclusion may refer to. There should never be an issue with broken links, and a transclusion should always transclude what its author intended. To achieve this there can be no deletion. There can also be no modification of nodes, as this would break what links and transclusions point to. So every update creates a new version, and you may be able to refer to either the generic document or a specific version.
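To make the versioning-plus-transclusion idea concrete, here is a minimal Python sketch (my own illustration with made-up names, not Nelson’s actual Xanadu design): an append-only store where every update creates a new version and a transclusion is just a byte range into an immutable source.

```python
# Toy append-only store with byte-level addressing (illustrative only,
# not Xanadu's actual design; all names here are made up).

class VersionedStore:
    def __init__(self):
        self.versions = {}  # (doc_id, version) -> immutable text

    def publish(self, doc_id, text):
        """Every update creates a new version; nothing is ever deleted."""
        version = 1 + max((v for d, v in self.versions if d == doc_id), default=0)
        self.versions[(doc_id, version)] = text
        return version

    def transclude(self, doc_id, version, start, end):
        """Show a byte range of the original in-line: the quoted text is
        fetched from the immutable source version, never copied."""
        return self.versions[(doc_id, version)][start:end]

store = VersionedStore()
v1 = store.publish("poem", "In Xanadu did Kubla Khan a stately pleasure-dome decree")
quote = store.transclude("poem", v1, 3, 9)  # -> "Xanadu", straight from v1
```

Because old versions are never modified or deleted, the transclusion can never break or silently change, which is exactly the property Nelson was after.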
Hypertext was already working before 1970, with the Hypertext Editing System and FRESS developed by Andries van Dam at Brown University. Another great mind was Doug Engelbart (inventor of the mouse), who designed hypertext as a way to augment the human mind; he realized NLS around 1968. Hypertext became “real”, meaning it became used for applications outside the hypertext research domain. Despite the limitations of the hardware, the need for speed was recognized: KMS, a successor of ZOG (McCracken and Akscyn), was created from the assumption that if you could navigate quickly you would not become disoriented. (I don’t believe that is true, actually.) The idea of nodes as “cards” was introduced in NoteCards and later HyperCard. Cards sound simple, but NoteCards had many types of cards for different purposes, and in HyperCard the functionality of the cards could be programmed completely.
Intermedia was without a doubt the most advanced hypertext system of its era. Unfortunately it ran on the A/UX operating system on a Macintosh, which was never very popular. Intermedia was the third working hypertext project from Brown University; it appeared around 1985. Intermedia was not a single program: different programs for different media were activated through a special linking protocol. You can compare it a little to clicking on an icon in Windows, Linux or MacOS, which automatically activates the program for that document type. It is very different from the browser plug-in approach, where programs all run inside the browser and a fatal error in a plug-in can kill the browser. In Intermedia links were bidirectional, and links were not tied to the documents but to a user: each user could create a different web of links over the same set of documents. Programmability went to the extreme in HyperCard. In fact HyperCard was mostly a user-interface prototyping tool; it just happened to have a goto function to go from one card to another, which gave it linking capabilities. Anchors were just hotspots on the screen, not tied to document parts, so to make a word into a link anchor you had to position an anchor over it. Hyperties was a precursor to open hypermedia: links were tied to anchor terms, not to the source nodes. So in Hyperties every occurrence of a term leads to the same link destination. This may sound limiting, but it is actually a common way to create links anyway.
Here are some illustrations (taken from various sites on the Web) of how advanced the “old” hypertext systems really were. Top right: the Intermedia web view, showing a graphical overview of the link structure. Bottom right: the menu in Intermedia for creating links; note that creating the “start” and the “completion” of a link are separate actions, and note the many other options, like the automatic creation of links, creating annotations, etc. Bottom left: a screenshot from HyperCard, showing stacks of pages, which form the basic navigation method in HyperCard, besides arbitrary links.
Hypertext climaxed in 1990 at a NIST (National Institute of Standards and Technology) workshop: all functionality was represented in reference models. The image here shows the Dexter Reference Model. Key features of the Dexter model: separation of concerns through layers; a very generic structure of nodes and links; links could be unidirectional, bidirectional, or even consist of a series of endpoints of arbitrary length. ACM Hypertext had only just started (HT’87 and HT’89, followed by the first ECHT conference in 1990), but in fact the answer to the question “Are we there yet?” was very close to YES!
The Dexter model is complex, so we only highlight a few aspects here. The main element in Dexter is called the component: a hypertext node is a component, and a link is also a component. There are atomic components (think of them as text fragments) and composite components, like a page consisting of fragments, but also a chapter consisting of sections and sections consisting of pages. Links are sequences of two or more endpoints and can be unidirectional, bidirectional or undirected. Unidirectional links can be followed in only one direction; bidirectional links in both directions, implying that they connect two nodes; and the most complex link structures one can think up have essentially no direction. Undirected links are used in spatial hypertext, for instance. When you follow a link to an abstract concept, say a chapter of a course or book, something needs to happen to display a page. This is a two-stage process. First, a page is selected: when linking to a chapter, the first section may be selected, and because a section is selected the first page of that section may be selected. (A system may also decide to select the first “unvisited” section or page.) Second, the selected page is constructed out of its fragments in order to display something to the user. We will come back to this two-stage process later.
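The two-stage process can be sketched in a few lines of Python (a deliberately simplified reading of Dexter, not the formal model; the data and the “always pick the first child” policy are invented for the example):

```python
# Composite components contain ordered children; atoms are leaf fragments.
composites = {
    "course":   ["chapter1", "chapter2"],
    "chapter1": ["page1", "page2"],
    "page1":    ["frag-a", "frag-b"],  # a "page": a composite of atoms only
}
atoms = {"frag-a": "Hello, ", "frag-b": "hypertext!"}

def select_page(component):
    """Stage 1: descend through composites (here: always the first child)
    until we reach a page, i.e. a composite whose children are atoms."""
    while component in composites and composites[component][0] in composites:
        component = composites[component][0]
    return component

def construct_page(page):
    """Stage 2: build the presentation out of the page's atomic fragments."""
    return "".join(atoms[a] for a in composites[page])

# Following a link to the abstract component "course" ends up displaying:
print(construct_page(select_page("course")))  # -> Hello, hypertext!
```

An adaptive policy (“first unvisited section”) would only change the choice made inside select_page; the construction stage stays the same.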
And then, disaster struck. We all know what that was, right? The Web was invented. Was this a disaster or was this a great thing? Let’s review the main properties of the early Web (next slide).
Tim Berners-Lee invented the Web as a tool for high-energy particle physicists to share documents over the Internet. Through this Web you could not only retrieve but also edit and update documents, to collaborate with each other. The Web ran on the NeXT computer, which was not very popular. Two years later Marc Andreessen from NCSA created the Mosaic for X browser, bringing the Web to the popular Unix operating system with X-Windows. Mosaic made the Web read-only, and since Mosaic quickly became the dominant browser, the Web became read-only. But the Web threw out much more of what existed in hypertext systems and models. Key limiting properties of this early Web: links are no longer rich objects with many anchor points, possibly bidirectional, etc.; links now are simple from-to links from one node to one other node; links are embedded in the source page and tied to the source anchor; you cannot add links to pages: the author has decided where the links are, and links can only point to destinations within a page if the author of that page has created destination anchors; everything on the end-user’s side runs within the browser: plug-ins have been added for different data types, and bad plug-ins can crash the browser; content is served by servers: initially all content was static, later the CGI (Common Gateway Interface) made some server-side generation of content possible; links can break because the link destination can be deleted; and, very strangely, the Web has transclusion of images, which are shown within the page, but initially no transclusion of text. (Later the “object” tag was added to make transclusion of text possible.)
OK, let’s investigate why the Web was successful, where all the better, more sophisticated, more powerful and flexible systems from before 1990 never caught on… Simplicity was the first key. To publish on the Web you needed to set up a server, which for Unix users was very easy (with the CERN server). You needed to write pages, and for that you only needed your own standard text editor. HTML was incredibly simple. It was also very limited, but you could do very basic formatting and create links to any page anywhere on the Web. Around the same time as the Web, the University of Graz introduced Hyper-G, a much better system with links as objects, but it did not become a success, partly because it was too complicated. The next key is availability. This is where Marc Andreessen and Eric Bina from NCSA played a critical role. The original Web was not really “available” because very few people had the NeXT computer needed to run the “browser” (which was a read-write interface). The introduction of NCSA Mosaic for X meant that everyone with Unix and X-Windows could use it, which included a large part of the scientific community. And both the browser and the server were available in source code, for free. The period 1987-1992 was a period in which people started to move away from terminals connected to large computers and towards PCs and workstations. On PCs and Sun and other workstations, Unix was the operating system used, with X-Windows as graphical interface. A browser was needed that would work with Unix and X-Windows; for the Web such a browser became available at the right time, and at the right price (free). The Internet was becoming available in more and more universities, exactly what was needed for setting up a Web infrastructure. The client-server architecture fitted an environment where “file servers” were commonly available in computing centers, to off-load heavy tasks from the less powerful workstations.
The advent of the Web threw us back to pre-1960 in the sense that we lost all the ideas, functionality, features and applications in hypertext that had ever been developed. Wonderful, powerful systems with rich linking paradigms, computed content and nice visualizations were suddenly replaced by a very primitive system with a simple passive browser, no authoring tools, read-only access, and primitive unidirectional links embedded in HTML pages using fixed anchors. How could we ever recover from this disaster? There are two possibilities: “This is bad, let’s build a better alternative.” and “This is bad, let’s make it better.” The first option was quickly doomed because the Web took off so quickly that people no longer accepted any alternative. Hyper-G, later called HyperWave, was doomed and could only survive by becoming a Web architecture. Microcosm (open hypermedia from the University of Southampton) likewise had to be replaced by a layer between server and browser in order to keep open hypermedia alive. So the phrase “If you can’t beat them, join them.” is very applicable to the field of hypertext, and the question “Are we there yet?” can be translated into “Have we integrated everything from 1960-1990 into the Web yet?” A lot of hypertext research has been about bringing old ideas to the Web. This is often more a matter of “engineering” than of “science” (as the ideas are old), but the main difference is that we now actually try to use the ideas in practice, and we evaluate that use as well.
We are “getting there” by building extensions “around” the browser and server, which we must take for granted. Furthermore, everything we do on the client/browser side must be so automatic that it does not require any skills from the end-user. The browser offers two things: a canvas for the extension to draw in, possibly also receiving input events from the user; and a network interface, so that the extension need not worry about how the browser communicates with the server. For generating or adapting the information that is sent to the browser there are three possibilities. First, the information coming from the server can be adapted “on the fly” in a proxy server; this is most useful when you don’t have control over the server. You could for instance add links on the fly, from a link database, perhaps taking a user model into account. So “open hypermedia” and “adaptive hypermedia” are possible through a proxy server. Second, on the server itself you can select, generate or manipulate the content using scripts or servlets; this is the most common way to create advanced Web applications. Third, the information from which Web pages are generated may be extracted from a database; this is what all Web shops do, but also sites like Wikipedia, discussion forums, etc. What’s the result? The Web is merely a “shell” in which we can start rebuilding any type of application we like. The only thing the shell really provides is the HTTP protocol (and that actually isn’t so great either…)
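As an illustration of the first possibility, here is a toy Python version of on-the-fly link injection from a link database, Hyperties-style (one destination per anchor term). The link database and URLs are invented for the example, and a real open-hypermedia proxy is of course far more elaborate:

```python
import re

# A tiny Hyperties-style link database: one destination per anchor term.
# (Invented data; a real linkbase could also consult a user model.)
linkbase = {
    "memex":  "http://example.org/memex",
    "Xanadu": "http://example.org/xanadu",
}

def add_links(html):
    """Rewrite a page on the fly: wrap every occurrence of a linkbase
    term in an <a> element pointing at its stored destination."""
    pattern = re.compile("|".join(re.escape(term) for term in linkbase))
    return pattern.sub(
        lambda m: '<a href="%s">%s</a>' % (linkbase[m.group(0)], m.group(0)),
        html)

print(add_links("<p>Bush described the memex in 1945.</p>"))
# -> <p>Bush described the <a href="http://example.org/memex">memex</a> in 1945.</p>
```

Because the stored pages themselves are untouched, this works even when you have no control over the server, which is the whole point of doing it in a proxy.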
As an example of where the Web is going we take a look at the GRAPPLE project and GALE, the GRAPPLE Adaptive Learning Environment. You don’t have to understand this picture, except to notice that it does not look at all like a Web application. It has a global event bus to which all components are connected. The event bus is asynchronous, meaning that when a component sends a request to the bus it does not wait/hang until there is a response. This is very different from typical Web communication over HTTP, where a client sends a request and then waits for an answer. Still, this is actually a completely Web-based application: the communication uses HTTP, and many components also communicate with an end-user through a normal Web browser. We will look at just one component here: GALE, an adaptive Web server.
When we look at the adaptation engine GALE we get a good impression of what Web applications look like. We see the GALE servlet, which is the connection to the (Tomcat) web server; it handles the incoming requests and arranges for a response. What the figure illustrates nicely is that in modern Web applications 99% of the functionality happens on the server side, behind the scenes. That requests come in through HTTP and that a response to a browser also goes through HTTP is just a minor detail. This brings us to a really important point: Web applications are not by definition hypertext! They are only hypertext if all that stuff happening behind the scenes results in hypertext.
GALE has a structure that shows similarities with the Dexter model. Instead of “components” we have a DM (domain model) with concepts and relations. The concepts may have several resources: they can be composite. And because concepts and resources are identified by URIs, concepts can contain other concepts as well as resources. We have a User Model because our system is adaptive; Dexter did not fully foresee adaptive applications. We do have page selection: when you access a concept, a selection process selects a resource. But what this selects is a URI, which may again refer to a concept, so the traversal of the concept structure continues until you reach a page (resource). And we do have page construction: pages consist of elements that are conditionally included. Not only images but also pieces of text may be transcluded, so that the original is never copied, not even for a quotation. This is not yet as powerful as Xanadu: you can only include whole objects at this time, not an arbitrary piece that is taken at run-time rather than pre-authored. But that is a small addition.
GALE either makes any arbitrary desired processing on the server side possible or can be extended to do so. GALE is configured using Spring (www.springsource.org), and many things can be changed, including all of the following. The processing is done by a pipeline of processors; they can all be replaced by different ones if desired, and new ones can be inserted into the pipeline. For instance, the LoadProcessor loads resources from files or through HTTP requests, but you could create a new LoadProcessor that can also load resources from a database using some query language. The LayoutProcessor generates layout using HTML tables, but there is also a CSSLayoutProcessor that uses CSS with absolute positioning and a FrameLayoutProcessor that uses iframes to create the layout. The XMLProcessor coordinates the adaptation to XML (and XHTML). It uses modules, one per tag, to do the adaptation; you can add or replace modules as desired. The <if> tag is used for conditional fragments. The <object> tag is used for conditionally including a resource; the inclusion is in-line, so this is really a form of transclusion. The <a> tag is used for link adaptation. This is highly configurable: under arbitrary conditions you can assign an arbitrary class to a link anchor, and a style sheet then determines the presentation of these links. You can also adaptively place icons in front of or behind the link anchor. The <for> tag is used for generating repetitive elements, like a fragment for each child concept in a concept hierarchy. The <variable> tag is used for inserting information taken from the DM or UM; you can give a concept a “title” property and then insert it in the page using the <variable> tag. The <view> tag is used to insert a completely generated part, like a menu based on the concept hierarchy. The <test> tag generates (and evaluates) a multiple-choice test.
This is the only tag that is specific to a “learning environment”.
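To give a feeling for what such per-tag adaptation modules do, here is a toy Python processor for just <if> and <variable> (illustrative only: the user model, the condition format and the behavior are invented and much simpler than GALE’s real engine):

```python
import xml.etree.ElementTree as ET

um = {"intro.done": True, "jupiter.visited": 3}  # invented user-model data

def adapt(xml_text):
    """Process a page: drop <if> fragments whose UM condition is false,
    and fill <variable> tags with the named UM value."""
    root = ET.fromstring(xml_text)
    for node in list(root):
        if node.tag == "if" and not um.get(node.get("expr")):
            root.remove(node)           # condition false: drop the fragment
        elif node.tag == "variable":
            node.text = str(um.get(node.get("name")))
    return ET.tostring(root, encoding="unicode")

page = ('<page><if expr="intro.done">Welcome back!</if>'
        '<if expr="quiz.done">You may take the final test.</if>'
        '<variable name="jupiter.visited"/></page>')
print(adapt(page))
# keeps "Welcome back!", drops the quiz fragment, inserts the visit count
```

The point of the one-module-per-tag design is that each tag’s behavior can be replaced or extended independently, exactly as described above.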
Here is a page of an example GALE application: the Milkyway application, created by Eva Ploum. Nothing that is specific to the planet “Jupiter” is written in the page: the page is a template for all planets, so everything you see is taken from the DM or is transcluded. What we see here: on the left is an accordion menu over the concept hierarchy, with links annotated with colors and icons. We see a header listing the number of pages (read and still to do), and links to lists of pages, created using <for> constructs. We see the title “Jupiter”, taken from the DM property “title”. We see that Jupiter is a “Planet” of “Sun”; these words (and links) are generated from DM relations. We see an image, whose URI is in the DM. We see an information paragraph, which is a transcluded <object>. We see a list of moons, generated using a <for> tag running over the “isMoonOf” relation. We see a “Next>>” link, which is actually the Next <view>, generating a link to the next concept in the concept hierarchy (or to the next unvisited concept if you want). We also see the number of visits to the page, generated using a <variable> tag. This is not quite Open Hypermedia yet, as the template page still contains the link anchors; there is no link database. But this application shows that GALE is beginning to close the gap between writing hypertext and engineering a Web application.
So are we there yet? Yes, for the most part we are there. We can do on and with the Web everything we could do before 1990 (finally!). Time to look into hypertext applications more closely. The Web has made hypertext applications possible that people were only dreaming about before 1990; we can now realize these applications and study their actual use by large numbers of users. But we should remain careful not to confuse “hypertext” with “Web application”. If you come up with a really neat Web application that does not actually involve any real hypertext, don’t be surprised to see it rejected at ACM Hypertext! And while we explore what can be done now, it is time to look ahead and come up with great new hypertext ideas. Since the advent of the Web we have seen some great ones: the Semantic Web (Tim Berners-Lee), the Web of data that machines can reason about, and the Web of things: machines, large and small, controlled over the Web. We see completely new approaches to accessing information: ZigZag (Ted Nelson) completely changes how we browse, through a multi-dimensional interface; this idea was already used in the movie Minority Report (a must-see for hypertext researchers). We see mobile applications that people before 1990 could not even dream of: small mobile devices that know where they are, that are connected to the network, and that can be used to read, write, talk and listen. What will *you* come up with? Essentially, if links are a key concept in what you create, it has a place at ACM Hypertext!
Paul de Bra's UnKeynote at Web Art Science London
Are we still not there yet?
a hypertext “unkeynote”
Prof. dr. Paul De Bra
Eindhoven University of Technology
Hypertext, are we still not there yet?
• At HT’98 we asked “Are we here yet?”
• What was the answer, and why?
• if “yes”
then stop HT
else continue HT
• What was the answer really (each year), and why?
• if “yes”
then start making hypertext ubiquitous
else continue to further develop hypertext
• In fact we kept saying “No” until HT’2006
• Once we said “Yes” HT started doing well (again)
Let’s step back into history…
• Vannevar Bush: inventor of hypertext?
“As We May Think” (Atlantic Monthly, 1945):
But what did he really suggest:
• The human mind … operates by association.
With one item in its grasp, it snaps instantly to the next that is
suggested by the association of thoughts, in accordance with
some intricate web of trails carried by the cells of the brain. It
has other characteristics, of course; trails that are not
frequently followed are prone to fade, items are not fully
permanent, memory is transitory.
• Selection by association, rather than indexing, may yet be
mechanized…. it should be possible to beat the mind
decisively in regard to the permanence and clarity of the items
resurrected from storage.
Let’s step back into history…
• What did Bush really suggest (cont.):
• It affords an immediate step, however, to associative indexing,
the basic idea of which is a provision whereby any item may be
caused at will to select immediately and automatically another.
This is the essential feature of the memex. The process of
tying two items together is the important thing.
• …he runs through an encyclopedia, finds an interesting but
sketchy article, leaves it projected. Next, in a history, he finds
another pertinent item, and ties the two together. Thus he
goes, building a trail of many items. Occasionally, he inserts a
comment of his own, either linking it into the main trail or
joining it by a side trail to a particular item…
• There is a new profession of trail blazers, those who find
delight in the task of establishing useful trails through the
enormous mass of the common record.
If Bush did not invent “hypertext”, who did?
• clue 1: who wrote
If Bush did not invent “hypertext”, who did?
• clue 2: who wrote
Some Basic Hypertext Ideas
• Hypertext connects all texts
• Xanadu = magic place of literary memory
• Copying (even quoting) is evil: why quote when you
can show the original in-line?
• cross-reference links are a referral to information
elsewhere; (following the link takes you elsewhere)
• transclusions look like quotes but the original source is
shown in-line; it’s not a copy
(note: transclusion requires fine-grained addressing)
• (to avoid broken links) you can never really delete
anything; you can only create new versions
Quick look at hypertext between 1960 and 1990
• Working hypertext:
• Hypertext Editing System (1967), FRESS (1968)
• NLS, the oN-Line System (1968), from Engelbart’s
project to augment the human mind
• ZOG, first hypertext system with “real” application
(1972 - 1982), used on the USS Carl Vinson
• Performance through simplicity:
• KMS let you navigate quickly (hoping to reduce
disorientation)
• Nodes became “cards” in NoteCards (over 50 types),
later also in HyperCard (with programmed behavior)
Quick look at hypertext between 1960 and 1990
• More and more functionality: Intermedia
• linking protocol to integrate different applications (for
different media); this is like “mash-ups”
• bidirectional links
• links not hardwired to nodes, users can create their
own web of links
• Complete programmability: HyperCard
• link is just the “goto” statement of a programming
language; link anchors are independent of content
• Link databases: Hyperties
• database of anchor-destination pairs
• open hypermedia takes this idea further
• Intermedia web view
• Intermedia create link
(and other options)
• HyperCard stacks
So hypertext systems became sophisticated!
• All functionality of all systems represented in a model:
the Dexter Model (NIST Workshop, 1990)
Essential Dexter elements/properties
• Components: atom, link, composite component
• atom is an atomic fragment
− a page is a composite element (consists of atoms)
− an abstract composite component consists of other
(smaller) components, either abstract or atoms
(a composite with atoms is a “page”)
• link: sequence of two or more endpoints (unidirectional,
bidirectional or even undirected)
• Page selector: when a link destination is abstract
(composite) a page must be selected to be displayed
• Page constructor: after a page is selected the
presentation must be constructed from the atoms
The (early) Web
• Tim Berners-Lee (1989/1990): the Web as an aid for
physicists for sharing documents
• Marc Andreessen (1992): the Mosaic browser made
the Web read-only
• Key properties/limitations in the basic Web:
• uni-directional links between single nodes
• links are not objects (have no properties of their own)
• links are hardwired to their source anchor
• only pre-authored link destinations are possible
• monolithic browser
• static content, limited dynamic content through CGI
• links can break
• no transclusion of text, only of images
So why was this primitive Web successful?
• Especially publishing was very simple (HTML)
• “Everyone” could get it and use it (and it was free)
• The Web became available when Unix and X-Windows
became popular, and when Internet became available
• (Pure) Client-Server Architecture
• This was fitting for the typical computing
infrastructure with powerful file servers and less
powerful workstations
Are we there yet? Take two
• 1990 (start of the Web) threw us back to pre-1960
• Two approaches to recover:
• “This is bad, let’s build a better alternative.”
• “This is bad, let’s make it better.”
• Since 1990 we are working on the second approach:
“If you can’t beat them, join them.”
• “Are we there yet?” = “Have we integrated
everything from 1960-1990 into the Web yet?”
• but also: “Are we using everything from 1960-1990
on the Web yet?”
So how do we get “there”?
• Take the Web browser and server for granted!
Build extensions into this architecture
• browser plugins
• browser applets
• proxy services
• server side scripts
• database back-end
• extend user interface,
• browser offers network int.
• change content “on the fly”
• select or compute content
• better storage
Example: GRAPPLE / GALE
• Overall GRAPPLE Infrastructure:
Things that make GALE into “real” hypertext
• Domain Model (DM) with concepts and relations
• for each concept there may be several resources
• concepts and resources identified by URI
• User Model (UM) with certain attribute values
for each concept
• Links always refer to concepts:
• page selection: concept access involves (recursive)
selection of a resource
• page construction: page may contain
− conditionally included fragments
− conditionally included objects (object transclusion)
Things that make GALE able to get “there”
• Spring configuration lets you change most things:
• Processing is done by a pipeline of processors
(LayoutProc., Loadproc., XMLProcessor, …)
• XML adaptation is done by modules (one per tag)
− <if> for conditional fragments
− <object> for transclusion of objects (concepts)
− <a> for adaptive links
− <for> for generating a list of elements
− <variable> for selecting DM or UM info
− <view> for arbitrary generated views over DM
− <test> for multiple choice tests (specific for e-learning)
Are we there yet?
• Yes (for the most part) we are there. So…
• Time to look into hypertext applications more closely
• Careful not to confuse “hypertext” with “web
application”
• Time to look for new hypertext concepts that go beyond
the state of the art from 1990.
Visionaries have already done so:
− Semantic Web (web of data, leading to web of things)
− Multi-dimensional structures/browsing: ZigZag
− Mobile applications, combining on-line location-
aware communication and information sources
− … what *you* will come up with!