The document discusses the concept of serendipity and how to increase serendipitous discoveries through database and system design. It defines serendipity as the occurrence of beneficial discoveries by chance and describes three steps to encourage serendipity: 1) remove isolation by increasing connections across semantic and contextual boundaries, 2) allow information to traverse multiple hops, and 3) weight and filter information based on relevance and user feedback. Graph databases are said to better support serendipity compared to relational databases by more easily facilitating these three steps.
The Semantic Web is a relatively modern concept, coined by Sir Tim Berners-Lee in 2001. Web 2.0 is readable by humans: we have HTML5 and CSS, and they do a great job of allowing information to be read by humans. Where Web 2.0 fails is in supporting machine reading. This brings us to Web 3.0. Being able to store data is great, but often what we are most interested in is not the data itself, but the relationships between and among data. Think about how hard it currently is to get all water features. Those features are often in different services and provided by different organizations. I want to quickly and easily get all water features nationally. This is where a Semantic Web would be very useful, because one can store the relationships between data to give you all water features. This talk will show you some of the advantages of a Semantic Web and how it can be used to answer questions that one would struggle to answer without it.
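As a toy illustration of why stored relationships matter (the dataset and names below are invented, and real Semantic Web data would use RDF and SPARQL rather than Python tuples), a handful of subject-predicate-object triples is enough to answer "all water features" across categories:

```python
# Minimal triple-store sketch (illustrative only). Dataset and feature
# names are invented; real data would live in RDF and be queried via SPARQL.
triples = [
    ("lake_mead",   "is_a",        "reservoir"),
    ("reservoir",   "subclass_of", "water_feature"),
    ("snake_river", "is_a",        "river"),
    ("river",       "subclass_of", "water_feature"),
    ("route_66",    "is_a",        "road"),
]

def instances_of(cls):
    """Return every subject that is an instance of cls, directly or via subclass_of."""
    subclasses = {cls}
    changed = True
    while changed:  # transitively expand subclass_of links
        changed = False
        for s, p, o in triples:
            if p == "subclass_of" and o in subclasses and s not in subclasses:
                subclasses.add(s)
                changed = True
    return {s for s, p, o in triples if p == "is_a" and o in subclasses}

print(sorted(instances_of("water_feature")))  # ['lake_mead', 'snake_river']
```

Because the relationships (`is_a`, `subclass_of`) are stored alongside the data, the rivers and reservoirs come back from one query even though they were recorded under different categories, which is the point the talk makes about nationally scattered water-feature services.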
An updated "what is happening on the Semantic Web" presentation for 2010 - includes business use, government use, and some speculation on the current areas of excitement and development. A very accessible talk, not aimed solely at a technical audience.
Advanced Search Grammar Tool for locating functional non-coding sequences in ... (Startup)
Advanced search and flexible grammar tool for biologists to locate functional non-coding sequences (cis-regulatory modules) in a genome, along with the display of annotation.
Complex hierarchical relationships between entities are difficult to map in a relational database, and demanding queries are usually quite slow.
Graph databases are optimized for exactly these kinds of relationships and can deliver high performance even with huge amounts of data. Moreover, not only the entities stored in the database have attributes, but their relationships do as well. Queries can examine entities as well as their relationships.
Get to know the basics of graph databases, using Neo4j as an example, and see how they are used in C# projects.
If information stewards and custodians are to collect, create, appraise, preserve, store, use and access sophisticated, flexible, responsive and future-friendly content at scale, then they will have to think strategically about who is going to use the content, and how and where they are going to consume it. COPE (Create Once, Publish Everywhere) is an acronym describing how content should be conceived once and then disseminated through multiple conduits. The goal of COPE is to capture all content (text, media), context and metadata in a single manner, and then ensure that this content can be accessed and used across a range of publishing platforms.
More than ever, we need to learn how to harness the power of networks to tackle the complex issues we're facing as a society. Here's a quick guide to the basics of social network analysis.
Interested? Sign up at http://kumu.io
About the Webinar
The library and cultural institution communities have generally accepted the vision of moving to a Linked Data environment that will align and integrate their resources with those of the greater Semantic Web. But moving from vision to implementation is not easy or well-understood. A number of institutions have begun the needed infrastructure and tools development with pilot projects to provide structured data in support of discovery and navigation services for their collections and resources.
Join NISO for this webinar where speakers will highlight actual Linked Data projects within their institutions—from envisioning the model to implementation and lessons learned—and present their thoughts on how linked data benefits research, scholarly communications, and publishing.
Speakers:
Jon Voss - Strategic Partnerships Director, We Are What We Do
LODLAM + Historypin: A Collaborative Global Community
Matt Miller - Front End Developer, NYPL Labs at the New York Public Library
The Linked Jazz Project: Revealing the Relationships of the Jazz Community
Cory Lampert - Head, Digital Collections , UNLV University Libraries
Silvia Southwick - Digital Collections Metadata Librarian, UNLV University Libraries
Linked Data Demystified: The UNLV Linked Data Project
In search of lost knowledge: joining the dots with Linked Data (jonblower)
These slides are from my seminar to the University of Reading Department of Meteorology, November 2013. They contain a (hopefully not very technical) introduction to the concepts of Linked Data and how we are applying them in the CHARMe project (http://www.charme.org.uk). In CHARMe we are using Open Annotation to connect users of climate data with community-generated "commentary information" that helps them to understand a dataset's strengths and weaknesses.
The slide notes contain some helpful context, so you might like to download the PPT file!
The slides are licensed as "Creative Commons Attribution 3.0", meaning that you can do what you like with these slides provided that you credit the University of Reading for their creation. See http://creativecommons.org/licenses/by/3.0/.
Opening up and linking data is becoming a priority for many data producers because of institutional requirements, or to consume data in newer applications, or simply to keep pace with current development. Since 2014, this priority has been gaining momentum with the Global Open Data in Agriculture and Nutrition initiative (GODAN). However, typical small and medium-size institutions have to deal with constrained resources, which often hamper their ability to make their data publicly available. This webinar will be of interest to any institution seeking ways to publish and curate data in the Linked Data world.
Research into Practice case study 2: Library linked data implementations an... (Hazel Hall)
The research underlying this presentation explored the role that libraries play in the linked data context. Focusing on European national libraries and Scottish libraries, multiple data gathering methods and constant comparative analysis were applied in the study. Amongst the findings, a general lack of awareness within the library community of the Semantic Web and the implications of linked data was identified. At the same time, there is recognition that linked data augments the discoverability and enhances the interoperability of library data. The presentation will include recommendations for the application of the findings of this research in practice.
Brief overview of open data, big data and sharing data; discussion followed (based on Alastair Croll's presentation at ALA). Robin Fay (@georgiawebgurl); Peter Murray (LYRASIS).
5 things Cucumber is bad at, by Richard Lawrence (Skills Matter)
This talk will look at 5 things Cucumber’s bad at, why that’s a good thing, and what it tells us about Cucumber’s sweet spot in a team’s toolkit.
Many times, when people complain about something Cucumber’s not good at, they’re unwittingly describing something Cucumber shouldn't be good at. They’re revealing that they don’t quite understand BDD and Cucumber’s role in it.
Cucumber is the world's most misunderstood collaboration tool and people need to hear this over and over again.
Patterns for Slick database applications (Skills Matter)
Slick is Typesafe's open source database access library for Scala. It features a collection-style API, compact syntax, and type-safe, compositional queries with explicit execution control. Community feedback helped us identify common problems developers face when writing Slick applications. This talk suggests particular solutions to these problems. We will look at reducing boilerplate, reusing code between queries, efficiently modeling object references, and more.
Scala eXchange 2013: Haoyi Li on Metascala, a tiny DIY JVM (Skills Matter)
Metascala is a tiny metacircular Java Virtual Machine (JVM) written in the Scala programming language. Metascala is barely 3000 lines of Scala, and is complete enough that it is able to interpret itself metacircularly. Being written in Scala and compiled to Java bytecode, the Metascala JVM requires a host JVM in order to run.
The goal of Metascala is to create a platform for experimenting with the JVM: a 3,000-line JVM written in Scala is probably much more approachable than the 1,000,000 lines of C/C++ which make up HotSpot, the standard implementation, and more amenable to implementing fun features like continuations, isolates or value classes. The 3,000 lines of code give you:
The bytecode interpreter, together with all the run-time data structures
A stack-machine to SSA register-machine bytecode translator
A custom heap, complete with a stop-the-world, copying garbage collector
Implementations of parts of the JVM's native interface
Although it is far from a complete implementation, Metascala already provides the ability to run untrusted bytecode securely (albeit slowly), since every operation which could potentially cause harm (including memory allocations and CPU usage) is virtualized and can be controlled. Ongoing work includes tightening of the security guarantees, improving compatibility and increasing performance.
Progressive F# Tutorials NYC: Dmitry Mozorov & Jack Pappas on Code Quotations ... (Skills Matter)
Code Quotations: Code-as-Data for F#
This tutorial will cover F# Code Quotations in-depth. You'll learn what Code Quotations are, how to use them, and where to apply them in your applications. We'll work through several real-world examples to highlight the important features -- and potential pitfalls -- of Code Quotations.
CukeUp NYC: Ian Dees on Elixir, Erlang, and Cucumberl (Skills Matter)
Elixir, Erlang, and Cucumberl
Elixir is a new Ruby-inspired programming language that uses the powerful concurrent machinery of Erlang behind the scenes. Cucumberl is a port of Cucumber to Erlang. Let's see what happens when we put them together.
In this talk, we'll discuss:
How Erlang's concurrency makes it easier to write robust programs
Elixir's approachable syntax
How to test Erlang and Elixir programs using Cucumberl
Attendees will walk away with a solid introduction to the principles of Erlang, and an appreciation of the way Elixir brings the joy of Ruby to the solidity of the Erlang runtime.
CukeUp NYC: Peter Bell on getting started with Cucumber.js (Skills Matter)
Ever wished you could use cucumber in your javascript apps? In this talk we'll look at the current state of play of cucumber js, when you should and shouldn't use it, and how to get started writing your step definitions in javascript.
Agile Testing & BDD eXchange NYC 2013: Jeffrey Davidson, Lav Pathak & Sam Ho... (Skills Matter)
In this engaging experience report, we will present 3 different views – Developer, Tester, Business Analyst – of implementing Acceptance Test Driven Development in a complex, data-driven domain. Hear how we used ATDD for building a ubiquitous language across the entire team, promoting faster feedback, and cultivating a culture where product owners were deeply invested in the quality of both every deliverable and the system as a whole.
Progressive F# Tutorials NYC: Rachel Reese & Phil Trelford on Try F# from zero... (Skills Matter)
In this tutorial, Phil and Rachel will introduce you to the Try F# samples giving you exposure to, and an understanding of, how F# tackles some real-world scenarios. We'll help you explore, generate, and just play around with code samples, as well as talk you through some of the key principles of F#. By the end of this session, you'll have gone from zero to data science in only a few hours!
Progressive F# Tutorials NYC: Don Syme keynote on F# in the open source world (Skills Matter)
F# is a powerful open-source language which Microsoft, other companies and the F# community all contribute to. In this talk, Don will discuss how the "F# space" has recently opened up significantly in interesting ways. F# now includes contributions that range from cloud IDE platforms, cloud compute frameworks, data interoperability components, cross-platform execution, Try F#, MonoDevelop, and even Emacs editor integration with surprising tooling support, as well as the Visual F# tools from Microsoft and the broader NuGet package ecosystem. Don will also talk about some of the latest contributions from Microsoft Research, including new type provider components for F#, and describe how his team works with the Visual F# team and other teams around Microsoft. There will also be demos of some fun new things that have been going on with F# at MSR and in the community.
Agile Testing & BDD eXchange NYC 2013: Gojko Adzic on a Bond villain guide to s... (Skills Matter)
Would you like to learn how to make your software testing practices more effective? And how to use your testing strategy to better capture and reflect customer requirements? Gojko Adzic takes a critical look at the effectiveness of current software testing practices and proposes strategies to make it much more effective.
Simon Peyton Jones: Managing parallelism (Skills Matter)
If you want to program a parallel computer, it obviously makes sense to start with a computational paradigm in which parallelism is the default (i.e. functional programming), rather than one in which computation is based on sequential flow of control (the imperative paradigm). And yet, and yet ... functional programmers have been singing this tune since the 1980s, but do not yet rule the world. In this talk I'll say why I think parallelism is too complex a beast to be slain at one blow, and how we are going to be driven, willy-nilly, towards a world in which side effects are much more tightly controlled than now. I'll sketch a whole range of ways of writing parallel programs in a functional paradigm (implicit parallelism, transactional memory, data parallelism, DSLs for GPUs, distributed processes, etc.), illustrating with examples from the rapidly moving Haskell community, and identifying some of the challenges we need to tackle.
6. HOW SERENDIPITY HELPS
• Many new inventions occur because related information crosses conventional boundaries, leaving its ghetto.
• Our lives are made richer by discovering ideas and experiences outside our comfort zones and habitual patterns.
• Serendipity accelerates information discovery by making new and unexpected connections.
9. WHO NEEDS SERENDIPITY?
• B2B sites - encourage businesses to find ways of collaborating they may never have thought of.
• Social sites - let people discover new friends and new interests.
• Collaborative software - find projects that could work together in unexpected ways.
• Document management - find documents that help you look at your work in a different way.
• Contact management - find new people you could do business with who might not be in a narrowly defined field.
20. GET CONNECTED
• Contextually isolated systems only show us information regarding a closed set of data and activities.
• Semantically isolated systems only show us information which is similar to other information.
• Content-connected systems show us data items that relate to each other, crossing contextual and semantic boundaries.
• Socially connected systems show us information regarding our friends and their activities, weakening contextual and semantic boundaries.
• Highly connected systems show us information with n degrees of separation and multiple paths across contextual and semantic boundaries.
37. RDBMS VS GRAPH
• Highly connected systems can be modelled relatively easily on an RDBMS, but adding new relationships creates complexity and must be planned in advance.
• Querying is easier for semantically and contextually isolated models on an RDBMS.
• Querying is extremely messy (indeed!) for highly connected models.
41. RDBMS VS GRAPH
• Multiple-hop queries are horrific under an RDBMS, in both performance pitfalls and legibility of queries.
• Graph databases love multiple-hop logic; one could say they thrive on it. It's much easier to find related items through arbitrary degrees of separation and across semantic barriers.
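The multi-hop idea above can be sketched with a plain in-memory adjacency list (the graph data is invented for illustration). A graph database performs this expansion natively, whereas an RDBMS would need one self-join per extra hop:

```python
from collections import deque

# Hypothetical social graph as an adjacency list.
graph = {
    "alice": ["bob"],
    "bob":   ["carol", "dave"],
    "carol": ["erin"],
    "dave":  [],
    "erin":  [],
}

def within_hops(start, max_hops):
    """Return every node reachable from start in at most max_hops hops (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop limit
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    seen.discard(start)
    return seen

print(sorted(within_hops("alice", 2)))  # ['bob', 'carol', 'dave']
```

Raising `max_hops` by one is a one-argument change here; in SQL it would mean another join (or a recursive CTE), which is exactly the legibility and performance pitfall the slide describes.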
43. WEIGHT & FILTER
• Proximity still matters: information should be closely connected, if not semantically or contextually related.
• Relevancy should relate to frequency.
• Filtering can be done manually by users choosing what to recommend or pass on.
• If possible, use customer feedback to adjust weighting.
44. RDMS VS GRAPH
• RDMS cannot categorise relationships independently of the
content for example ‘like’, ‘owns’, ‘has viewed’.
• RDMS cannot add meta-data to the relationship to help
ranking of the relevancy.
• Graph databases can do both these and can quickly calculate
the cost of traversing to an item of content.
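As a sketch of what typed, weighted relationships look like (the node names, relationship types and weights below are invented), each edge can carry a type plus properties, so ranking can use the relationship itself rather than only the content at either end:

```python
# Edges as (source, relationship_type, target, properties) tuples.
# Names and weights are hypothetical illustration data.
edges = [
    ("anna", "LIKES",      "doc1", {"weight": 0.9}),
    ("anna", "HAS_VIEWED", "doc2", {"weight": 0.2}),
    ("doc1", "CITES",      "doc3", {"weight": 0.5}),
]

def traversal_cost(path):
    """Total cost of a path: sum 1 - weight per hop, so strong links cost less."""
    return sum(1.0 - props["weight"] for _, _, _, props in path)

path = [edges[0], edges[2]]            # anna -LIKES-> doc1 -CITES-> doc3
print(round(traversal_cost(path), 2))  # 0.6
```

This is the shape a property-graph store gives you directly: relationship types ('LIKES', 'CITES') categorise edges independently of the nodes, and per-edge metadata feeds the weight-and-filter step.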
57. RE-TWEETS
• Re-tweets allow rapid dissemination of information beyond a limited social group; they cross semantic and contextual boundaries.
• Re-tweets can be (and often are) re-tweeted, allowing multiple hops.
• Other Twitter users act as the filters, and we further weight by reputation.
59. WHAT SERENDIPITY ISN’T!
• Random: random combinations of information are just noise. Putting Teflon on a dolphin's nose would not be a useful contribution to society. Don't confuse unexpected with random!
• Accidental: serendipity comes from an attentive, and often intuitive, mind receiving diverse information.
• Luck: serendipity is a cognitive process that creates new connections between previously unrelated concepts and realises the value in them.
60. THREE STEPS TO SERENDIPITY
• Remove isolation. Relationships are low cost and can be added to data at any point, so create them, and create as many as possible, ignoring contextual or semantic boundaries.
• Use multiple hops. Cross semantic and contextual boundaries when providing relevancy.
• Weight and filter. The value of the information found should relate to the route traversed. Allow users to manually pass on information to others.
61. CODING SERENDIPITY
How can we add serendipity into our systems?
• Information must be able to travel freely between users.
• Information should be able to travel multiple levels of indirection with ease.
• Information should have the maximum number of interconnections across semantic boundaries.
• Information relationships should be categorised and should potentially contain the metadata required for weighting.
62. HOW NEO4J HELPS
• Relationships are created trivially, at low cost, at any time, with no regard to semantic boundaries.
• Connected information over many hops can be retrieved quickly using Node#traverse or the Traversal framework.
• Relationships can have both types and properties, making weight and filter calculations easy.
63. TAKE AWAY
• Create more relationships.
• Let information cross contextual and semantic boundaries.
• Make sure relevancy is probabilistic, not deterministic.
• Serendipity is not accidental, random or lucky!
• The more heterogeneous and connected your data becomes, the more you should consider Neo4j.
65. AUTOMATIC WEIGHT & FILTER
• Sum the ‘weight’ of each relationship traversed to the node.
• Find a random number between 0 and that weight.
• Order the discovered nodes by this random value.
• Choose the nodes with the n lowest values.
• By using random numbers we increase serendipity without sacrificing relevance.
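The recipe above can be sketched directly (the node names and summed weights are hypothetical; lower weight means a cheaper, more relevant traversal path):

```python
import random

def weighted_serendipity_pick(node_weights, n, rng=random):
    """The slide's recipe: draw a uniform value in [0, weight] for each node,
    order by the draw, keep the n lowest. Low-weight (relevant) nodes usually
    win, but high-weight ones occasionally surface -- that's the serendipity."""
    draws = {node: rng.uniform(0, w) for node, w in node_weights.items()}
    return sorted(draws, key=draws.get)[:n]

# Hypothetical discovered nodes with summed traversal weights.
weights = {"doc_a": 0.4, "doc_b": 1.5, "doc_c": 3.0, "doc_d": 0.9}
print(weighted_serendipity_pick(weights, 2))
```

Note the key property: the randomness is bounded by the weight, so a closely related node can never draw worse than its own weight, while a distant node still gets a small but real chance of being shown.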
67. OTHER EXAMPLES
• Research papers are a semantically arranged collection of information and therefore create semantically isolated areas of information.
• A lending library is another semantically isolated collection of information.
• A project management website creates a contextually isolated set of information.
• The internet is a highly connected, disorganised information storage system, which leads to a fair amount of serendipity. How many interesting things have you 'stumbled upon' on the internet? But it still has a tendency toward semantic or contextual silos. There's still a lot of room for improvement.
Editor's Notes
We'll come back to these forms a little later.
So how can we encourage serendipity?
Semantically related information, such as science books, art books and cookery books, is unlikely to refer to each other, keeping the information isolated by its semantics. When these boundaries are crossed we get some of the inventions we saw earlier.
Contextually isolated information is separated by the context the information was created in; i.e. it belongs to a single user, team, company or project - anything that links information together into a closed network. When scientists, companies, teams and people communicate their work or interests, great things also happen.
The internet broke away from these two information ghettos by joining documents together, so our information could be connected.
We've now moved forward into the socially connected era, where our systems encourage the spread of information by users: we share, recommend and forward.
But we can go a stage further: highly connected systems need to connect not just information, but people and information in arbitrary combinations; furthermore, we need to allow this information to travel in real time across these links.
History shows that when we allow information to flow fast and freely in society, we see revolutions in science and spirituality. As our collective understanding increases, so does the welfare of the individual and society. So it is with information systems: by increasing the flow of information we increase the value to all those using it.
Whenever information doesn't flow, ignorance takes over, and clearly we all suffer for that.
So recommendation number one: increase connectivity.
But our storage systems affect how connected we make the world.
File-based systems basically encourage us to dump stuff together, but don't encourage us to think about how it interconnects. So we end up seeing the world as ....
Relational databases help us to organise and connect related information in a highly formal manner, like ....
Whereas graph databases are more like ....
We also need data to escape its ghettos, and one way we can do this is to allow information to travel arbitrary degrees of separation, for example like emails or tweets - not just manually, as in viral marketing, but also automatically, in status updates, suggested content, etc.
We see this already in 'recommend a friend'....
Or related documents, but the key here is to allow multiple hops across all boundaries, semantic and contextual.
Multiple-hop queries are horrific under an RDBMS, in both performance pitfalls and legibility of queries. This is the main reason RDBMS systems rarely help the spread of information by automatic means and rely on users passing on information instead.
But we don't want just any old information; we still need to filter according to relevancy.
The key, I believe, when automating relevancy is not to use relevancy as a fixed, one-off judgement on whether something is visible or not, but rather as an indicator of the likelihood that the information will be visible.
In a semantically isolated example, books would be written about how Teflon helps in fishing. Meanwhile, frying pans are only of interest to the catering industry and would not have references to fishing equipment.
In a contextually isolated system, Marc would have been busy using Teflon for his fishing equipment and never mentioned it to his wife.
Luckily they talked to each other.
Now in this system, which was not at this point highly connected, information was able to travel multiple hops as Marc discussed his fishing equipment and his wife saw the potential application.
Now we have a highly connected system that has crossed social and semantic boundaries: how long did it take before we had Teflon baking trays, cake tins, etc.? Once a semantic boundary has been broken, the process accelerates and the speed at which other boundaries are broken increases.
Re-tweets traverse a graph with 'n' degrees of separation. I can be looking at how to increase the viral nature of my new startup when I notice a tweet about the use of landing pages - which leads me to write a viral landing page. Such a collaboration is serendipitous: it is unintentional but beneficial and rewarding.
Re-tweets allow rapid dissemination of information beyond a limited social group. Because of the five degrees of separation on Twitter, a single tweet can reach the entire 200-million user base within minutes, as shown by news of Osama Bin Laden's death.
Please can you swap forms with one other person.... Now the information on those forms is closely related to you, because most of the people in the room have similar backgrounds. However, it's outside of your pre-defined social group and the common semantic links between the people here. For your homework, I'd like you to watch that movie, listen to that music and take a look at that technology!
-- Weight and Filter -> whether they recommend, make favourite lists or send as a message, maintain the source of the information for future automatic recommendations. Keep it connected.