Cloud1: the set up for the Cloud of Knowing project in 2009
 

The first deck in the Cloud series, for a meeting called Cloud1. This presentation lays down the principles for the open source Cloud of Knowing project. Chris Arning presented a deck about the semantic web. There were four of us who went! It was an inauspicious start for a group that drew in some seriously bright people and got asked some fundamental questions. There was no budget and no confidentiality clause; any presentation made was put on a webjam so anyone could see it. That's how the project began.

Presentation Transcript

  • Cloud of Knowing set up
  • Analysis and Interpretation: the Cloud of Knowing and how it started
  • Text Analytics and Research: can they be brought together?
  • E-Anthropology / E-ethnography: is it bricolage, i.e. an acceptable methodology? Or is it picking dead stuff 'because it looks interesting'?
  • Textual analysis on the web – how good is it? RSS feeds; tagging; keyword searches; dodgy social media measurement techniques, e.g. deduct the neutral tweets and subtract the negative tweets from the positive ones. That's the score!! (A sketch of this scoring appears after this list.)
  • Some questions about the originators: who are they, where are they from, and how did they come to post this content? What did they mean? Which audience were they writing for? What is the context? Are they being paid? Is this their genuine opinion, or are they stooges – marketing constructs? Who do they represent?
  • Themes to resolve: how to source and structure textual web content (and other media forms as well); how to validate it; if we worked with other web users, what would we get them to do? How do we sort and grade content? How do we sort and grade web users?
  • Grab bag of issues: sourcing – RSS feeds; hunting – recruiting web users to go tagging; grading – recruiting web users to grade what comes in; profiling – using scoring models as in direct marketing profiles (a hypothetical sketch follows this list).
  • Open source approach: work on it together, because the issues are too big for any one research agency.
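
To make the objection concrete, here is a minimal sketch of the scoring method the "dodgy techniques" bullet mocks. This is Python with hypothetical labels, not code from the project: neutral tweets are simply discarded and the "score" is positives minus negatives, with no weighting for reach, context, or authorship.

    from collections import Counter

    def naive_sentiment_score(labels):
        """labels: iterable of 'positive' / 'negative' / 'neutral' strings."""
        counts = Counter(labels)
        # Neutral tweets are thrown away; the "score" is the raw difference.
        return counts["positive"] - counts["negative"]

    # Two positives minus one negative; the two neutrals count for nothing.
    print(naive_sentiment_score(
        ["positive", "neutral", "positive", "negative", "neutral"]))  # 1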
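
The "profiling" item above borrows scoring models from direct marketing. As a purely hypothetical sketch of what such a model might look like for grading web users (the attribute names and weights are invented for illustration, not taken from the project):

    # Hypothetical weights for grading a web user, in the style of a
    # direct-marketing scoring model.
    WEIGHTS = {
        "items_tagged": 2.0,     # volume of content the user has tagged
        "items_graded": 1.5,     # how much incoming content they have graded
        "agreement_rate": 30.0,  # fraction of grades matching the consensus
    }

    def profile_score(user):
        """user: dict mapping attribute name -> numeric value."""
        return sum(w * user.get(attr, 0) for attr, w in WEIGHTS.items())

    user = {"items_tagged": 12, "items_graded": 30, "agreement_rate": 0.8}
    print(profile_score(user))  # 12*2.0 + 30*1.5 + 0.8*30.0 = 93.0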