SiLCC Overview

SiLCC is a cloud-based service for parsing text and extracting relevant keywords. To use it, you must first apply for an API key. Input the API key into your application and then push content to our server. As we receive your content, we parse it, extract relevant 'tags', and send them back to your app. From there, user interaction with those tags (editing or removing them) helps improve our algorithms.
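
For illustration, that round trip might look like the following minimal sketch. The endpoint URL, parameter names, and response shape are assumptions for this sketch, not the documented SiLCC API.

    import requests  # third-party HTTP client

    API_KEY = "your-api-key"  # obtained by applying for a key first
    TAG_URL = "http://silcc.example.org/api/tag"  # hypothetical endpoint

    def tag_text(text):
        """Push a snippet to the service and return the extracted tags."""
        resp = requests.post(TAG_URL, data={"api_key": API_KEY, "text": text})
        resp.raise_for_status()
        return resp.json().get("tags", [])

    print(tag_text("PBS addresses mental health needs in the Haiti earthquake aftermath"))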

SiLCC also features robust glossaries for Twitter pico-formats and SMS txtSpeak. It specializes in the semantic tagging of content that is 280 characters or fewer.

Usage Rights

CC Attribution-NonCommercial-ShareAlike License

Presentation Transcript

  • SWIFT RIVER: Verifying and Filtering the Crowd. An Ushahidi Initiative by Neville Newey and Jon Gosier
  • How do we manage it all?
  • SWIFT IS THE FILTER
  • SWIFTRIVER IS FOR... •Improving information findability •Surfacing content you didn't know you were looking for •Understanding media from other parts of the world (translation) •Making urgent data more discoverable (structured, published and accessible) •Verifying eyewitness accounts •Using location as context •Expanding the grassroots reporting network •Preserving information (archiving)
  • SILCC SwiftRiver Language Computation Core Web Services
  • WHAT IS SILCC? •What is SiLCC? •Where does it fit in? •Goals •Limitations •Status
  • WHAT IS SILCC? •Swift Language Computation Component •One of the SwiftRiver Web Services •Open Web API •Semantic Tagging of Short Text •Multilingual •Multiple sources (Twitter, email, SMS, blogs, etc.) •Active Learning capability •Open Source •Easy to Deploy, Modify and Run
  • Swiftriver SiLCC Dataflow •SiSLS (Swiftriver Source Library Service): Content Items coming from the SiSLS, where SiSLS integration is enabled, have global trust values added to the object model •The text of the content is sent to the SiLCC (Swiftriver Language Computational Core); an API key is sent along with the text to ensure that the SiLCC is not open to any malicious usage •Using NLP, the SiLCC extracts nouns and other keywords from the text; there is still a bit of ambiguity around what the NLP should extract, but at its most simple, all the nouns would be a good start •The SiLCC sends back a list of tags that are added to the Content Item, along with any tags extracted from the source data by the parser •SLISa (Swiftriver Language Improvement Service): although the NLP tags have now been applied, the SLISa is responsible for applying instance-specific tagging corrections
  • OUR GOALS •Simple Tagging of short snippets of text •Rapid tagging for high volume environments •Simple API, easy to use •Learns from user feedback •Routing of messages to upstream services •Semantic Classification •Sorts rapid streams into buckets •Clusters like messages •Visual effects •Cross-referencing
  • WHAT IT’S NOT •Does not do deep analysis of text •Only identifies words within original text
  • HOW DOES IT WORK? •Step 1: Lexical Analysis •Step 2: Parsing into constituent parts •Step 3: Part of Speech tagging •Step 4: Feature extraction •Step 5: Compute using feature weights •Let's examine each one in turn...
  • STEP 1: LEXICAL ANALYSIS •For news headlines and email subjects this is trivial: just split on spaces. •For Twitter this is more complex...
  • TWEET ANALYSIS •Tweets are surprisingly complex •Only 140 characters but many features •Emergent features from the community (e.g. hashtags) •Let's take a look at a typical tweet...
  • TWEET ANALYSIS The typical Tweet: “RT @directrelief: RT @PIH: PBS @NewsHour addresses mental health needs in the aftermath of the #Haiti earthquake #health #earthquake... http://bit.ly/bNhyK6” •RT indicates a “re-tweet” •@name indicates who the original tweeter was •Multiple embedded retweets •Hashtags (e.g. #Haiti) can play two roles, as a tag and as part of the sentence
  • TWEET ANALYSIS 2 •Two or more hashtags can appear within a tweet (e.g. #health and #earthquake) •Continuation dots "..." indicate that there was more text that didn't fit into the 140-character limit somewhere in its history •URLs: many tweets contain one or more URLs. As we can see, this simple tweet contains no fewer than 7 different features, and that's not all!
  • TWEET ANALYSIS 3 We want to break up the tweet into the following parts: { 'text': ['PBS addresses mental health needs in the aftermath of the Haiti earthquake'], 'hashtags': ['#Haiti', '#health', '#earthquake'], 'names': ['@directrelief', '@PIH', '@NewsHour'], 'urls': ['http://bit.ly/bNhyK6'], }
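
A minimal sketch of this parsing step using regular expressions (the real TweetParser in the distribution may differ in detail; note that this naive version keeps trailing hashtag words inside the text rather than distinguishing the two roles a hashtag can play):

    import re

    def parse_tweet(tweet):
        """Split a tweet into plain text, hashtags, @names and URLs."""
        parts = {
            'hashtags': re.findall(r'#\w+', tweet),
            'names': re.findall(r'@\w+', tweet),
            'urls': re.findall(r'https?://\S+', tweet),
        }
        # Drop everything that is not grammatical text: URLs, RT markers,
        # @names and continuation dots.
        text = re.sub(r'https?://\S+|\bRT\b|@\w+|\.{3,}', ' ', tweet)
        text = text.replace('#', '')  # keep the hashtag word, drop the '#'
        parts['text'] = [' '.join(text.split()).strip(' :')]
        return parts

    tweet = ("RT @directrelief: RT @PIH: PBS @NewsHour addresses mental "
             "health needs in the aftermath of the #Haiti earthquake "
             "#health #earthquake... http://bit.ly/bNhyK6")
    print(parse_tweet(tweet))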
  • TWEET ANALYSIS 4 Why do we want to break the tweet into parts (parsing)? •Because we want to further process the grammatically correct English text •Part-of-speech tagging would otherwise be corrupted by tokens it cannot recognize (e.g. URLs, hashtags, @names, etc.) •We want to save the hashtags for later use •Many of the features are irrelevant to the task of identifying tags (e.g. dots, punctuation, @names, RT)
  • TWEET ANALYSIS 5 •We now take the "text" portion of the tweet and perform part-of-speech tagging on it •After part-of-speech tagging, we perform feature extraction •Features are then passed through the keyword classifier, which returns a list of keywords/tags •Finally we combine these tags with the hashtags we saved earlier to give the complete tag set
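
This end-to-end sketch strings those steps together: parse, part-of-speech tag the text portion, pick keyword candidates, then merge in the saved hashtags. A simple noun filter stands in for the trained keyword classifier; NLTK's tokenizer and tagger models must be downloaded first.

    import nltk  # pip install nltk; also download 'punkt' and the POS tagger data

    def tag_tweet(tweet):
        """Parse, POS-tag, select candidate keywords, merge hashtags."""
        parts = parse_tweet(tweet)              # from the parsing sketch above
        words = nltk.word_tokenize(parts['text'][0])
        tagged = nltk.pos_tag(words)            # [(word, POS tag), ...]
        # Simplest stand-in for the keyword classifier: keep the nouns.
        keywords = [w for w, pos in tagged if pos.startswith('NN')]
        hashtags = [h.lstrip('#') for h in parts['hashtags']]
        return sorted(set(keywords) | set(hashtags))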
  • HEADLINE AND EMAIL SUBJECT ANALYSIS •This is much simpler to do •It's a subset of the steps in Tweet Analysis •There is no parsing since there are no hashtags, @names, etc.
  • FEATURE EXTRACTION • For the active learning algorithm we need to extract features to use in classification • These features should be subject/domain independent • We therefore never use the actual words as features • Using words would, for example, give artificially high weights to words such as "earthquake" • We don't want these artificial weights because we can't foresee future disasters and we want classification to be as generic as possible • The use of training sets does allow for domain customization where necessary
  • FEATURE EXTRACTION • Capitalization of individual words: either first-letter caps or all caps; this is an important indicator of proper nouns or other important words that make good tag candidates • Position in text: tags tend to appear more often near the beginning of the text • Part of speech: nouns and proper nouns are particularly important, but so are some adjectives and adverbs • Capitalization of the entire text: sometimes the whole text is capitalized, and this should reduce the overall weighting of other features • Length of the text: in shorter texts the words are more likely to be tags • The parts of speech of the previous and next words (effectively this means we are using trigrams, i.e. a window of 3)
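
As a sketch, the per-word feature set just listed might be computed like this; the feature names are illustrative, and note that the word itself is deliberately absent.

    def extract_features(words, pos_tags, i):
        """Domain-independent features for the word at index i."""
        w = words[i]
        return {
            'first_cap': w[:1].isupper(),              # proper-noun hint
            'all_caps': w.isupper(),
            'position': i / max(len(words), 1),        # tags favour the start
            'pos': pos_tags[i],                        # part of speech
            'prev_pos': pos_tags[i - 1] if i > 0 else '<s>',
            'next_pos': pos_tags[i + 1] if i + 1 < len(pos_tags) else '</s>',
            'text_all_caps': all(x.isupper() for x in words if x.isalpha()),
            'text_len': len(words),                    # shorter => more tags
        }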
  • TRAINING • Requires user-reviewed examples • Lexical analysis, parsing and feature extraction are run on the examples • Multinomial naïve Bayes algorithm • NB: the granularity we are classifying at is the word level • For each word in the text, we classify it as either a keyword or not • This has the pleasant side effect of providing several training examples from each user-reviewed text • Even with fewer than 50 reviewed texts the results are comparable to the simple approach of using nouns only
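
A sketch of that word-level training loop, reusing extract_features from the previous sketch. NLTK's naive Bayes classifier is used here as a stand-in for the multinomial variant the slides describe; `reviewed` pairs each tokenized text with its user-approved keyword set.

    import nltk

    def train_word_classifier(reviewed):
        """Each word of each reviewed text becomes one training example,
        labelled True if the user kept it as a keyword."""
        examples = []
        for words, keywords in reviewed:
            pos_tags = [pos for _, pos in nltk.pos_tag(words)]
            for i, word in enumerate(words):
                feats = extract_features(words, pos_tags, i)
                examples.append((feats, word in keywords))
        return nltk.NaiveBayesClassifier.train(examples)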
  • ACTIVE LEARNING •The API also provides a method for users to send back corrected text •The corrected text is saved and then used in the next iteration of training •User may optionally specify a corpus for the example to go into •Training can be performed using any combination of corpora
  • DEVELOPER FRIENDLY •Two levels of API, the web API and the internal Python API •Either one may be used but most users will use the web API •Design is highly modular and maintainable •For very rapid backend processing the native Python API can be used
  • PYTHON CLASSES Most of the classes that make up the library are divided into three types: 1) Tokenizers 2) Parsers 3) Taggers All three types have consistent APIs and are interchangeable.
  • PYTHON API •A tagger calls a parser •A parser calls a tokenizer •Output of the tokenizer goes into the parser •Output of the parser goes into the tagger •Output of the tagger goes into the user!
  • CLASSES • BasicTokenizer – This is used for splitting basic (non-tweet) text into individual words • TweetTokenizer – This is used to tokenize a tweet; it may also be used to tokenize plain text, since plain text is a subset of tweets • TweetParser – Calls the TweetTokenizer and then parses the output (see previous example) • TweetTagger – Calls the TweetTokenizer, then tags the output of the text part and adds the hashtags • BasicTagger – Calls the BasicTokenizer and then tags the text; should only be used for non-tweet text; uses simple part of speech to identify tags • BayesTagger – Same as BasicTagger but uses weights from the naïve Bayes training algorithm
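
Putting the classes together, usage might look like the following. The class names come from the slides, but the module path and method signatures are assumptions in this sketch.

    # Hypothetical import path; see http://github.com/appfrica/silcc
    from silcc import TweetTagger, BasicTagger

    tweet_tagger = TweetTagger()
    print(tweet_tagger.tag("RT @PIH: PBS addresses needs after the #Haiti earthquake"))

    basic_tagger = BasicTagger()  # for non-tweet text such as headlines
    print(basic_tagger.tag("Earthquake relief efforts continue in Port-au-Prince"))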
  • DEPENDENCIES •Part-of-speech tagging is currently performed by the Python NLTK •The Web API uses the Pylons web framework
  • CURRENT STATUS •The Tag method of the API is ready for use; individual deployments can choose between the BasicTagger and the BayesTagger •The Tell method (for user feedback) will be ready by the time you read this! •Training is possible on corpora of tagged data in .csv format (see examples in the distribution)
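
The Tell method's feedback loop might be exercised like this; as with the first sketch, the endpoint and parameter names are assumptions, not the documented API.

    import requests

    TELL_URL = "http://silcc.example.org/api/tell"  # hypothetical endpoint

    # Send back the user-corrected tags so the next training iteration
    # can learn from them; the corpus field is optional per the slides.
    requests.post(TELL_URL, data={
        "api_key": API_KEY,  # from the first sketch
        "text": "PBS addresses mental health needs after the Haiti earthquake",
        "tags": "Haiti,earthquake,mental health",
        "corpus": "disaster-reports",  # hypothetical corpus name
    })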
  • CURRENT LIMITATIONS •Only English text is supported at the moment •Tags are always one of the words in the supplied text, i.e. they can never be a word not in the supplied text •Very few training examples exist at the moment
  • FUTURE WORK •Multilingual: use non-English part-of-speech taggers •UTF-8 compatibility •Experiment with different learning algorithms (e.g. neural networks) •Perform external text analysis (e.g. if there is a URL, analyze the text at the URL as well as in the tweet) •Allow users to specify the required density of tags
  • SWIFT RIVER jon@ushahidi.com http://swift.ushahidi.com http://github.com/appfrica/silcc An Ushahidi Initiative by Neville Newey and Jon Gosier