Measurement and modeling of the web and related data sets
Presentation Transcript

  • IMA Tutorial (part II): Measurement and modeling of the web and related data sets. Andrew Tomkins, IBM Almaden Research Center, May 5, 2003.
  • Setup
    • This hour: data analysis on the web
    • Next hour: probabilistic generative models, particularly focused on models that generate distributions that are power laws in the limit
  • Context
    • Data Analysis on the web…
    • … as a hyperlinked corpus
    • Note: Many areas of document analysis are highly relevant to the web, and should not be ignored (but will be):
      • Supervised/unsupervised classification (Jon – combinatorial side)
      • Machine learning (Jon – a little)
      • Information retrieval (Jon – dimensionality reduction)
      • Information extraction
      • NLP
      • Discourse analysis
      • Relationship induction
      • etc
  • Focus Areas
    • Web Measurement
    • Self similarity on the web
    • Extraction of information from large graphs
    • A word on evolution
  • One view of the Internet: Inter-Domain Connectivity
    • Core: maximal clique of high-degree nodes
    • Shells: nodes in 1-neighborhood of core, or of previous shell, with degree > 1
    • Legs: 1-degree nodes
    (Figure: the core surrounded by shells 1, 2, 3.) [Tauro, Palmer, Siganos, Faloutsos, Global Internet 2001]
  • Another view of the web: the hyperlink graph
    • Each static html page = a node
    • Each hyperlink = a directed edge
    • Currently ~10^10 nodes (mostly junk), ~10^11 edges
  • Getting started – structure at the hyperlink level
    • Measure properties of the link structure of the web.
    • Study a sample of the web that contains a reasonable fraction of the entire web.
    • Apply tools from graph theory to understand the structure.
    [Broder, Kumar, Maghoul, Raghavan, Rajagopalan, Stata, Tomkins, Wiener, 2001]
  • Terminology
    • SCC – strongly connected component
    • WCC – “weakly connected component” – connected component in the underlying undirected graph
  • Data
    • Altavista crawls, up to 500M pages
    • Ran strong and weak connected component algorithms
    • Ran random directed breadth-first searches from 1000 starting nodes, both forwards and backwards along links
  • Breadth-first search from random starts
    • How many vertices are reachable from a random vertex?
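A minimal sketch of this experiment in Python, using a toy adjacency-list graph as a stand-in for the crawl; `bfs_reachable` and the sample graph are illustrative, not the paper's code:

```python
from collections import deque
import random

def bfs_reachable(graph, start):
    """Return the set of nodes reachable from `start` via directed BFS."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Toy adjacency lists standing in for the crawl graph.
graph = {0: [1, 2], 1: [3], 2: [3], 3: [0], 4: [0]}
nodes = list(graph)

# What fraction of the graph is reachable from a random start, on average?
fractions = [len(bfs_reachable(graph, random.choice(nodes))) / len(nodes)
             for _ in range(1000)]
print(sum(fractions) / len(fractions))
```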
  • A picture of (~200M) pages: the "bow tie" figure.
  • Some distance measurements
    • Pr[ u reachable from v ] ~ 1/4
    • Max distance between 2 SCC nodes: 28
    • Max distance between 2 nodes (if there is a path) > 900
    • Avg distance between 2 SCC nodes: 16
  • Facts (about the crawl).
    • Indegree and Outdegree distributions satisfy the power law. Consistent over time and scale.
    The distribution of indegrees on the web follows a power law: a heavy-tailed distribution with many high-indegree pages (e.g., Yahoo!).
  • Analysis of the power law: Pr[page has k inlinks] ≈ k^(-2.1), and hence, as a corollary, Pr[page has > k inlinks] ≈ k^(-1.1). Pr[page has k outlinks] ≈ k^(-2.7).
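A hedged sketch of how such an exponent is typically read off: bin the degrees and fit log(count) against log(degree). Synthetic Pareto-ish degrees stand in for a real crawl, and the naive log-log fit (known to be biased) is shown only because it mirrors the plots used in these studies:

```python
import math, random
from collections import Counter

# Synthetic indegrees with Pr[degree = k] ~ k^(-2.1), via inverse-transform
# sampling; a stand-in for degrees measured on a real crawl.
alpha = 2.1
degrees = [int((1 - random.random()) ** (-1 / (alpha - 1)))
           for _ in range(100_000)]

# Least-squares fit of log(count) against log(degree).
pts = [(math.log(k), math.log(c)) for k, c in Counter(degrees).items()]
n = len(pts)
sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(f"fitted exponent: {-slope:.2f}")   # should land near 2.1
```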
  • Component sizes.
    • Component sizes are distributed by the power law.
  • Other observed power laws in the web
    • Depths of URLs
    • Sizes of sites
    • Eigenvalues of adjacency matrix of hyperlink graph [Mihail and Papadimitriou shed some light here]
    • Many different traffic measures
    • Linkage between hosts and domains
    • Many of the above measures on particular subsets of the graph
    [Faloutsos, Faloutsos, Faloutsos 99] [Bharat, Chang, Henzinger, Ruhl 02]
  • More Characterization: Self-Similarity
  • Ways to Slice the Web
    • Domain (*.it)
    • Host (www.ibm.com)
    • Geography (pages with a geographical reference in the Western US)
    • Content
      • Keyword: Math, subdivided by Math Geometry
      • Keyword: MP3, subdivided by MP3 Napster
    We call these slices “Thematically Unified Communities”, or TUCs
  • Self-Similarity on the Web
    • Pervasive: holds for all reasonable characteristics
    • Robust: holds for all reasonable slices
    • “Theorem:”
      • TUCs share properties with the web at large
      • TUCs are linked by a “navigational backbone”
  • In particular…
    • All TUCs have:
      • Power laws for degree, SCC, and WCC distributions
      • Similar exponents for power laws
      • Similar “bow tie” structure
      • Large number of dense subgraphs
  • Is this surprising?
    • YES (for downsampling general graphs). Example: a directed cycle has one SCC containing all n nodes
    • Remove any nonzero fraction of its edges, and the graph shatters into n strongly connected components of size 1
    • Generally: a random subset of n^(1/2) nodes in a graph with O(n) edges retains only a constant number of edges in expectation, since each edge survives with probability ~1/n
  • A structural explanation
    • Each TUC has a “bow tie” – how do they relate?
  • The Navigational Backbone Each TUC contains a large SCC that is well-connected to the SCCs of other TUCs
  • Information Extraction from Large Graphs
  • Overview. (Figure: the WWW distilled into knowledge bases KB1, KB2, KB3.) Goal: create higher-level "knowledge bases" of web information for further processing. [Kumar, Raghavan, Rajagopalan, Tomkins 1999]
  • Many approaches to this problem
    • Databases over the web:
      • WebSQL, Lore, ParaSite, etc.
    • Data mining
      • Apriori, query flocks, etc.
    • Information foraging
    • Community extraction
      • [Lawrence et al]
    • Authority-based search
      • HITS, and variants
  • General approach
    • It’s hard (though getting easier) to analyze the content of all pages on the web
    • It’s easier (though still hard) to analyze the graph
    • How successfully can we extract useful semantic knowledge (i.e., community structure) from links alone?
  • Web communities. (Figure: a Fishing community, including Outdoor Magazine and Bill's Fishing Resources, and a Linux community, including Linux Links and the LDP.) Different communities appear to have very different structure.
  • But both contain a common "footprint": two pages that each point to the same three other pages.
  • Communities and cores. Example: K_{2,3}. Definition: a "core" K_{i,j} consists of i left nodes, j right nodes, and all left-to-right edges. Critical facts: (1) almost all communities contain a core [expected]; (2) almost all cores betoken a community [unexpected].
  • Other footprint structures Newsgroup thread Web ring Corporate partnership Intranet fragment
  • Subgraph enumeration
    • Goal: Given a graph-theoretic "footprint" for structures of interest, find ALL occurrences of these footprints.
  • Enumerating cores. Inclusion/exclusion pruning: a node a belongs to a K_{2,3} with right side b1, b2, b3 if and only if some other node also points to b1, b2, b3. Preprocessing: clean the data by removing mirrors (true and approximate), empty pages, too-popular pages, and nepotistic pages. Postprocessing: when no more pruning is possible, finish using database techniques. (A toy version of the enumeration appears below.)
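A minimal in-memory sketch of K_{2,3} enumeration with the outdegree-based elimination step; the real computation streamed over a pruned multi-hundred-million-page crawl, so `k23_cores` below is only a toy illustration:

```python
from collections import defaultdict
from itertools import combinations

def k23_cores(edges):
    """Enumerate K_{2,3} cores in a directed graph given as an edge list:
    pairs of nodes whose out-neighborhoods share >= 3 common targets."""
    out = defaultdict(set)
    for u, v in edges:
        out[u].add(v)
    # Elimination: a node with outdegree < 3 can never be a left node.
    candidates = [u for u in out if len(out[u]) >= 3]
    cores = []
    for a, b in combinations(candidates, 2):
        common = out[a] & out[b]
        for rights in combinations(sorted(common), 3):
            cores.append(((a, b), rights))
    return cores

edges = [("p1", "x"), ("p1", "y"), ("p1", "z"),
         ("p2", "x"), ("p2", "y"), ("p2", "z"), ("p2", "w")]
print(k23_cores(edges))   # [(('p1', 'p2'), ('x', 'y', 'z'))]
```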
  • Results for cores. (Figure: number of K_{i,j} cores found, for i = 3..6 and j = 3..9, by elimination/generation and during postprocessing; counts run into the tens of thousands.)
  • The cores are interesting: (1) implicit communities are defined by cores; (2) there are an order of magnitude more of these (10^5+); (3) a core can be grown into its community with further processing.
    Explicit communities:
    • Yahoo!, Excite, Infoseek
    • webrings
    • news groups
    • mailing lists
    Implicit communities:
    • Japanese elementary schools
    • Turkish student associations
    • oil spills off the coast of Japan
    • Australian fire brigades
  • Elementary Schools in Japan
    • The American School in Japan
    • The Link Page
    • Kids' Space
    • KEIMEI GAKUEN Home Page ( Japanese )
    • Shiranuma Home Page
    • fuzoku-es.fukui-u.ac.jp
    • welcome to Miasa E&J school
    • http://www...p/~m_maru/index.html
    • fukui haruyama-es HomePage
    • Torisu primary school
    • goo
    • Yakumo Elementary,Hokkaido,Japan
    • FUZOKU Home Page
    • Kamishibun Elementary School...
    • schools
    • LINK Page-13
    • 100 Schools Home Pages (English)
    • K-12 from Japan 10/...rnet and Education )
    • http://www...iglobe.ne.jp/~IKESAN
    • Koulutus ja oppilaitokset (Finnish: "education and educational institutions")
    • TOYODA HOMEPAGE
    • Education
    • Cay's Homepage (Japanese)
    • UNIVERSITY
  • So…
    • Possible to extract order-of-magnitude more communities than currently known.
    • Few (4%) of these appear coincidental.
    • Entirely automatic extraction.
    • Open question: how to use implicit communities?
  • A word on evolution
  • A word on evolution
    • Phenomenon to characterize: A topic in a temporal stream occurs in a “burst of activity”
    • Model the source as a multi-state automaton
    • Each state has certain emission properties
    • Transitions between states are governed by a Markov model
    • Determine most likely underlying state sequence over time, given observable output
    [Kleinberg02]
  • Example: a two-state model over a message stream ("I've been thinking about your idea with the asparagus…", "Uh huh", "I think I see…", …). State 1 has a very low output rate; state 2 has a very high output rate; transitions between them carry small probabilities (0.005 and 0.01 on the slide). Each time interval is weighted by how likely its output is under each state (the slide's Pr[1] and Pr[2] values), and the most likely "hidden" state sequence, here 2 2 2 1 1 1 1, marks the burst. (A minimal decoder is sketched below.)
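Kleinberg's automaton has an infinite chain of states; a two-state Viterbi decoder conveys the idea. This sketch assumes Poisson emissions and an illustrative switch cost; `viterbi_two_state` and all its numbers are ours, not the paper's:

```python
import math

def viterbi_two_state(counts, rates=(0.5, 5.0), switch_cost=4.0):
    """Most likely hidden state sequence for a two-state burst automaton.
    State 0 emits messages at a low Poisson rate, state 1 at a high rate;
    switching states pays `switch_cost` (the -log of a small transition
    probability)."""
    def nll(k, lam):   # -log Pr[k events | Poisson rate lam]
        return lam - k * math.log(lam) + math.lgamma(k + 1)

    best = [nll(counts[0], rates[0]), nll(counts[0], rates[1])]
    back = []
    for k in counts[1:]:
        step, new = [], []
        for s in (0, 1):
            stay, switch = best[s], best[1 - s] + switch_cost
            step.append(s if stay <= switch else 1 - s)
            new.append(min(stay, switch) + nll(k, rates[s]))
        back.append(step)
        best = new
    state = 0 if best[0] <= best[1] else 1
    seq = [state]
    for step in reversed(back):      # trace the optimal path backwards
        state = step[state]
        seq.append(state)
    return seq[::-1]

print(viterbi_two_state([0, 1, 0, 7, 9, 8, 1, 0]))
# -> a run of 1s over the high-count intervals: the "burst"
```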
  • More bursts
    • Infinite chain of increasingly high-output states
    • Allows hierarchical bursts
    • Example 1: email messages
    • Example 2: conference titles
  • Integrating bursts and graph analysis [KNRT03]. Example: Wired magazine publishes an article on weblogs that impacts the tech community; Newsweek then publishes an article that reaches the population at large, responding to the emergence and triggering mainstream adoption. (Figure: the number of blog pages belonging to a community, the number of blog communities, and the number of communities automatically identified as exhibiting "bursty" behavior, a measure of the cohesiveness of blogspace.)
  • IMA Tutorial (part III): Generative and probabilistic models of data. May 5, 2003.
  • Probabilistic generative models
    • Observation: These distributions have the same form:
      • Fraction of laptops that fail catastrophically during tutorials, by city
      • Fraction of pairs of shoes that spontaneously de-sole during periods of stress, by city
    • Conclusion: The distribution arises because the same stochastic process is at work, and this process can be understood beyond the context of each example
  • Models for Power Laws
    • Power laws arise in many different areas of human endeavor, the “hallmark of human activity”
    • (they also occur in nature)
    • Can we find the underlying process (processes?) that accounts for this prevalence?
  • An Introduction to the Power Law
    • Definition: a distribution is said to have a power law if Pr[X >= x] ≈ c·x^(-α)
    • Normally: 0 < α <= 2 (Var(X) infinite)
    • Sometimes: 0 < α <= 1 (Mean(X) infinite)
    (Figure: an exponentially-decaying distribution versus a power law distribution.)
  • Early Observations: Pareto on Income
    • [Pareto1897] observed that the random variable I denoting the income of an individual has a power law distribution
    • More strongly, he observed that Pr[X > x] = (x/k)^(-α) for x >= k
    • For the density function f, note that ln f(x) = (-α - 1)·ln(x) + c for a constant c
    • Thus, in a plot of log(value) versus log(probability), power laws display a linear tail, while Pareto distributions are linear everywhere.
  • Early Observations: Yule/Zipf
    • [Yule26] observed (and explained) power laws in the context of number of species within a genus
    • [Zipf32] and [Estoup16] studied the relative frequency of words in natural language, beginning a cottage industry that continues to this day.
    • A “Yule-Zipf” distribution is typically characterized by rank rather than value:
      • The i-th most frequent word in English occurs with probability proportional to 1/i.
    • This characterization relies on finite vocabulary
  • Early Observations: Lotka on Citations
    • [Lotka25] presented the first occurrence of power laws in the context of graph theory, showing a power law for the indegree of the citation graph
  • Ranks versus Values
    • Commonly encountered phrasings of the power law in the context of word counts:
      • Pr[word has count >= W ] has some form
      • Number of words with count >= W has some form
      • The frequency of the word with rank r has some form
    • The first two forms are clearly identical.
    • What about the third?
  • Equivalence of rank versus value formulation
    • Given: the number of words occurring t times is ~ t^(-β)
    • Approach:
      • Consider the single most frequent word, with count T
      • Characterize a word occurring t times in terms of T
      • Approximate the rank of words occurring t times by counting the number of words occurring at each more frequent count
    • Conclusion: the rank-j word occurs ≈ (cj + d)^(-γ) times for constants c, d, γ: a power law in rank
    • But... high ranks correspond to low values, so one must keep the “head” and the “tail” straight
    [Bookstein90, Adamic99]
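A quick numeric check of the equivalence, with illustrative choices β = 2.5 and C = 10^6 for the count distribution:

```python
# If the number of distinct words with count t is C * t^-beta, then count t
# should sit at rank ~ t^-(beta-1), making rank-frequency a power law too.
beta, C = 2.5, 1e6

def rank_of_count(t):
    """Rank of a word with count t = number of words with a larger count."""
    return sum(C * s ** -beta for s in range(t + 1, 10**6))

for t in (10, 20, 40, 80):
    approx = C / (beta - 1) * t ** -(beta - 1)
    print(t, round(rank_of_count(t)), round(approx))
# The two columns roughly agree, so the rank-j word has count
# ~ j^(-1/(beta-1)): a power law in rank as well as in value.
```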
  • Early modeling work
    • The characterization of power laws is a limiting statement
    • Early modeling work showed approaches that provide the correct form of the tail in the limit
    • Later work introduced the rate of convergence of a process to its limiting distribution
  • A model of Simon
    • Following Simon [1955], described in terms of word frequencies
    • Consider a book being written. Initially, the book contains a single word, “the.”
    • At time t, the book contains t words. Simon's process generates the (t+1)-st word based on the current book.
  • Constructing a book: snapshot at time t. “When in the course of human events, it becomes necessary…” Let f(i,t) be the number of words of count i at time t. Current word frequencies:
    Rank     Word           Count
    1        “the”          1000
    2        “of”           600
    3        “from”         300
    …        …              …
    4,791    “necessary”    5
    …        …              …
    11,325   “neccesary”    1
  • The Generative Model
    • Assumptions:
      • Constant probability α that a neologism is introduced at any timestep
      • The probability of re-using a word of count i is proportional to i·f(i,t), that is, the total number of occurrences of count-i words
    • Algorithm:
      • With probability α, a new word is introduced into the text
      • With the remaining probability 1 - α, a word of count i is chosen with probability proportional to i·f(i,t)
    (A simulation sketch follows the next slide.)
  • Constructing a book: snapshot at time t (continued). Let f(i,t) be the number of words of count i at time t, and let K = Σ_i i·f(i,t). Then:
    Pr[“the”] = (1 - α)·1000/K
    Pr[“of”] = (1 - α)·600/K
    Pr[some count-1 word] = (1 - α)·1·f(1,t)/K
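A simulation sketch of Simon's process. The key identity it relies on: repeating a uniformly chosen earlier token reuses a count-i word with probability proportional to i·f(i,t), exactly as the model requires. T and α are illustrative:

```python
import random
from collections import Counter

def simon_book(T=200_000, alpha=0.1):
    """Simulate Simon's process: with probability alpha mint a new word,
    otherwise repeat a uniformly chosen earlier token."""
    tokens = [0]        # word ids; the book starts with one word ("the")
    next_id = 1
    for _ in range(T):
        if random.random() < alpha:
            tokens.append(next_id)
            next_id += 1
        else:
            tokens.append(random.choice(tokens))
    return tokens

counts = Counter(simon_book())          # count of each word id
f = Counter(counts.values())            # f(i,T): number of words of count i
for i in (1, 2, 4, 8, 16):
    print(i, f[i])
# f(i) falls off polynomially; Simon's limit is i^-(1 + 1/(1-alpha)).
```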
  • What's going on? (Figure: buckets 1…6; each ball in bucket i is one unique word that occurs i times in the current document.)
  • With probability α, a new word is introduced into the text: a ball is added to bucket 1.
  • With probability 1 - α, an existing word is reused. How many times do words in bucket i occur? i times each, so bucket i is chosen with probability proportional to i times its size, and the chosen ball moves up one bucket.
  • The size of bucket 3 at time t+1 depends only on the sizes of buckets 2 and 3 at time t. Must show: the fraction of balls in the 3rd bucket approaches some limiting value.
  • Models for power laws in the web graph
    • Retelling the Simon model: “preferential attachment”
      • Barabasi et al
      • Kumar et al
    • Other models for the web graph:
      • [Aiello, Chung, Lu], [Huberman et al]
  • Why create such a model?
    • Evaluate algorithms and heuristics
    • Get insight into page creation
    • Estimate hard-to-sample parameters
    • Help understand web structure
    • Cost modeling for query optimization
    • To find “surprises” means we must understand what is typical .
  • Random graph models. Traditional random graphs [Bollobás 85] are not like the web! Is there a better model?
                      G(n,p)    Web
    indegree > 1000   0         100,000
    K_{2,3}'s         0         125,000
    4-cliques         0         many
  • Desiderata for a graph model
    • Succinct description
    • Insight into page creation
    • No a priori set of "topics", but...
    • ... topics should emerge naturally
    • Reflect structural phenomena
    • Dynamic page arrivals
    • Should mirror the web's "rich get richer" property, and manifest link correlation.
  • Page creation on the web
    • Some page creators will link to other sites without regard to existing topics, but
    • Most page creators will be drawn to pages covering existing topics they care about, and will link to pages within these topics
    Model idea: new pages add links by "copying" them from existing pages
  • Generally, would require…
    • Separate processes for:
      • Node creation
      • Node deletion
      • Edge creation
      • Edge deletion
  • A specific model
    • Nodes are created in a sequence of discrete time steps
      • e.g., at each time step, a new node is created with d (>= 1) out-links
    • Probabilistic copying
        • links go to random nodes with probability α
        • copy d links from a random node with probability 1 - α
  • Example: a new node arrives. With probability α, it links to a uniformly-chosen page.
  • With probability 1 - α, it decides to copy a link: it first chooses a page uniformly, then chooses a uniform out-edge from that page, then links to the destination of that edge ("copies" the edge). Under copying, your rate of getting new inlinks is proportional to your in-degree.
  • Degree sequences in this model: Pr[page has k inlinks] ≈ k^(-(2-α)/(1-α)), a heavy-tailed, inverse-polynomial degree sequence (α = 1/11 matches the web's observed exponent 2.1). Pages like Netscape and Yahoo! exist. Many cores, cliques, and other dense subgraphs. (A simulation sketch follows.)
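A minimal simulation of the copying process, assuming the simplest variant (fixed out-degree d, one prototype per new node); the bootstrap self-loop on node 0 is an artifact of the sketch, not part of the model:

```python
import random
from collections import Counter

def copying_graph(n=100_000, d=3, alpha=1/11):
    """Grow a directed graph by probabilistic copying: each arriving node
    gets d out-links; each link points to a uniform random existing node
    with probability alpha, and otherwise copies the corresponding
    out-link of a uniformly chosen prototype node."""
    out = [[0] * d]                  # bootstrap: node 0 links to itself
    for u in range(1, n):
        proto = random.randrange(u)
        out.append([random.randrange(u) if random.random() < alpha
                    else out[proto][i] for i in range(d)])
    return out

indeg = Counter(v for links in copying_graph() for v in links)
tail = Counter(indeg.values())       # number of pages at each indegree
for k in (1, 2, 4, 8, 16, 32):
    print(k, tail[k])
# Counts fall off roughly as k^-(2-alpha)/(1-alpha); alpha = 1/11 gives 2.1.
```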
  • Model extensions
    • Component size distributions.
    • More complex copying.
    • Tighter lower tail bounds.
    • More structure results.
  • A model of Mandelbrot
    • Key idea: Generate frequencies of English words to maximize information transferred per unit cost
    • Approach:
      • Say word i occurs with probability p(i)
      • Set the transmission cost of word i to be log(i)
      • Average information per word: -Σ_i p(i)·log p(i)
      • Cost of the word with probability p(j): log(j); average cost per word: Σ_j p(j)·log(j)
      • Choose the probabilities p(i) to maximize information/cost
    • Result: p(j) = c·j^(-β)
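A numeric sanity check of the claim, entirely illustrative: the cost is shifted to log(j+2) to dodge the log(1) = 0 degeneracy at rank 1, which is consistent with the general Zipf-Mandelbrot form c·(j+V)^(-β); N and the β grid are arbitrary choices:

```python
import math, random

N = 1000   # vocabulary size (illustrative)

def info_per_cost(p):
    info = -sum(pj * math.log(pj) for pj in p if pj > 0)
    cost = sum(pj * math.log(j + 2) for j, pj in enumerate(p))
    return info / cost

def power_law(beta):
    w = [(j + 2) ** -beta for j in range(N)]
    s = sum(w)
    return [x / s for x in w]

best_beta = max((b / 50 for b in range(5, 500)),
                key=lambda b: info_per_cost(power_law(b)))
p = power_law(best_beta)
print("best beta:", best_beta, " ratio:", round(info_per_cost(p), 4))

# A random perturbation of the best power law should not do better.
q = [pj * random.uniform(0.9, 1.1) for pj in p]
s = sum(q)
print("perturbed ratio:", round(info_per_cost([x / s for x in q]), 4))
```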
  • Discussion of Mandelbrot’s model
    • Trade-offs between communication cost (log(j) for the rank-j word) and information content.
    • Are there other tradeoff-based models that drive similar properties?
  • Heuristically Optimized Trade-offs
    • Goal: construction of trees (note: models to generate trees with power law behavior were first proposed in [Yule26])
    • Idea: New nodes must trade off connecting to nearby nodes, and connecting to central nodes.
    • Model:
      • Points arrive uniformly within the unit square
      • New point arrives, and computes two measures for candidate connection points j
        • d(j) : distance from new node to existing node j (“nearness”)
        • h(j) : distance from node j to root of tree (“centrality”)
      • The new destination is chosen to minimize α·d(j) + h(j)
    • Result: for a wide variety of values of α, the distribution of degrees has a power law
    [Fabrikant, Koutsoupias, Papadimitriou 2002]
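A small simulation of this tree-growth process. It is O(n^2), so n is kept small, and α = 10 is an arbitrary pick in the interesting middle range (the FKP analysis ties the qualifying range of α to n):

```python
import math, random
from collections import Counter

def fkp_tree(n=5_000, alpha=10.0):
    """Grow a tree over uniform random points in the unit square; each new
    point attaches to the existing node j minimizing alpha*d(j) + h(j),
    where d is Euclidean distance and h is hop distance to the root."""
    pts, hops, deg = [(0.5, 0.5)], [0], [0]
    for _ in range(1, n):
        x, y = random.random(), random.random()
        j = min(range(len(pts)),
                key=lambda i: alpha * math.dist((x, y), pts[i]) + hops[i])
        pts.append((x, y)); hops.append(hops[j] + 1); deg.append(0)
        deg[j] += 1
    return deg

hist = Counter(fkp_tree())   # number of nodes at each degree
print(sorted(hist.items())[:8])   # many low-degree nodes, a few big hubs
```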
  • Monkeys on Typewriters
    • Consider a creation model divorced from concerns of information and cost
    • Model:
      • Monkey types randomly, hits space bar with probability q , character chosen uniformly with remaining probability
    • Result:
      • The rank-j word occurs with probability q·j^(log(1-q) - 1) = c·j^(-β), where the logarithm is taken base the alphabet size
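The monkey is easy to simulate; a five-letter alphabet and q = 0.2 are illustrative choices:

```python
import random
from collections import Counter

def monkey_text(n_chars=2_000_000, q=0.2, alphabet="abcde"):
    """Type randomly: space with probability q, else a uniform letter."""
    chars = [" " if random.random() < q else random.choice(alphabet)
             for _ in range(n_chars)]
    return "".join(chars).split()

freqs = sorted(Counter(monkey_text()).values(), reverse=True)
for rank in (1, 5, 25, 125, 625):
    if rank <= len(freqs):
        print(rank, freqs[rank - 1])
# Frequency falls polynomially in rank, exponent log_m(1-q) - 1 for
# alphabet size m: a power law with no notion of meaning or cost at all.
```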
  • Other Distributions
    • “Power law” means a clean characterization of a particular property on distribution upper tails
    • Often used to mean “heavy tailed,” meaning bounded away from an exponentially decaying distribution
    • There are other forms of heavy-tailed distributions
    • A commonly-occurring example: lognormal distribution
  • Quick characterization of lognormal distributions
    • Let X be a normally-distributed random variable
    • Let Y = e^X (equivalently, ln Y is normally distributed)
    • Then Y is lognormal
    • Properties:
      • Often occur in situations of multiplicative growth
    • Concern: There is a growing sequence of papers dating back several decades questioning whether certain observed values are best described by power law or lognormal (or other) distributions.
  • One final direction…
    • The Central Limit Theorem tells us how sums of independent random variables behave in the limit
    • Example: multiplicative growth X_j = X_0·F_1·F_2···F_j, so ln X_j = ln X_0 + Σ_i ln F_i
    • The sum of the ln F_i is approximately normal, so X_j is well-approximated by a lognormal variable
    • Thus, lognormal variables arise in situations of multiplicative growth
    • Examples in biology, ecology, economics,…
    • Example: [Huberman et al]: growth of web sites
    • Similarly, the same result applies to products of lognormal variables
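A quick demonstration of the CLT argument; the uniform growth factors on [0.9, 1.1] are purely illustrative:

```python
import math, random

def grow(steps=200):
    """X_j = X_0 * F_1 * ... * F_j with i.i.d. positive factors."""
    x = 1.0
    for _ in range(steps):
        x *= random.uniform(0.9, 1.1)
    return x

logs = [math.log(grow()) for _ in range(20_000)]
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
skew = sum((v - mean) ** 3 for v in logs) / (len(logs) * var ** 1.5)
print(f"ln X: mean {mean:.3f}  var {var:.3f}  skew {skew:.3f} (near 0 => normal)")
```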
    • Each of these generative models is evolutionary
    • What is the role of time?