5 Understanding Page Rank

An introductory lecture on the Google PageRank algorithm stressing the mathematical underpinnings. Based on the excellent book by Langville & Meyer.

5 Understanding Page Rank: Presentation Transcript

  • Understanding Google’s PageRank™. Based on Amy Langville & Carl Meyer, Google’s PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press, 2006.
  • Review: The Search Engine
  • An Elegant Formula
    • πᵀ = πᵀ(αS + (1-α)E)
    • Google’s (Brin & Page) PageRank™ equation.
    • US Patent #6285999, filed 1998, granted 2001
    • Solving this equation is the world’s largest matrix computation.
  •  π  π   S + (1-  ) E)
    • Derived from a formula B&P worked out in graduate school (itself derived from traditional bibliometrics research literature).
    • r(P i ) =
    • Essential characteristic: high-ranking pages associate with high-ranking pages
    r (P j ) |P j | _____  P j  B Pi
  •  π  π   S + (1-  ) E)
        • r(P i ) =
        • Must be applied to a set of linked pages, or a graph.
        • To do this we analyze the graph to see it’s out-links and back-links.
        • Therefore. . .
    r (P j ) |P j | _____ P j  B Pi  r(P i ) : the rank of a given page P j  B pi : the ranks of the set of back- linking pages r (P j ) : the rank of a given page |P j | : the number of out-links on a page
  •  π  π   S + (1-  ) E)
    • A site graph like this:
    1 2 3 5 4 6
  •  π  π   S + (1-  ) E)
    • becomes a directed graph like this:
    1 2 3 6 4 5
  • But there’s a problem
    • Nothing’s ranked!
    • r(Pᵢ) = Σ_{Pⱼ ∈ B(Pᵢ)} r(Pⱼ) / |Pⱼ| needs the ranks r(Pⱼ) of the back-linking pages, but no page has a rank yet.
    [diagram: directed graph on pages 1–6]
  • The solution. . . sort of
    • Start by assuming all the ranks are equal. In this example each page is just 1 of 6, so the initial rank is expressed as 1/6.
    • Then you keep feeding the numbers through the formula until you get a ranking.
    • This results in a rank matrix. . .
    [diagram: directed graph on pages 1–6]
  • Directed graph iterative node values

         r0     r1     r2      Rank (after r2)
    P1   1/6    1/18   1/36    5
    P2   1/6    5/36   1/18    4
    P3   1/6    1/12   1/36    5
    P4   1/6    1/4    17/72   1
    P5   1/6    5/36   11/72   3
    P6   1/6    1/6    14/72   2

    [diagram: directed graph on pages 1–6]
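The table above can be reproduced with a few lines of NumPy. This is a minimal sketch, not B&P's production code; H is the row-normalized link matrix of the 6-page example (row i spreads page i's rank evenly over its out-links).

```python
import numpy as np

# Row-normalized link matrix of the 6-page example: entry (i, j) is the
# share of page i's rank passed to page j. Row P2 is all zeros because
# P2 has no out-links (a dangling node).
H = np.array([
    [0,   1/2, 1/2, 0,   0,   0  ],
    [0,   0,   0,   0,   0,   0  ],
    [1/3, 1/3, 0,   0,   1/3, 0  ],
    [0,   0,   0,   0,   1/2, 1/2],
    [0,   0,   0,   1/2, 0,   1/2],
    [0,   0,   0,   1,   0,   0  ],
])

pi = np.full(6, 1/6)      # r0: every page starts at 1/6
for _ in range(2):        # two updates give the r1 and r2 columns
    pi = pi @ H           # r_{k+1} = r_k H
print(pi)                 # r2 = [1/36, 1/18, 1/36, 17/72, 11/72, 14/72]
```

Two iterations reproduce the r2 column of the table exactly; note that the total rank shrinks each step because the dangling node P2 absorbs rank without passing it on, one of the problems the later adjustments fix.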
  • This can’t go on forever
    • Some values are equivalent (ties). In the interest of speed and efficiency, we need to know if the ranks converge: that is, will we break all ties, or will we keep iterating indefinitely and never have a decisive ranking?
    • To determine this, the formula must be transformed using a binary adjacency transformation and Markov chain theory.
    [diagram: directed graph on pages 1–6]
  • Convert the iterative calculation to a matrix calculation, using a binary adjacency transformation to produce an n×n matrix (here 6×6):

          P1    P2    P3    P4    P5    P6
    P1  [  0    ½     ½     0     0     0  ]
    P2  [  0    0     0     0     0     0  ]
    P3  [ 1/3   1/3   0     0     1/3   0  ]
    P4  [  0    0     0     0     ½     ½  ]
    P5  [  0    0     0     ½     0     ½  ]
    P6  [  0    0     0     1     0     0  ]
  • Now you can treat each row of the matrix above as a vector: a set of values giving the share of that page’s rank passed to every other page.
  • This is a sparse matrix: most of its entries are zero. That’s good, because sparse matrices are cheap to store and multiply.
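One way to see why sparsity matters is a storage sketch: instead of the full n×n array, keep only each page's out-link list. This is a hypothetical layout for illustration, not B&P's actual data structure.

```python
# The same 6-page graph stored sparsely: just each page's out-links.
# At web scale (billions of pages, a handful of links each), the dense
# matrix would be almost entirely zeros; only the links need storing.
out_links = {
    1: [2, 3],
    2: [],          # dangling node: no out-links
    3: [1, 2, 5],
    4: [5, 6],
    5: [4, 6],
    6: [4],
}

def step(rank):
    """One iteration: each page splits its rank evenly among its out-links."""
    new = {p: 0.0 for p in out_links}
    for p, targets in out_links.items():
        for q in targets:
            new[q] += rank[p] / len(targets)
    return new

rank = {p: 1 / 6 for p in out_links}
rank = step(step(rank))     # two iterations reproduce the r2 column
```

The sparse update visits each link once per iteration, so the cost scales with the number of links rather than with n².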
  •  π  π   S + (1-  ) E)
    •  So now this:
    • Has become this: π  π   )
    • We only need a couple more adjustments.
    r (P j ) |P j | _____ P j  B Pi 
        • r(P i ) =
  •  π  π   S + (1-  ) E)
    • Sometimes, people teleport to a page. They just enter the URL and go. And just as easily, they can teleport out. To account for this, B&P added two adjustments:
    •  S accounts for people who reach a dead end and jump to another page within a site.  is a weighted probability that someone will leave.
    • S is a matrix of probable page destinations.
  •  π  π   S + (1-  ) E)
    • What about people who jump out to a completely new destination? To account for this, B&P added the final adjustments:
    • 1-  is the inverted weighted probability that someone will leave and go to a completely new site.
    • E is a random teleportation matrix of probable page destinations.
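Putting the whole formula together on the 6-page example: a minimal sketch, assuming the commonly cited choice α = 0.85 and a uniform teleportation matrix E (the slides don't fix either value).

```python
import numpy as np

n, alpha = 6, 0.85                  # alpha = 0.85 is a common choice, not fixed by the slides
H = np.array([                       # the 6-page link matrix from earlier
    [0,   1/2, 1/2, 0,   0,   0  ],
    [0,   0,   0,   0,   0,   0  ],
    [1/3, 1/3, 0,   0,   1/3, 0  ],
    [0,   0,   0,   0,   1/2, 1/2],
    [0,   0,   0,   1/2, 0,   1/2],
    [0,   0,   0,   1,   0,   0  ],
])

S = H.copy()
S[S.sum(axis=1) == 0] = 1 / n       # dead-end fix: dangling rows become uniform
E = np.full((n, n), 1 / n)          # uniform teleportation matrix (an assumption)
G = alpha * S + (1 - alpha) * E     # the Google matrix

pi = np.full(n, 1 / n)
for _ in range(100):                # power iteration to (near) convergence
    pi = pi @ G                     # pi = pi (alpha S + (1 - alpha) E)
print(pi.round(3))
```

Because G is stochastic and strictly positive, the iteration converges to a unique π that sums to 1: a decisive ranking, with the ties of the raw iteration broken.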