
Locality-Sensitive Hashing & MinHash on Facebook friend links

MinHash and LSH are implemented in Hadoop to find pairs of users whose friend lists have high Jaccard similarity; these pairs are then used to make friend recommendations on the Facebook New Orleans friend-links data, in the manner of collaborative filtering.
Written by Chengeng Ma

Published in: Data & Analytics


  1. Locality-Sensitive Hashing & MinHash on Facebook friend links data & friend recommendation. Chengeng Ma, Stony Brook University, 2016/03/05
  2. 1. What is Locality-Sensitive Hashing & MinHash? • If you are already familiar with LSH and MinHash, skip directly to slide 12; the following slides are just fundamental background on this topic, covered in more detail in the book Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman and Jeffrey D. Ullman.
  3. What are LSH & MinHash about? • Locality-Sensitive Hashing (LSH) & MinHash are two profoundly important big-data methods for finding similar items. • At Amazon, if you can find two similar customers, you can recommend to one the items the other has purchased. • For search engines like Google and Baidu, users hope to find pictures similar to the one they have uploaded.
  4. Calculating the similarity of every pair is a lot of computation (why LSH?) • If you have 10^6 items in your data, you need almost 0.5 × 10^12 computations to get the similarity of every pair. • You would have to parallelize a great many tasks to handle this amount of computation. • You can do that with Hadoop, but you can do better with LSH & MinHash. • LSH hashes an item to a bucket based on the item's feature list. • If two items have very similar feature lists, they have a high probability of being hashed into the same bucket. • You can amplify this effect to different extents by tuning parameters. • In the end, you only compute similarities for pairs of items that fall into the same bucket.
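As a quick sanity check on the pair count quoted above, a short Python sketch (plain Python for illustration, not part of the author's Hadoop project):

```python
# With n items, the number of unordered pairs is n*(n-1)/2,
# so 10^6 items give almost 0.5e12 similarity computations.
def num_pairs(n: int) -> int:
    """Number of unordered item pairs among n items."""
    return n * (n - 1) // 2

print(num_pairs(10**6))  # 499999500000, i.e. almost 0.5e12
```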
  5. How does MinHash come in? • LSH needs you to keep the feature list of each item in a matrix-like format (the row order matters). • If the universal set is fixed in size or small, e.g., a fingerprint array, then LSH alone works well. The 1st column represents the items person S1 has purchased; the 1st row represents who has purchased item a.
  6. How does MinHash come in? • Jaccard similarity = 2/7. • However, if the universal set is large or not fixed in size, e.g., the items purchased by each account, or friend lists on a social network, then formatting the dataset as a matrix is inefficient, since the dataset is usually very sparse. • This is where MinHash helps, provided the similarity between two feature lists is measured as Jaccard similarity.
  7. What is a MinHash value? • Permute the rows of the original matrix. • For each column (set), the row index of the first non-empty element is the MinHash value of that column. Permuting the original matrix to the row order b, e, a, d, c gives H(S1)=a, H(S2)=c, H(S3)=b, H(S4)=a.
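The permutation step can be sketched in a few lines of Python. The slide's characteristic matrix is not reproduced in this transcript, so the sets below are the classic example from Mining of Massive Datasets, which is consistent with the hash values H(S1)=a, H(S2)=c, H(S3)=b, H(S4)=a quoted above:

```python
# MinHash via an explicit row permutation. Each set is represented
# by the labels of the rows where its column has a 1.
sets = {
    "S1": {"a", "d"},
    "S2": {"c"},
    "S3": {"b", "d", "e"},
    "S4": {"a", "c", "d"},
}

def minhash(s, order):
    """Return the first row label in the permuted order that the set contains."""
    for row in order:
        if row in s:
            return row
    return None

order = ["b", "e", "a", "d", "c"]  # the permutation used on the slide
print({name: minhash(s, order) for name, s in sets.items()})
# {'S1': 'a', 'S2': 'c', 'S3': 'b', 'S4': 'a'}
```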
  8. MinHash's property (similarity is preserved): • Between sets S_a and S_b there are 3 kinds of rows: (X): both sets have 1; (Y): one has 1, the other has 0; (Z): both have 0. • J_ab = |X| / (|X| + |Y|) and Pr[h(S_a) = h(S_b)] = |X| / (|X| + |Y|), so the probability that two sets share the same MinHash value equals their Jaccard similarity: Pr[h(S_a) = h(S_b)] = J_ab. • If you apply 100 different MinHash functions, you reduce one dimension of the matrix from unknown and large to 100.
  9. Permutations can be simulated by hash functions • For the j-th column of the original matrix, find all non-empty elements and feed their row indexes into the i-th hash function; the minimum output is the signature element SIG(i, j). • Hash function: (a*x + b) % N, where N is a prime equal to or slightly larger than the size of the universal set (the number of rows of the original matrix), and a & b are integers in [1, N-1]. • The result is the signature matrix SIG, whose rows are indexed by hash functions and whose columns are indexed by sets. For example, we can use 2 hash functions to simulate 2 permutations: (x+1)%5 and (3x+1)%5, where x is the row index.
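The signature computation can be sketched with the two hash functions named on the slide. The sets themselves are not in this transcript, so the ones below are an assumption (the standard example from Mining of Massive Datasets, rows a..e encoded as 0..4):

```python
# Build a 2x4 signature matrix: SIG[i][j] is the minimum of hash
# function i over the row indexes present in set j.
hash_funcs = [lambda x: (x + 1) % 5, lambda x: (3 * x + 1) % 5]

sets = {"S1": {0, 3}, "S2": {2}, "S3": {1, 3, 4}, "S4": {0, 2, 3}}

SIG = [[min(h(x) for x in s) for s in sets.values()] for h in hash_funcs]
print(SIG)  # [[1, 3, 0, 1], [0, 2, 0, 0]]
```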
  10. Now that you have the signature matrix, you use it instead of the original matrix to do LSH. • Divide the signature matrix into b bands of r rows each. • For each band, build an empty hash table and hash each column's portion within that band into a bucket, so that only identical band portions are hashed into the same bucket. • Columns within the same bucket are candidates: form pairs from them and calculate their similarities. • Take the union of the candidates over all bands and filter out the false positives.
  11. Why does LSH work? --- the amplification effect. [Figure: probability of becoming a candidate vs. Jaccard similarity (the S-curve).]
  12. 2. Details of my class project: dataset • User-to-user links from the Facebook New Orleans network data. • The data was created by Bimal Viswanath et al. and used for their paper On the Evolution of User Interaction in Facebook. • It can be downloaded at http://socialnetworks.mpi-sws.org/data/facebook-links.txt.gz • It has 63,731 persons and 1,545,686 links, 10.4 MB in size. • The data is not large, but as an exercise I will use Hadoop throughout this project.
  13. My class project plan: • First find similar persons based on users' friend lists, with LSH and MinHash implemented in Hadoop. • Similar persons are called "close friends" in this project. • Then recommend to each user the persons who are friends of a close friend but not yet friends of the user. • This is essentially collaborative filtering. • Two persons with similar friend lists are considered "close friends", since they most likely have some relationship in the real world, e.g., schoolmates, workmates, teammates, … • If you are a good friend of someone, you may like to get to know more of his/her friends. • We do not set the similarity threshold too high, since finding a duplicate of you is not interesting.
  14. Why not just use common-friend counts? • The classical way is based on the number of common friends. • However, some persons share a lot of common friends with you yet have nothing to do with you, e.g., celebrities, politicians, salesmen who want to sell their goods through the social network, or even swindlers… • People use a social network to find friends who can physically reach them, not persons too far away from them. • Most of my friends may like a pop singer and become friends with him; based on common friends, the system would recommend that pop singer to me. • But the pop singer can never remember me, since he has millions of friends on the site.
  15. Preparation work: • 1. Put the data into the format below, where j is the j-th person and Pj is the list of friends of person j:
      1: P1
      2: P2
      … …
      n: Pn
      • 2. In this study, 63,731 is both the number of sets to compare and the size of the universal element set, because both the key j and the elements of Pj are user ids. • 3. 63731 is not a prime (63731 = 101 × 631), and only a prime modulus can simulate true permutations, so we use 63737 instead, equivalent to adding 6 persons who have no friends online. • 4. Hash function for MinHash (N = 63737L, hashNum = 100), for 1 ≤ x ≤ N and 0 ≤ i ≤ hashNum − 1:
      private long fhash(int i, long x) {
          return (13 + (x - 1) * (N * i / (3 * hashNum) + 1)) % N;
      }
  16. Pseudocode of MinHash (Map-only job) • Mapper input: (c, Pc), where Pc is a list [j1, j2, …, js]. • Build a new array s[hashNum] (hashNum = 100 here), initialized to infinity everywhere. • For the i-th hash function, each element jj in Pc is an opportunity to get a lower hash value; the minimum over all jj is the MinHash value SIG[i, c]. • Output c as key and the contents of array s as value.
      input (c, Pc), where Pc = [j1, j2, …, js]
      long[] s = new long[hashNum];
      for 0 ≤ ii ≤ hashNum − 1: s[ii] = infinity;
      for jj in [j1, j2, …, js]:
          for 0 ≤ ii ≤ hashNum − 1: s[ii] = min(s[ii], fhash(ii, jj));
      output (c, array s);
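The mapper above can be sketched in plain Python. The hash family below uses the generic (a*x + b) % N form from slide 9 with arbitrarily chosen a and b, not the author's exact fhash, so the signature values are illustrative only:

```python
# Python sketch of the Map-only MinHash job (the real version is a Hadoop
# mapper in Java). N is the prime slightly above the 63,731 users.
N = 63737
HASH_NUM = 100

def hash_family(i):
    """The i-th hash function; a and b here are arbitrary illustrative choices."""
    a, b = 2 * i + 1, 3 * i + 13
    return lambda x: (a * x + b) % N

def minhash_mapper(c, friends):
    """Mapper: (user c, friend id list) -> (c, HASH_NUM-entry signature column)."""
    funcs = [hash_family(i) for i in range(HASH_NUM)]
    sig = [float("inf")] * HASH_NUM
    for j in friends:                 # each friend id is an opportunity
        for i, h in enumerate(funcs): # to lower hash value i
            sig[i] = min(sig[i], h(j))
    return c, sig

_, sig = minhash_mapper(1, [5, 17, 63000])
print(len(sig))  # 100
```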
  17. Pseudocode of LSH: • Mapper input: (j, Sj), where Sj is the j-th column of the signature matrix. • Split array Sj into B bands: Sj1, Sj2, …, SjB. • For the b-th band, compute its hash value and store it in myHash. • Output the tuple (b, myHash) as key and j as value.
      for 1 ≤ b ≤ B:
          myHash = getHashValue(Sjb)
          output { (b, myHash), j }
      • Reducer input: { (b, aHashValue), [j1, j2, …, jp] }. Form pairs among j1, …, jp and output them as candidate pairs.
      for 1 ≤ x ≤ p − 1:
          for x + 1 ≤ y ≤ p:
              output (jx, jy)
      One more program is needed to remove duplicates. Hadoop's sorting procedure gathers for us all the items that both share the same hash value and come from the same band range.
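The banding map/reduce pair above can be sketched without Hadoop; the signatures and band parameters below are made up for illustration:

```python
from collections import defaultdict
from itertools import combinations

def lsh_candidates(signatures, n_bands, rows_per_band):
    """signatures: {user_id: signature list}. Returns the candidate pairs."""
    buckets = defaultdict(list)          # (band index, band content) -> users
    for j, sig in signatures.items():    # "mapper": one bucket key per band
        for b in range(n_bands):
            band = tuple(sig[b * rows_per_band:(b + 1) * rows_per_band])
            buckets[(b, band)].append(j) # only identical bands share a bucket
    pairs = set()                        # set() plays the deduplication step
    for users in buckets.values():       # "reducer": form pairs per bucket
        for x, y in combinations(sorted(users), 2):
            pairs.add((x, y))
    return pairs

sigs = {1: [5, 5, 9, 2], 2: [5, 5, 7, 3], 3: [0, 1, 9, 2]}
print(lsh_candidates(sigs, n_bands=2, rows_per_band=2))  # {(1, 2), (1, 3)}
```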
  18. Hash function for LSH: • LSH needs to hash a band portion of a vector into a value. • We want only identical vectors to be hashed into the same bucket. • An easy way is to use the band's string representation directly, since Hadoop also uses Text to transport data. • For example, hash the band portion to the string "21,14,36,55". • This way, only exactly identical vector portions can land in the same bucket.
  19. Parameter settings: • We do not want to set the similarity threshold too high, since finding a duplicate of you on the web is not interesting. • So we set the similarity threshold near 0.1. • We set B=50 and hashNum=100, so that each band in LSH has R=2 rows. • P(recommend | x) = 1 − (1 − x^R)^B. • With B=50, R=2 the S-curve grows quickly: x=0.1 → P=0.39; x=0.15 → P=0.68; x=0.2 → P=0.87.
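The S-curve values quoted above can be reproduced directly from the formula with the project's parameters:

```python
# P(candidate | s) = 1 - (1 - s^R)^B with B=50 bands of R=2 rows.
B, R = 50, 2

def p_candidate(s):
    """Probability that a pair with Jaccard similarity s becomes a candidate."""
    return 1 - (1 - s ** R) ** B

for s in (0.1, 0.15, 0.2):
    print(f"s={s}: P={p_candidate(s):.2f}")
# s=0.1: P=0.39, s=0.15: P=0.68, s=0.2: P=0.87
```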
  20. Result test: • P(recommend | x) = P(recommend & x ≤ s < x+dx) / P(x ≤ s < x+dx), which in theory equals 1 − (1 − x^R)^B. • The Hadoop output can be analyzed to get P(recommend & x ≤ s < x+dx). • For P(x ≤ s < x+dx), another Hadoop program is written to actually calculate the similarities of all possible pairs (which takes N(N−1)/2 computations), since the dataset is not too large. • But only similarities equal to or larger than 0.1 are stored in the output file, because storing all the similarities would take several terabytes.
  21. [Figure: P(recommend | x) derived from the real dataset (blue) and the theoretical curve (red).]
  22. [Figure: histogram of LSH-recommended pairs and of all existing pairs (cut at 0.1) within the data.]
  23. Statistics • LSH & MinHash recommend 1,065,318 candidate pairs. • There are 660,334 existing pairs that really have s > 0.1. • Their intersection contains 429,176 pairs, i.e., the candidates cover 65% of the similar pairs (s > 0.1). • But the computation is hundreds of times faster than brute force. • Since 1 − (1 − x^R)^B = P(recommend & x ≤ s < x+dx) / P(x ≤ s < x+dx), we define a reference value x_Ref by 1 − (1 − x_Ref^R)^B = [∫_{0.1}^{1} P(recommend & x ≤ s < x+dx)] / [∫_{0.1}^{1} P(x ≤ s < x+dx)] = 429176/660334 = 0.649938. • Taking in the parameters B=50, R=2 gives x_Ref = 0.1441, which is slightly above 0.1.
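The reference value x_Ref quoted above can be recovered by inverting the S-curve at the observed recall:

```python
# Solve 1 - (1 - x^R)^B = 429176/660334 for x, with B=50, R=2.
B, R = 50, 2
recall = 429176 / 660334                       # = 0.649938...
x_ref = (1 - (1 - recall) ** (1 / B)) ** (1 / R)
print(round(x_ref, 4))  # 0.1441
```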
  24. How to calculate P(x ≤ s < x+dx)? (the similarity-join problem) • To get the exact p.d.f., you need to actually calculate the similarities of all N(N−1)/2 pairs. • Hadoop can parallelize and speed this up, but don't use too high a replication rate. • How about the method below?
      Mapper input: (i, Pi)
      for 1 ≤ j ≤ N:
          if i < j: output { (i, j), Pi }
          else if i > j: output { (j, i), Pi }
      Reducer input: { (i, j), [Pi, Pj] }
      output { (i, j), Sij }
      • This method has replication rate N and will definitely fail. The correct way is to split persons into G groups.
  25. The correct way to get the similarities of all pairs with Hadoop. • Mapper input: (i, Pi). • Determine its group number u = i % G, where G is the number of groups you split into; G is also the replication rate.
      for 0 ≤ v ≤ G − 1:
          if u < v: output { (u, v), (i, Pi) }
          else if u > v: output { (v, u), (i, Pi) }
      • Reducer input: { (u, v), [∀ (i, Pi) ∈ group u, ∀ (j, Pj) ∈ group v] } • Create two empty lists uList & vList, gathering all (i, Pi) that belong to group u and group v respectively.
      for 0 ≤ α ≤ size(uList) − 1:
          get i and Pi from uList[α]
          for 0 ≤ β ≤ size(vList) − 1:
              get j and Pj from vList[β]
              if i < j: output { (i, j), Sij }
              else if i > j: output { (j, i), Sij }
      (continued on the next slide)
  26. Still within the reducer: • The above only covers pairs whose elements come from different groups. • Now we consider elements within the same group. • We avoid calculating the same pair multiple times by the if-conditions below.
      if v == u + 1:
          for 0 ≤ α ≤ size(uList) − 2:
              get i and Pi from uList[α]
              for α + 1 ≤ β ≤ size(uList) − 1:
                  get j and Pj from uList[β]
                  if i < j: output { (i, j), Sij }
                  else if i > j: output { (j, i), Sij }
      if u == 0 & v == G − 1:
          for 0 ≤ α ≤ size(vList) − 2:
              get i and Pi from vList[α]
              for α + 1 ≤ β ≤ size(vList) − 1:
                  get j and Pj from vList[β]
                  if i < j: output { (i, j), Sij }
                  else if i > j: output { (j, i), Sij }
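The grouped join of slides 25-26 can be sketched in plain Python. For brevity this sketch handles within-group pairs with a direct per-group loop instead of the slide's "v == u+1 / u == 0 & v == G−1" trick (which only serves to spread that work across reducers); the data is made up:

```python
from collections import defaultdict
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

def grouped_pairs(data, G):
    """data: {i: friend set Pi}. Yields each unordered pair (i, j) exactly once,
    with replication rate G instead of N."""
    groups = defaultdict(list)
    for i, p in data.items():
        groups[i % G].append((i, p))                # u = i % G
    for u, v in combinations(range(G), 2):          # cross-group pairs
        for (i, pi) in groups[u]:
            for (j, pj) in groups[v]:
                yield tuple(sorted((i, j))), jaccard(pi, pj)
    for u in range(G):                              # within-group pairs, once
        for (i, pi), (j, pj) in combinations(groups[u], 2):
            yield tuple(sorted((i, j))), jaccard(pi, pj)

data = {i: {i, i + 1, i + 2} for i in range(6)}
pairs = dict(grouped_pairs(data, G=3))
print(len(pairs))  # 15 = 6*5/2: every pair exactly once
```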
  27. Post-processing work: • 1. Filter out the false positives by calculating the similarities of the candidate pairs. Then we have the similar persons ("close friends") for a lot of users. • The general idea is to use 2 MR jobs: the 1st MR job uses i as key and changes (i, j) to (i, j, Pi); the 2nd MR job changes (i, j, Pi) to (i, j, Pi, Pj), from which you can get the similarity Sij. • 2. Recommendation: for each user, take the union of his/her close friends' friend lists and filter out the members he/she already knows. • The general idea: when you have a similar-person list {a, [b1, b2, …, bs]}, transform it into {a, [Pb1, Pb2, …, Pbs]}, where Pbi is the friend list of person bi. Then take the union of the Pbi and finally subtract Pa.
  28. Filter out false positives (2 MR jobs) • 1st Mapper (multiple inputs):
      recommendation data (i, j): output { i, (j, "R") } if i < j, or { j, (i, "R") } if i > j
      friend-list data (i, Pi): output { i, (Pi, "F") }
      • 1st Reducer input: { i, [ (j, "R") ∀ j candidate-paired with i, j > i; (Pi, "F") ] }
      for each j from the input: output { j, (i, Pi, "temp") }
      • 2nd Mapper (multiple inputs): pass the temporary data through; friend-list data (j, Pj): output { j, (Pj, "F") }
      • 2nd Reducer input: { j, [ (i, Pi, "temp") ∀ i associated with j; (Pj, "F") ] }
      for each i: Sij = similarity(Pi, Pj); if Sij >= 0.1: output { (i, j), Sij }
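The end result of these two MR jobs can be sketched as one in-memory function (made-up data; the real version joins the friend lists in two passes because neither side fits on one node):

```python
# Join each candidate pair with both friend lists and keep only the
# pairs whose Jaccard similarity clears the 0.1 threshold.
def jaccard(a, b):
    return len(a & b) / len(a | b)

def filter_candidates(candidates, friend_lists, threshold=0.1):
    """candidates: iterable of (i, j); friend_lists: {user: set of friend ids}."""
    out = {}
    for i, j in candidates:
        s = jaccard(friend_lists[i], friend_lists[j])
        if s >= threshold:            # drop the false positives
            out[(i, j)] = s
    return out

friends = {1: {2, 3, 4}, 2: {1, 3}, 5: {9}}
print(filter_candidates([(1, 2), (1, 5)], friends))  # {(1, 2): 0.25}
```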
  29. Recommendation (3 MR jobs): • 1st Mapper (multiple inputs):
      similar-persons list data { a, [b1, b2, …, bs] }: output { bi, (a, "S") } for all i
      friend-list data (bi, Pbi): output { bi, (Pbi, "F") }
      • 1st Reducer input: { bi, [ (a, "S") ∀ a similar to bi; (Pbi, "F") ] }
      for each a from the input: output { a, Pbi }
      • 2nd Mapper: pass
      • 2nd Reducer input: { a, [Pb1, Pb2, …, Pbs] }: U = Pb1 ∪ Pb2 ∪ … ∪ Pbs; output { a, U }
      • 3rd Mapper (multiple inputs): (i, Ui) → { i, (Ui, "U") }; (i, Pi) → { i, (Pi, "F") }
      • 3rd Reducer input: { i, [ (Ui, "U"), (Pi, "F") ] }: output { i, Ui − Pi }
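The net effect of the three MR jobs can be sketched as a single function (made-up data; the sketch also removes the user's own id from the result, which the slide leaves implicit):

```python
# Recommend to user a: (Pb1 ∪ ... ∪ Pbs) − Pa, over a's close friends b1..bs.
def recommend(a, similar, friend_lists):
    """similar: list of users similar to a; friend_lists: {user: set of friends}."""
    union = set()
    for b in similar:                          # union of close friends' lists
        union |= friend_lists.get(b, set())
    return union - friend_lists.get(a, set()) - {a}   # drop people a knows

friends = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}}
print(recommend(1, similar=[2, 3], friend_lists=friends))  # {4, 5}
```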
  30. References: • 1. Jure Leskovec, Anand Rajaraman and Jeffrey D. Ullman. Mining of Massive Datasets. • 2. Bimal Viswanath, Alan Mislove, Meeyoung Cha, and Krishna P. Gummadi. 2009. On the evolution of user interaction in Facebook. In Proceedings of the 2nd ACM Workshop on Online Social Networks (WOSN '09). ACM, New York, NY, USA, 37-42. DOI=http://dx.doi.org/10.1145/1592665.1592675
