Hadoop and other big data tools such as Voldemort, Azkaban, and Kafka drive many data-driven products at LinkedIn, such as “People You May Know” and recommendation products like “Jobs You May Be Interested In”. Each of these products can be viewed as a large-scale social recommendation problem: analyze billions of possible options and suggest appropriate recommendations.
Since these products analyze billions of edges and terabytes of data daily, they can be built only on a large-scale distributed compute infrastructure. The Kafka publish-subscribe messaging system is used to move the data into the Hadoop file system. Hadoop MapReduce is the basic building block for analyzing billions of potential options and predicting recommendations. Over a hundred MapReduce tasks are combined into a workflow using Azkaban, a Hadoop workflow management tool. The output of the Hadoop jobs is finally stored in the Voldemort key-value store, which serves the data efficiently at run time.
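As a rough illustration of how Azkaban chains jobs together, a job in a flow is declared in a small properties-style .job file whose `dependencies` key names the jobs that must finish first. The job and script names below are hypothetical, not LinkedIn's actual flow:

```
# push-to-voldemort.job -- hypothetical final job in the flow;
# it runs only after the upstream MapReduce jobs complete.
type=command
command=sh push_to_voldemort.sh
dependencies=close-triangles,score-candidates
```

Azkaban walks these dependency declarations to schedule the hundred-plus MapReduce tasks in the correct order.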
During this talk, the audience will get a basic understanding of the link prediction problem behind the “People You May Know” feature, which is a large-scale social recommendation problem. I will present an overview of the solution to this problem using Hadoop MapReduce, the Azkaban workflow management tool, and the Voldemort key-value store. I will also describe how to efficiently compute the number of common connections (triangle closing) using Hadoop MapReduce, which is one of the many signals in link prediction.
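To give a flavor of the triangle-closing idea, here is a minimal single-machine sketch of the two MapReduce phases, using a toy graph (the graph data and function names are illustrative, not the talk's actual implementation): the map phase emits, for each user, every pair of that user's connections, since each such pair shares that user as a common connection; the reduce phase sums the counts per pair.

```python
from collections import defaultdict
from itertools import combinations

# Toy undirected graph: user -> set of direct connections.
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"bob", "carol"},
}

def map_phase(graph):
    """For each user, every pair of their connections shares that user
    as a common connection, so emit ((u, v), 1) for each pair."""
    for user, friends in graph.items():
        for u, v in combinations(sorted(friends), 2):
            yield (u, v), 1

def reduce_phase(pairs):
    """Sum the emitted 1s per pair; the total is the number of
    common connections between u and v."""
    counts = defaultdict(int)
    for key, one in pairs:
        counts[key] += one
    return dict(counts)

common = reduce_phase(map_phase(graph))
# alice and dave are not directly connected but share bob and carol,
# so common[("alice", "dave")] == 2: a candidate recommendation.
```

In a real Hadoop job, the map and reduce functions run distributed over the edge list, and a follow-up step filters out pairs that are already directly connected before the scores feed into link prediction.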
Overall, anyone interested in building applications with Hadoop MapReduce will benefit greatly from this talk.