Big Data Beyond Hadoop*: Research Directions for the Future



Michael Wrinn
Research Program Director, University Research Office,
Intel Corporation
Jason Dai
Engineering Director and Principal Engineer,
Intel Corporation


  1. Big Data Beyond Hadoop*: Research Directions for the Future — Jason Dai, Engineering Director and Principal Engineer, Software and Solutions Group; Michael Wrinn, PhD, Research Program Director, University Research Office, Intel Labs. ACAS002
  2. Agenda • Big data and the Hadoop* ecosystem • Intel university collaborations on big data research • Efficient in-memory implementations of MapReduce • Efficient graph algorithms for analytics • Intel’s efforts moving research to production. The PDF for this session's presentation is available from our Technical Session Catalog at the end of the day; the URL is on top of the Session Agenda pages in the Pocket Guide.
  3. Agenda • Big data and the Hadoop* ecosystem • Intel university collaborations on big data research • Efficient in-memory implementations of MapReduce • Efficient graph algorithms for analytics • Intel’s efforts moving research to production
  4. What is Big Data? Big Data is data that is too big, too fast or too hard for existing systems and algorithms to handle • Too Big – Terabytes going on petabytes – Smart (not brute-force) massive parallelism required • Too Fast – Sensors tagging everything create a firehose – An ingest problem • Too Hard – Complex analytics are required (e.g., to find patterns, trends and relationships) – Need to combine diverse data types (no schema, uncurated, inconsistent syntax and semantics). Data should be a resource, not a load; existing data processing tools are not a good fit. (Samuel Madden, ISTC Director and Professor of EECS, MIT)
  5. Example: Web Analytics Large web enterprises: thousands of servers, millions of users, and terabytes per day of “click data”. Not just simple reporting: e.g., in real time, determine what users are likely to do next, what ad to serve them, or which user they are most similar to. Existing analytics systems either do not scale to the required volumes or do not provide the required sophistication. (Samuel Madden, ISTC Director and Professor of EECS, MIT)
  6. Example: Sensor Analytics Smartphone providers, tolling agencies, municipalities, insurance companies, doctors and businesses are capturing massive streams of video, position, acceleration, and other data from phones and other devices. This data needs to be stored, processed, and mined, e.g., to measure traffic, driving risk, or medical prognosis. (Samuel Madden, ISTC Director and Professor of EECS, MIT)
  7. Hadoop* in the Big Data Ecosystem [Diagram: in the era of data exchange, traditional business solutions connect to new analytics models for real-time value opportunities; cost-effective vertical solutions span eCommerce, healthcare, manufacturing, energy/scientific, and FSI; business processing innovation via in-memory DB, integrated analytics, and systems & appliances (e.g., EXALYTICS); big data compute platform topology across EP, EX, MIC and fabric]
  8. Agenda • Big data and the Hadoop* ecosystem • Intel university collaborations on big data research • Efficient in-memory implementations of MapReduce • Efficient graph algorithms for analytics • Intel’s efforts moving research to production
  9. Intel Activity Landscape on Big Data [Diagram spanning Intel Architecture, Software, Labs, IT and others: corporate data solution programs for big data and analytics services (healthcare, telco, …); trust broker (McAfee*); location-based service (Telmap); data usage visualization and end-user tools; HiTune* and other tools for Hadoop*; market sizing and segmentation (with Bain); Internet of Things / M2M analytics (Intel Labs and university collaborators); distributed machine learning and video analytics (university collaborators); business intelligence; Hadoop* data management and distributed processing; Hadoop distribution and service delivery; Hadoop performance and architecture; distributed data computing and storage platform architecture (Guavus); microservers; end-to-end data security and federated device architecture; compression and decompression IPs; large object storage]
  10. Agenda • Big data and the Hadoop* ecosystem • Intel university collaborations on big data research • Efficient in-memory implementations of MapReduce • Efficient graph algorithms for analytics • Intel’s efforts moving research to production
  11. Algorithms, Machines, People (AMPLab) • Adaptive/active machine learning and analytics • Massive and diverse data • Crowdsourcing / human computation • Cloud computing. All software released as BSD open source
  12. Berkeley Data Analysis System • Mesos*: resource management platform • SCADS: scale-independent storage systems • PIQL, Spark: processing frameworks [Stack diagram: query languages (Shark, Hive*, Pig / PIQL, …) on top of processing frameworks (Spark, Hadoop*, MPI, …), over the Mesos resource-management layer, over storage (SCADS, HDFS); components are a mix of AMPLab and third-party software]
  13. Data Center Programming: Spark • In-memory cluster computing framework for applications that reuse working sets of data – Iterative algorithms: machine learning, graph processing, optimization – Interactive data mining: an order of magnitude faster than disk-based tools • Key idea: RDDs (“resilient distributed datasets”) that can automatically be rebuilt on failure – Keep large working sets in memory – Fault tolerance mechanism based on “lineage”
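The lineage idea can be sketched in a few lines of plain Python (a toy illustration, not Spark's actual implementation): each dataset records its parent and the function that derived it, so a lost in-memory copy is rebuilt by replaying that derivation rather than restored from a replica.

```python
# Toy sketch of lineage-based fault tolerance (illustrative, not Spark code).
# Each dataset remembers how it was derived, so lost data can be recomputed.

class ToyRDD:
    def __init__(self, compute, parent=None):
        self.compute = compute      # function that produces this dataset
        self.parent = parent        # lineage pointer
        self.cache = None           # in-memory materialization

    def collect(self):
        if self.cache is None:      # rebuilt on demand after a "failure"
            data = self.parent.collect() if self.parent else None
            self.cache = self.compute(data)
        return self.cache

    def map(self, f):
        return ToyRDD(lambda data: [f(x) for x in data], parent=self)

base = ToyRDD(lambda _: list(range(5)))
doubled = base.map(lambda x: x * 2)
print(doubled.collect())   # [0, 2, 4, 6, 8]
doubled.cache = None       # simulate losing the cached partition
print(doubled.collect())   # recomputed from lineage: [0, 2, 4, 6, 8]
```

The point of the design choice: replaying a deterministic derivation is far cheaper than replicating every intermediate dataset across the cluster.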
  14. Spark: Motivation Complex jobs, interactive queries and online processing all need one thing that Hadoop* MR lacks: efficient primitives for data sharing. [Diagram: a multi-stage iterative job, a series of interactive mining queries, and a stream-processing pipeline all need to share data across steps]
  15. Data Transfer and Sharing in Hadoop* [Diagram: each iteration reads its input from HDFS and writes its result back to HDFS before the next iteration begins; each of several queries separately re-reads the same input from HDFS to produce its result]
  16. Spark: In-Memory Data Sharing [Diagram: the input is processed once into distributed memory; subsequent iterations and queries run against the in-memory working set instead of re-reading from disk]
  17. Introducing Shark • Spark + Hive* (the SQL in NoSQL) • Utilizes Spark’s in-memory RDD caching and flexible language capabilities: result reuse and low latency • Scalable, fault-tolerant, fast • Query-compatible with Hive
  18. Benchmarks: Query 1 (30 GB input table) SELECT * FROM grep WHERE field LIKE '%XYZ%';
  19. Benchmarks: Query 2 (5 GB input table) SELECT pagerank, pageURL FROM rankings WHERE pagerank > 10;
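Since Shark is query-compatible with Hive, these benchmark queries are ordinary SQL. As a small stand-alone illustration, the Query 2 filter can be run against a toy stand-in table using Python's sqlite3 (the table contents below are invented for the example; the real benchmark table is 5 GB):

```python
import sqlite3

# Tiny invented stand-in for the benchmark's "rankings" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rankings (pagerank INTEGER, pageURL TEXT)")
conn.executemany(
    "INSERT INTO rankings VALUES (?, ?)",
    [(3, "url_a"), (15, "url_b"), (42, "url_c")],
)

# The same filter as benchmark Query 2.
rows = conn.execute(
    "SELECT pagerank, pageURL FROM rankings WHERE pagerank > 10"
).fetchall()
print(rows)  # [(15, 'url_b'), (42, 'url_c')]
```

The query text is unchanged between engines; what differs is where the table lives (disk vs. Spark's distributed memory) and hence the latency.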
  20. Agenda • Big data and the Hadoop* ecosystem • Intel university collaborations on big data research • Efficient in-memory implementations of MapReduce • Efficient graph algorithms for analytics • Intel’s efforts moving research to production
  21. Data Parallelism (MapReduce) [Diagram: a large collection of independent subproblems partitioned across CPUs 1 through 4] Solve a huge number of independent subproblems
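The data-parallel pattern above can be sketched as a minimal single-process word count in Python (illustrative only, not Hadoop): independent map tasks emit key-value pairs, a shuffle groups them by key, and a reduce aggregates each group.

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce pattern (not Hadoop itself).

def map_phase(docs):
    for doc in docs:                  # each doc is an independent subproblem
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    groups = defaultdict(list)        # group intermediate values by key
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data", "big graphs", "data data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 3, 'graphs': 1}
```

Because every map task touches only its own document, the map phase parallelizes trivially across machines; only the shuffle requires communication.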
  22. MapReduce for Data-Parallel ML • Excellent for large data-parallel tasks: feature extraction, cross validation, computing sufficient statistics • Is there more to machine learning? Graph-parallel computation is the other half
  23. Machine Learning Pipeline: Data → Extract Features → Graph Formation → Structured Machine Learning Algorithm → Value. [Examples: face images → similar faces → belief propagation → face labels; docs → shared words → LDA → important words and doc topics; movie ratings → rated movies plus side info → collaborative filtering → movie recommendations]
  24. Parallelizing Machine Learning: in the same pipeline (Data → Extract Features → Graph Formation → Structured Machine Learning Algorithm → Value), feature extraction and graph ingress are mostly data-parallel, while the graph-structured computation is graph-parallel
  25. Addressing Graph-Parallel ML • Data-parallel (MapReduce): feature extraction, cross validation, computing sufficient statistics • Graph-parallel (a graph-parallel abstraction rather than MapReduce): graphical models (Gibbs sampling, belief propagation, variational optimization), semi-supervised learning (label propagation, CoEM), collaborative filtering (tensor factorization), data mining (PageRank, triangle counting)
  26. Example: Never-Ending Learner Project (CoEM) • Hadoop*: 95 cores, 7.5 hrs • GraphLab: 16 cores, 30 min (6x fewer CPUs, 15x faster) • Distributed GraphLab: 32 EC2 machines, 80 secs (0.3% of the Hadoop time) [Chart: speedup vs. number of CPUs, approaching optimal]
  27. Example: PageRank (40M webpages, 1.4 billion links) • Hadoop*: 5.5 hrs • Twister*: 1 hr • GraphLab: 8 min
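For reference, the computation being benchmarked is the classic PageRank power iteration. A minimal Python sketch on an invented three-page graph (illustrative only; the systems above run this at billions-of-links scale):

```python
# Minimal PageRank power iteration on a toy graph (illustrative only).

def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Base rank from random jumps, plus rank flowing along links.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, outs in links.items():
            for dst in outs:
                new[dst] += damping * rank[src] / len(outs)
        rank = new
    return rank

# Toy 3-page web: a -> b, a -> c, b -> c, c -> a
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # 'c' accumulates the most rank
```

Each iteration touches every link, which is why a graph-parallel engine that keeps the graph in memory (GraphLab) so decisively beats re-reading it from disk each pass (Hadoop).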
  28. Agenda • Big data and the Hadoop* ecosystem • Intel university collaborations on big data research • Efficient in-memory implementations of MapReduce • Efficient graph algorithms for analytics • Intel’s efforts moving research to production
  29. Intel’s Efforts on Hadoop* • Intel® Distribution for Apache Hadoop* – Performance, security and management – Downloadable from • Intel’s open source initiatives for Hadoop – HiBench: comprehensive Hadoop benchmark suite – Project Panthera: efficient support of standard SQL features on Hadoop – Project Rhino: enhanced data protection for the Apache Hadoop ecosystem – Graph Builder: scalable graph construction using Hadoop
  30. Using Spark/Shark for In-memory, Real-time Data Analysis • Use case 1: ad-hoc & interactive queries – Interactive queries (exploratory ad-hoc queries, BI charting & mining) – Similar projects: Google* Dremel, Facebook* Peregrine, Cloudera* Impala, Apache* Drill, etc. (several seconds latency) – Use Shark/Spark to achieve close to sub-second latency for interactive queries • Use case 2: in-memory, real-time analysis – Iterative data mining, online analysis (e.g., loading a table into memory for online analysis, caching intermediate results for iterative machine learning) – Similar projects: Google PowerDrill – Use Shark/Spark to reliably load data into distributed memory for online analysis
  31. Using Spark/Shark for In-memory, Real-time Data Analysis • Use case 3: stream processing – Streaming analysis, CEP (e.g., intrusion detection, real-time statistics) – Similar projects: Twitter* Storm, Apache* S4, Facebook* Puma – Use Spark Streaming for stream processing: better reliability, and a unified framework and application for offline, online & streaming analysis • Use case 4: graph-parallel analysis & machine learning – Graph algorithms, machine learning (e.g., social network analysis, recommendation engines) – Similar projects: Google* Pregel, CMU GraphLab* – Use Bagel (Pregel on Spark) for graph-parallel analysis & machine learning on Spark
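Bagel follows Pregel's vertex-centric model: in each superstep a vertex reads its incoming messages, updates its value, and sends messages along its edges, until all vertices go idle. A plain-Python sketch of that loop (not the Bagel API; the graph and helper are invented for illustration), computing the minimum vertex id in each connected component:

```python
# Toy vertex-centric ("think like a vertex") superstep loop in the Pregel
# style that Bagel implements on Spark. Not the Bagel API -- a plain-Python
# sketch. Finds the minimum vertex id reachable in each connected component.

def min_id_components(edges, num_vertices, max_steps=10):
    value = {v: v for v in range(num_vertices)}   # each vertex starts with its own id
    # Superstep 0: every vertex sends its id to its neighbors.
    inbox = {v: [] for v in range(num_vertices)}
    for u, nbrs in edges.items():
        for n in nbrs:
            inbox[n].append(value[u])
    for _ in range(max_steps):
        outbox = {v: [] for v in range(num_vertices)}
        changed = False
        for v in range(num_vertices):
            best = min(inbox[v], default=value[v])
            if best < value[v]:                   # vertex update step
                value[v] = best
                changed = True
                for nbr in edges.get(v, []):      # message passing step
                    outbox[nbr].append(best)
        inbox = outbox
        if not changed:                           # all vertices vote to halt
            break
    return value

# Two components: {0, 1, 2} and {3, 4}
edges = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
components = min_id_components(edges, 5)
print(components)  # {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}
```

In Bagel the same per-vertex function runs over RDD partitions of the vertex set, with Spark's shuffle delivering the messages between supersteps.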
  32. Summary • MapReduce as implemented in Hadoop* is extremely useful, but: – In-memory implementations show serious advantages – Graph algorithms may be more suitable for the problem at hand • Intel continues to work with university researchers • Intel works to move research results into production environments
  33. Call to Action • Use Intel research results in your own big data efforts! • Work with us on next-gen, in-memory, real-time analysis using Spark/Shark
  34. Legal Disclaimer
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
• A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
• Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
• The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
• Intel product plans in this presentation do not constitute Intel plan-of-record product roadmaps. Please contact your Intel representative to obtain Intel's current plan-of-record product roadmaps.
• Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to:
• Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
• Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to:
• Intel, Sponsors of Tomorrow and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
• *Other names and brands may be claimed as the property of others.
• Copyright © 2013 Intel Corporation.
  35. Legal Disclaimer Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804
  36. Risk Factors The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the company's expectations. Demand could be different from Intel's expectations due to factors including changes in business and economic conditions; customer acceptance of Intel's and competitors' products; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast.
Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel's response to such actions; and Intel's ability to respond quickly to technological developments and to incorporate new features into its products. The gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and intangible assets. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's products and the level of revenue and profits. Intel's results could be affected by the timing of closing of acquisitions and divestitures. Intel's current chief executive officer plans to retire in May 2013 and the Board of Directors is working to choose a successor. The succession and transition process may have a direct and/or indirect effect on the business and operations of the company.
In connection with the appointment of the new CEO, the company will seek to retain our executive management team (some of whom are being considered for the CEO position), and keep employees focused on achieving the company's strategic goals and objectives. Intel's results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues, such as the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel's ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other factors that could affect Intel's results is included in Intel's SEC filings, including the company's most recent Form 10-Q, report on Form 10-K and earnings release. Rev. 1/17/13