Hadoop World Vertica
Hadoop World 2009 Presentation on the Vertica Hadoop Connector

Transcript

  • 1. Vertica Integration with Apache Hadoop (Hadoop World NYC 2009)
    [Diagram: HDFS feeding a Hadoop compute cluster of Map tasks into Reduce]
  • 2. Vertica® Analytic Database
    • MPP columnar architecture
    • Second to sub-second queries
    • 300 GB/node load rates
    • Scales to hundreds of TBs
    • Standard ETL & Reporting Tools
    www.vertica.com
  • 3. What do people do with Hadoop?
    • Transform data
    • Archive data
    • Look for Patterns
    • Parse Logs
  • 4. Big Data comes in Three Forms
    • Unstructured
      • Images, sound, video
    • Semi-structured
      • Logs, data feeds, event streams
    • Fully Structured
      • Relational tables
  • 5. Availability, Scalability and Efficiency
    • … how fast can you go from data to answers?
    • Unstructured data needs to be analyzed to make sense.
    • Semi-structured data is parsed according to a spec (or by brute force).
    • Structured data can be optimized for ad-hoc analysis.
  • 6. Hadoop / Vertica
    • Distributed processing framework (MapReduce)
    • Distributed storage layer (HDFS)
    • Vertica can be used as a data source and target for MapReduce
    • Data can also be moved between Vertica and HDFS (sqoop)
    • Hadoop talks to Vertica via custom Input and Output Formatters
  • 7. Hadoop / Vertica: Vertica serves as a structured data repository for Hadoop
    [Diagram: Hadoop compute cluster (Map tasks into Reduce) backed by Vertica]
  • 8. Hadoop / Vertica
    • Vertica’s input formatter takes a parameterized query
    • Relational Map operations can be pushed down to the database
    • Vertica’s output formatter takes an existing table name or a description
    • Vertica output tables can be optimized directly from Hadoop
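As a rough illustration of what the parameterized input query buys, the sketch below contrasts a read-everything query with a per-split parameterized query in which the predicate is pushed down to the database. The table and column names (ticks, symbol, price) are hypothetical, invented for the example, and the string building stands in for what the connector's input formatter does internally.

```python
# Illustrative only: a mapper-side filter folded into the SQL that each
# input split runs, so Vertica evaluates the predicate ("map push down").
# Table/column names (ticks, symbol, price) are hypothetical.

def plain_query(table):
    """Without push-down: read every row; the mapper must filter."""
    return f"SELECT symbol, price FROM {table}"

def pushed_down_query(table, symbol):
    """With push-down: one query per input split; the predicate runs in Vertica."""
    return f"SELECT symbol, price FROM {table} WHERE symbol = '{symbol}'"

# Each input split is handed exactly one parameter value:
splits = ["AAPL", "GOOG", "IBM"]
queries = [pushed_down_query("ticks", s) for s in splits]
```

In the real connector the parameter substitution is handled by the input formatter; this only shows the shape of the queries it would issue per split.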
  • 9. Hadoop / Vertica: Federate multiple Vertica database clusters with Hadoop
    [Diagram: multiple Hadoop compute clusters (Map tasks into Reduce), one per federated Vertica cluster]
  • 10. What is the Interface?
    • Input Formatter
      • Query specifies which data to read
      • Query can be parameterized (map push-down)
      • Each input split gets one parameter
      • Or, the input can be split with ORDER BY and LIMIT (slower)
    • Output Formatter
      • Job specifies format for output table
      • Vertica converts reduced output into trickle loads
      • Vertica can optimize new tables
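The slower ORDER BY / LIMIT splitting mode mentioned above can be sketched as follows. This is a hand-rolled illustration of the idea, not the connector's actual code; the use of OFFSET, and the `clicks`/`ts` names, are assumptions for the example.

```python
# Hypothetical sketch: when the query cannot be parameterized, the input
# can instead be cut into ordered windows, one per input split. The OFFSET
# clause and the table/column names (clicks, ts) are illustrative assumptions.

def window_queries(base_query, order_col, total_rows, num_splits):
    """Yield one windowed query per input split covering all rows."""
    rows_per_split = -(-total_rows // num_splits)  # ceiling division
    for i in range(num_splits):
        yield (f"{base_query} ORDER BY {order_col} "
               f"LIMIT {rows_per_split} OFFSET {i * rows_per_split}")

qs = list(window_queries("SELECT * FROM clicks", "ts", 1000, 4))
# four queries, each covering a 250-row window
```

Because every split re-sorts the full result to find its window, this mode costs more in the database than handing each split a single parameter value, which is why the slide calls it slower.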
  • 11. Some Hadoop / Vertica Applications
    • Elastic Map Reduce parsing and loading CloudFront Logs
    • Tickstore algorithm with map push down
    • Analyze time series
    • Sessionize click streams
    • Parse and load logs
  • 12. Basic Example
    • Elastic Map Reduce parsing and loading CloudFront Logs
    • Mapper reads from S3 CloudFront Logs
    • Parses into records, transmits to reducer
    • Reducer loads into Vertica
    • All done with streaming API
    ~10 lines of Python; limitless SQL
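The streaming-API mapper described on this slide might look roughly like the sketch below. The tab-separated field positions (date, time, uri) are invented for illustration and are not the actual CloudFront log layout; likewise, a real reducer would load the parsed records into Vertica rather than just emitting them.

```python
# Rough sketch of a Hadoop streaming mapper for CloudFront-style logs.
# Field positions are illustrative assumptions, NOT the real log format.
import sys

def map_line(line):
    """Parse one raw log line into a (key, record) pair, or None to skip."""
    fields = line.rstrip("\n").split("\t")
    if not fields or fields[0].startswith("#") or len(fields) < 3:
        return None  # skip W3C-style header/comment lines and short rows
    date, time_, uri = fields[0], fields[1], fields[2]
    return uri, f"{date}\t{time_}\t{uri}"

def mapper(stream=sys.stdin, out=sys.stdout):
    """Emit tab-separated key/value pairs, as Hadoop streaming expects."""
    for line in stream:
        kv = map_line(line)
        if kv is not None:
            out.write(kv[0] + "\t" + kv[1] + "\n")
```

The reducer side of the pair would read these key/value lines from stdin and issue loads into Vertica, which is what keeps the whole job within a few lines of scripting.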
  • 13. Advanced Example
    • Tickstore algorithm with map push down
    • Input formatter queries Vertica using map push down
    • Identity Mapper passes through to reducer
    • Reducer runs proprietary algorithm
      • moving average, correlations, secret sauce
    • Results are stored in a new table for further analysis
    • Vertica optimizes the new table
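The "secret sauce" in the real algorithm is of course not public. As a stand-in, here is a minimal trailing moving average of the kind the reducer might compute over one symbol's price series; the window size and the plain-list input are assumptions for illustration only.

```python
# Hypothetical stand-in for the reducer's analytic step: a trailing
# moving average over one ticker's prices. Window size is an assumption.
from collections import deque

def moving_average(prices, window=3):
    """Return one trailing-window average per input price."""
    buf, out = deque(maxlen=window), []
    for p in prices:
        buf.append(p)                    # deque drops the oldest price itself
        out.append(sum(buf) / len(buf))  # average over at most `window` points
    return out
```

In the job described on the slide, results like these would land in a new Vertica table via the output formatter, ready for further SQL analysis.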
  • 14. How to get started
    • Get a copy of hadoop from Apache or Cloudera
    • Get Vertica from www.vertica.com, via Amazon or RightScale, or as a VM
    • Grab the formatter and Vertica JDBC drivers from vertica.com/MapReduce
    • Included in contrib as of Hadoop 0.21.0 (MAPREDUCE-775)
    • Put the jars in hadoop/lib
    • Run your Hadoop/Vertica job
  • 15. Future Directions and Questions
    • Archiving information lifecycle (sqoop)
    • Invoking Hadoop jobs from Vertica
    • Joining Vertica data mid job
    • Using Vertica for (structured) transient job data
    • [email_address]
    • Vertica.com/MapReduce