Xldb2011 wed 1415_andrew_lamb-buildingblocks
Presentation Transcript

    • Building Blocks for Large Analytic Systems
      Andrew Lamb (alamb@vertica.com), Vertica Systems, an HP Company
      Oct 19, 2011
      VERTICA CONFIDENTIAL, DO NOT DISTRIBUTE
    • Outline
      – The emergence of new hardware, software, and data volumes is driving a wave of new analytic systems, in spite of mature existing systems
      – Common architectural choices when building these new systems
      – Choices we made when building the Vertica Analytic Database
      – Broadly applicable (and widely used)
    • Architectural Changes
      – Driven by changing:
        – Hardware: (really) cheap x86_64 servers, high capacity disk, high speed networking
        – Software: Linux, Open Source
        – Requirements: Data Deluge
      – Hard for legacy software: compatibility is brutal; it is Linux x86_64 today, versus Solaris, HPUX, IRIX, etc. 10 years ago
      – [Chart: system "better-ness" over time]
      – Storage Organization and Processing Principles
    • [Storage + Processing] MPP (no shared disk)
      – Any modern system should run on a cluster of nodes and scale up/down
      – Mid-range servers are really cheap (to rent or own)
      – Aggregate available resources are enormous and scalable: I/O bandwidth (disk and network), cores, memory, etc.
      – [Diagram: an SMP server (CPUs, system bus, main memory/cache, shared disk) versus MPP servers connected over the network, each with local disk]
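      A minimal, hypothetical Python sketch of the shared-nothing idea on this slide (invented class and data, not Vertica code): each node owns only its local slice, a query fans out to every node, and the coordinator merges the per-node partial results.

        class Node:
            def __init__(self, rows):
                self.local_rows = rows        # this node's slice, on local disk

            def scan(self, predicate):
                # Each node scans only what it stores; no shared disk is touched.
                return [r for r in self.local_rows if predicate(r)]

        def run_query(nodes, predicate):
            # Coordinator: scatter the query, gather and merge per-node results.
            results = []
            for node in nodes:                # a real system does this in parallel
                results.extend(node.scan(predicate))
            return results

        cluster = [Node([("east", 10), ("east", 7)]),
                   Node([("west", 3)]),
                   Node([("east", 5), ("west", 9)])]
        print(run_query(cluster, lambda r: r[0] == "east"))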
    • [Storage] Keep Your Data Sorted
      – For large data sets, extra indexes are expensive to maintain
      – Much better to index the data itself by sorting it
      – Easy to find what you are looking for; reduces seeks
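      A small sketch of what sorted storage buys you (illustrative values, not from the talk): a range lookup on sorted data is two binary searches plus one contiguous slice, with no secondary index and no extra seeks.

        import bisect

        timestamps = sorted([17, 3, 42, 8, 25, 31, 11])   # e.g. a sorted column of event times

        def range_lookup(sorted_values, lo, hi):
            start = bisect.bisect_left(sorted_values, lo)
            end = bisect.bisect_right(sorted_values, hi)
            return sorted_values[start:end]                # one contiguous read of the range

        print(range_lookup(timestamps, 10, 30))            # [11, 17, 25]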
    • [Storage] Distribute Data by Value (not chunks)
      – "Sharding": distribute data so you can easily find it again (not round robin)
      – Segmenting in the analytics layer simplifies the app layer
      – Round-robin placement won't scale for computation: you need to swizzle data around the cluster to do the most useful thing
      – Replication for high availability: not logs
      – [Diagram: segments A, B, C distributed across three nodes, with each segment's replica placed on a neighboring node]
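      A hedged sketch of value-based placement (the hash function and node count are invented for illustration): the hash of the segmentation key picks the owning node, so the cluster always knows where a given value lives, and a copy goes to the next node for availability.

        import hashlib

        NUM_NODES = 3

        def owning_node(key):
            # Deterministic: the same key always lands on the same node.
            digest = hashlib.sha1(str(key).encode()).hexdigest()
            return int(digest, 16) % NUM_NODES

        def placement(key):
            primary = owning_node(key)
            replica = (primary + 1) % NUM_NODES    # buddy replica for availability
            return primary, replica

        for customer_id in ("A", "B", "C"):
            print(customer_id, placement(customer_id))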
    • [Storage] Store the Data in Columns
      – Rarely do all fields of a data set appear in analytic queries
      – You really don't want to waste I/O on data you don't need
      – Columns let you be clever about applying predicates individually, further reducing I/O
      – Not appropriate for row lookup or transaction processing systems
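      A toy illustration of the layout difference (made-up table): in the columnar form, summing one field touches only that field's array instead of every field of every row.

        rows = [
            {"id": 1, "region": "east", "price": 10.0},
            {"id": 2, "region": "west", "price": 3.5},
            {"id": 3, "region": "east", "price": 7.25},
        ]

        # Row store: sum(price) still drags whole rows through I/O.
        total_row_store = sum(r["price"] for r in rows)

        # Column store: each field is its own contiguous array.
        columns = {
            "id":     [1, 2, 3],
            "region": ["east", "west", "east"],
            "price":  [10.0, 3.5, 7.25],
        }
        total_column_store = sum(columns["price"])   # only the price column is read

        print(total_row_store, total_column_store)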
    • [Storage] Write it Once, Don't Modify
      – Physical disk objects should be write-once (append only)
      – The system should present the illusion of mutability
      – Immutable storage drastically simplifies coherence in a distributed system
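      A minimal sketch of the write-once idea with the illusion of mutability (invented class, not Vertica's storage format): rows are only appended, and an "update" appends the new version while a delete set hides the old one at read time.

        class AppendOnlyTable:
            def __init__(self):
                self.segments = []        # immutable, append-only storage
                self.deleted = set()      # positions masked out at read time

            def insert(self, row):
                self.segments.append(row)
                return len(self.segments) - 1

            def update(self, pos, new_row):
                self.deleted.add(pos)     # old version is hidden, never overwritten
                return self.insert(new_row)

            def scan(self):
                return [r for i, r in enumerate(self.segments) if i not in self.deleted]

        t = AppendOnlyTable()
        p = t.insert({"id": 1, "qty": 5})
        t.update(p, {"id": 1, "qty": 8})
        print(t.scan())                   # [{'id': 1, 'qty': 8}]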
    • [Processing] Use Large Sequential IOs
      – Spinning disks are very good at large sequential IOs
      – You really don't want to whipsaw the read head (another reason why secondary indexes are bad)
      – Especially useful with sorted data
      – [Chart: Random vs Sequential Reads, MB/s]
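      A small self-contained sketch of the two access patterns being contrasted (the throwaway file and sizes are made up for illustration): a few large sequential reads versus many tiny reads at random offsets.

        import os, random, tempfile

        # Build a small throwaway file so the sketch is self-contained.
        path = os.path.join(tempfile.gettempdir(), "seqio_demo.bin")
        with open(path, "wb") as f:
            f.write(os.urandom(4 * 1024 * 1024))          # 4 MiB of dummy data

        def sequential_scan(path, chunk=1024 * 1024):
            # Large, contiguous requests keep a spinning disk streaming.
            total = 0
            with open(path, "rb") as f:
                while buf := f.read(chunk):
                    total += len(buf)
            return total

        def random_reads(path, n=1000, size=4096):
            # The pattern to avoid: tiny reads at random offsets whipsaw the read head.
            file_size = os.path.getsize(path)
            total = 0
            with open(path, "rb") as f:
                for _ in range(n):
                    f.seek(random.randrange(file_size - size))
                    total += len(f.read(size))
            return total

        print(sequential_scan(path), random_reads(path))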
    • [Processing] Trade CPU for I/O Bandwidth
      – Use very aggressive compression, even if it seems (or is) wasteful; we keep getting more cores
      – Example: data-type-specific encoding, then LZO, before actually writing to disk
      – Hide the additional latency with execution pipeline parallelism
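      A sketch of spending CPU to save I/O, assuming a sorted, low-cardinality column: a type-specific encoding (run-length) followed by a general-purpose compressor. zlib stands in here for the LZO mentioned on the slide.

        import json, zlib

        def rle_encode(values):
            # Type-specific step: collapse runs of repeated values into (value, count).
            runs = []
            for v in values:
                if runs and runs[-1][0] == v:
                    runs[-1][1] += 1
                else:
                    runs.append([v, 1])
            return runs

        column = ["east"] * 50_000 + ["north"] * 30_000 + ["west"] * 20_000  # sorted column

        raw = json.dumps(column).encode()
        encoded = zlib.compress(json.dumps(rle_encode(column)).encode())

        print(len(raw), "->", len(encoded), "bytes actually written to disk")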
    • [Processing] Mess with the Data where it Lives
      – Bring your processing to the data, not the data to the processing
      – The system should push computation down close to the data (even if the calculation turns out to be redundant)
      – Example: multi-phase aggregation, where each phase tries to aggregate intermediates before passing them up the memory/node hierarchy
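      A minimal sketch of the multi-phase aggregation example (invented data): each node collapses its own raw rows into per-group partials, so only small intermediates travel up the node hierarchy to the coordinator.

        from collections import Counter

        def local_phase(local_rows):
            # Runs where the data lives: collapse raw rows into per-group partials.
            partial = Counter()
            for region, amount in local_rows:
                partial[region] += amount
            return partial

        def final_phase(partials):
            # Runs on the coordinator: merge the already-small partial aggregates.
            total = Counter()
            for p in partials:
                total.update(p)           # Counter.update adds counts
            return dict(total)

        node_data = [
            [("east", 10), ("east", 7), ("west", 3)],   # node 1's local rows
            [("west", 9), ("east", 5)],                  # node 2's local rows
        ]
        print(final_phase(local_phase(rows) for rows in node_data))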
    • [Processing] Declarative & Extendable
      – Give users a declarative query language for most tasks
      – Writing procedural code for simple queries is wasteful
      – Provide procedural extensions for complex analysis
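      A sketch of the declarative point using sqlite3 purely as a stand-in engine (Vertica's own interface is SQL with procedural extensions, not shown here): the same result expressed as one declarative statement versus a hand-written procedural loop.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
        conn.executemany("INSERT INTO sales VALUES (?, ?)",
                         [("east", 10), ("east", 7), ("west", 3)])

        # Declarative: state what you want; the engine picks the plan.
        declarative = conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()

        # Procedural: the same result written by hand -- wasteful for simple queries.
        procedural = {}
        for region, amount in conn.execute("SELECT region, amount FROM sales"):
            procedural[region] = procedural.get(region, 0) + amount

        print(declarative, procedural)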
    • Conclusion
      – The emergence of new hardware, software, and data volumes implies certain architectural choices in modern big data analytic systems
      – These choices are commonly observed in new systems
      – Vertica (unsurprisingly) features all of them
    • Introducing Community Edition
      – Free version of Vertica, to help steward better analytics and the democratization of data; closed source, but open access!
      – All of the features and advanced analytics of Vertica Enterprise Edition
      – Seamless upgrade to Enterprise Edition
      – Limited to 1 TB of raw data on a 3-node hardware cluster
      – Revamping the Community area of Vertica's website for knowledge sharing, third party tools, and code sharing
      – Launching an academic and non-profit research use program as well
      – Sign up at: www.vertica.com/community