  • Slides 3-26 outline what you need to know about the sector to conduct any kind of selection process. They’re not meant to be read separately, but rather just to illustrate the presentation. Slides 27-39 have tips for the process itself. They’re meant as reference take-aways. We’ll discuss them selectively as time permits.
  • Disk speed dominates everything. The problem is this – disks simply don’t spin very fast. If they did, they’d fly off the spindle or something. The very first disk drives, introduced in 1956 by IBM, rotated 1,200 times per minute. Today’s top-end drives spin only 15,000 times per minute. That’s a 12.5-fold increase in over 40 years. Most other metrics of computer performance grow 12.5-fold every 7 years or so. That’s just Moore’s Law. A two-year doubling, which turns out to be more accurate than other statements of the law, works out to an 8-fold increase in 6 years, or roughly an 11-fold increase in 7. There’s just a huge, huge difference.
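    A quick arithmetic check of that comparison, as a minimal Python sketch (the RPM figures and the two-year doubling period are the ones cited in this note):
      # Disk rotational speed: total improvement since 1956
      first_rpm = 1200        # IBM's first disk drive (1956)
      top_rpm = 15000         # today's top-end drives
      print(top_rpm / first_rpm)   # 12.5
      # Moore's-Law-style growth at a two-year doubling
      print(2 ** (6 / 2))          # 8.0 -- 8-fold in 6 years
      print(2 ** (7 / 2))          # ~11.3 -- roughly 11-fold in 7 years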
  • It’s actually hard to get a single firm number for the difference between disk and RAM access times. Disk access times are well known; they’re advertised a lot, for one thing. But RAM access times are harder. A big part of the problem is that they depend heavily on architecture; access isn’t access isn’t access. There are multiple levels of cache, for example. Another problem is that RAM isn’t RAM isn’t RAM. Anyhow, listed access times tend to be in the 5 to 7.5 nanosecond range, so that’s what I’m going with. One thing we can compute is a very hard lower bound on disk random seek times. If a seek is random, then on average the disk has to spin at least halfway around before the data passes under the head. And we know exactly how long that takes: 2 milliseconds. There’s just no way random disk seeks will get any faster than that, except to the extent disk rotation resumes its slow creep forward. “Tiering” basically means “use of Level 2 – i.e., on-processor – cache.”
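    The hard lower bound mentioned here, worked through as a minimal Python sketch (15,000 RPM and the ~5 ns RAM figure are the numbers used elsewhere in this deck):
      rpm = 15000
      rotation = 60.0 / rpm                  # 0.004 s, i.e. 4 ms per full rotation
      avg_rotational_delay = rotation / 2    # a random seek waits half a turn on average
      print(avg_rotational_delay)            # 0.002 s, i.e. 2 ms
      ram_access = 5e-9                      # ~5 nanoseconds
      print(avg_rotational_delay / ram_access)   # 400,000 -- the "1,000,000:1"-order gap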
  • I’ve been watching the DBMS industry – especially the relational vendors – work on performance for over 25 years now. And I’m in awe at what they’ve accomplished. It’s some of the finest engineering in the software industry. With OLTP performance largely a solved problem, most of that work for the past decade has been in the area of OLAP. And improving OLAP performance basically means decreasing OLAP I/O. Perhaps the most basic thing they try to do is minimize the amount of data returned. Since the end result is what the end result is, this means minimizing the amount returned at intermediate stages of the query execution process. That’s what cost-based optimizers are all about. Baked into the architecture of disk-centric DBMS is something even more basic: they try to minimize index accesses. Naively, if you’re selecting from 2^30 – i.e., roughly 1 billion – records, there might be 30 steps as you walk through a binary tree. By dividing indices into large pages, this is reduced – at the cost of a whole lot of sorting within the block at each step. Layered on top are ever more specialized indexing structures. For example, if it seems clear that a certain join will be done frequently, an index can be built that essentially bakes in that join’s results. Of course, this also reduces the amount of data returned in the intermediate step, admittedly at the cost of index size. Anyhow, it’s a very important technique. And that’s not the only kind of precalculation. Preaggregation is at the heart of disk-centric MOLAP architectures. Materialized views bring MOLAP benefits to conventional relational processing. These are all more or less logical techniques, although the optimizer stuff is on the boundary between logical and physical. There also are approaches that are more purely physical. Most basically, much like the index situation, data is returned in pages. It turns out to be cheaper to always be wasteful and send a whole block of sequential data back than it is to send back only what is actually needed. Beyond that, efforts are made to understand what data will be requested together, and cluster it so that sequential reads can take the place of truly random I/O. And that leads to the most powerful solution of all – do everything in RAM!! If you always initialize by reading in the whole database, in principle you’re done with ALL your disk I/O for the day! Oh, there may be reasons to write things, such as the results of queries, but basically you’ve made your disk speed problems totally go away. There’s a price, of course, mainly and most obviously in the RAM you need to buy, and probably the CPU driving that RAM. But by investing in one area, you’re making a big related problem go away – if, of course, you can afford all that silicon.
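    The index-page arithmetic from this note, as a minimal Python sketch (the fanout of 1,024 keys per page is a made-up illustrative figure, not any product's actual page size):
      import math
      rows = 2 ** 30                             # ~1 billion records
      print(math.log2(rows))                     # 30.0 -- steps through a plain binary tree
      fanout = 1024                              # hypothetical keys per index page
      print(round(math.log(rows, fanout), 3))    # 3.0 -- page-sized levels, so ~3 index reads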
  • This is the model for appliances. It’s also the model for software-only configurations that compete with appliances. Think IBM BCUs = Balanced Configuration Units, or various Oracle reference configurations. The pendulum shifts back and forth as to whether there are tight “recommended configurations” for non-appliance offerings. Row-based vendors are generally pickier about their hardware configurations than columnar ones.
  • Kickfire is the only custom-chip-based vendor of note. Netezza’s FPGAs and PowerPC processors aren’t, technically, custom. But they’re definitely unusual. Oracle and DATAllegro (pre-Microsoft) like Infiniband. Other vendors like 10-gigabit Ethernet. Others just use lots and lots of 1-gigabit switches. Teradata, long proprietary, is now going in a couple of different networking directions.
  • This slide is included at this point mainly for the golly-gee-whiz factor. 
  • Columnar isn’t columnar isn’t columnar; each product is different. The same goes for row-based. Still, this categorization is the point from which to start.
  • Oracle and SQL Server are each a single product meant to serve both OLTP and analytics. Each of the main versions of DB2 is something like that too. Sybase, however, separated its OLTP and analytic product lines in the mid-1990s.
  • Even when you can make this stuff work at all, it’s hard. That’s a big reason why “disruptive” new analytic DBMS vendors have sprung up.
  • The advantage of hash distribution is that if your join happens to involve the hash key, a lot of the work is already done for you. The disadvantage can be a bit of skew. The advantage usually wins out. Almost every vendor (Kognitio is an exception) encourages hash distribution. Oracle Exadata is an exception too, for different reasons.
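    A minimal sketch of why co-location by hash key helps -- the node count, hash function, and tables below are made up for illustration and are not any particular vendor's scheme:
      import zlib
      NODES = 4
      def node_for(key):
          # Hash the distribution key; equal keys always land on the same node
          return zlib.crc32(key.encode()) % NODES
      orders = [("cust_42", 100.0), ("cust_7", 25.0), ("cust_42", 60.0)]
      customers = [("cust_42", "Alice"), ("cust_7", "Bob")]
      # With both tables hashed on customer id, each node joins its own slice locally,
      # with no reshuffling across the interconnect. Skew is the risk: one very
      # popular key piles all of its rows onto a single node.
      for key, _ in orders + customers:
          print(key, "->", node_for(key))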
  • Fixed configurations – including but not limited to appliances – are more important in row-based MPP than in columnar MPP systems. Oracle Exadata, Teradata, and Netezza are the most visible examples, but another one is IBM’s BCUs.
  • Sybase IQ is the granddaddy, but it’s not MPP. SAND is another old one, but it’s focused more on archiving now. Vertica is a quite successful recent start-up, with >10X the known customers of ParAccel (published or NDA). Infobright and Kickfire are MySQL storage engines. Kickfire is also an appliance. Exasol is very memory-centric. So is ParAccel’s TPC-H submission. So is SAP BI Accelerator, but unlike the others it’s not really a DBMS. MonetDB is open source.
  • The big benefit of columnar is at the I/O bottleneck – you don’t have to bring back the whole row. But it also tends to make compression easier. Naïve columnar implementations are terrible at update/load. Any serious commercial product has done engineering work to get around that. For example, Vertica – which is probably the most open about its approach -- pretty much federates queries between disk and what almost amounts to a separate in-memory DBMS.
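    A minimal sketch of the column-store idea in this note -- one array per column, so a query over one column reads only that column, and repetitive columns compress well. The tiny table and the run-length encoding are purely illustrative, not Vertica's or anyone else's actual format:
      from itertools import groupby
      rows = [("2009-01-01", "US", 100.0),
              ("2009-01-01", "US", 250.0),
              ("2009-01-01", "DE", 80.0)]
      columns = {"date":    [r[0] for r in rows],
                 "country": [r[1] for r in rows],
                 "amount":  [r[2] for r in rows]}
      print(sum(columns["amount"]))      # reads one column, not whole rows: 430.0
      def rle(values):
          # Simple run-length encoding: (value, run length) pairs
          return [(v, len(list(g))) for v, g in groupby(values)]
      print(rle(columns["date"]))        # [('2009-01-01', 3)]
      print(rle(columns["country"]))     # [('US', 2), ('DE', 1)]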
  • I.e., respectively: OLTP system and data warehouse integrated; separate EDW (Enterprise Data Warehouse); customer-facing data mart, which hence requires OLTP-like uptime; 100+ terabytes or so; great speed on terabyte-scale data sets at low per-terabyte TCO (counting user data).
  • Here starts the how-to.
  • Databases grow naturally, as more transactions are added over time. Cheaper data warehousing also encourages the retention of more detail, and the addition of new data sources. All three factors boost database size. Users can be either humans or other systems. (Both, in fact, are included in the definition of “user” on the Oracle price list.) Cheap data warehousing also leads to a desire for lower latency, often without clear consideration of the benefits of same.
  • Nobody ever overestimates their need for storage. But people do sometimes overestimate their need for data immediacy.

Presentation Transcript

    • How to Select an Analytic DBMS
      • DRAFT!!
    • by
    • Curt A. Monash, Ph.D.
    • President, Monash Research
    • Editor, DBMS2
    • contact@monash.com
    • http://www.monash.com
    • http://www.DBMS2.com
  • Curt Monash
    • Analyst since 1981, own firm since 1987
      • Covered DBMS since the pre-relational days
      • Also analytics, search, etc.
    • Publicly available research
      • Blogs, including DBMS2 (www.DBMS2.com -- the source for most of this talk)
      • Feed at www.monash.com/blogs.html
      • White papers and more at www.monash.com
    • User and vendor consulting
  • Our agenda
    • Why are there such things as specialized analytic DBMS?
    • What are the major analytic DBMS product alternatives?
    • What are the most relevant differentiations among analytic DBMS users?
    • What’s the best process for selecting an analytic DBMS?
  • Why are there specialized analytic DBMS?
    • General-purpose database managers are optimized for updating short rows …
    • … not for analytic query performance
    • 10-100X price/performance differences are not uncommon
    • At issue is the interplay between storage, processors, and RAM
  • Moore’s Law, Kryder’s Law, and a huge exception
    • Growth factors:
    • Transistors/chip: >100,000 since 1971
    • Disk density: >100,000,000 since 1956
    • Disk speed: 12.5 since 1956
    • The disk speed barrier dominates everything!
  • The “1,000,000:1” disk-speed barrier
    • RAM access times ~5-7.5 nanoseconds
      • CPU clock cycle <1 nanosecond
      • Interprocessor communication can be ~1,000X slower than on-chip
    • Disk seek times ~2.5-3 milliseconds
      • Limit = ½ rotation
      • i.e., 1/30,000 minutes
      • i.e., 1/500 seconds = 2 ms
    • Tiering brings it closer to ~1,000:1 in practice, but even so the difference is VERY BIG
  • Software strategies to optimize analytic I/O
    • Minimize data returned
      • Classic query optimization
    • Minimize index accesses
      • Page size
    • Precalculate results
      • Materialized views
      • OLAP cubes
    • Return data sequentially
    • Store data in columns
    • Stash data in RAM
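    One of these strategies -- precalculate results -- sketched in miniature below. The tiny fact table and aggregate are made up, and a real materialized view or OLAP cube is of course maintained by the DBMS itself:
      from collections import defaultdict
      facts = [("east", "widgets", 10.0),    # detail ("fact") rows
               ("east", "gadgets", 5.0),
               ("west", "widgets", 7.0)]
      # Precompute once -- the moral equivalent of a materialized view or cube cell
      sales_by_region = defaultdict(float)
      for region, _product, amount in facts:
          sales_by_region[region] += amount
      # Later queries hit the small aggregate instead of rescanning the detail
      print(sales_by_region["east"])     # 15.0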
  • Hardware strategies to optimize analytic I/O
    • Lots of RAM
    • Parallel disk access!!!
    • Lots of networking
    • Tuned MPP (Massively Parallel Processing) is the key
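    The essence of parallel (MPP) scanning, as a minimal sketch -- threads stand in for nodes here; in a real system each partition lives on its own node and its own disks:
      from concurrent.futures import ThreadPoolExecutor
      partitions = [[1.0, 2.0, 3.0],           # node 0's slice of the table
                    [4.0, 5.0],                # node 1's slice
                    [6.0, 7.0, 8.0, 9.0]]      # node 2's slice
      def scan_partition(rows):
          # Each "node" scans only its own data, in parallel with the others
          return sum(rows)
      with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
          partials = list(pool.map(scan_partition, partitions))
      print(sum(partials))     # the coordinator combines partial results: 45.0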
  • Specialty hardware strategies
    • Custom or unusual chips (rare)
    • Custom or unusual interconnects
    • Fixed configurations of common parts
      • Appliances or recommended configurations
    • And there’s also SaaS
  • 18 contenders (and there are more)
    • Aster Data
    • Dataupia
    • Exasol
    • Greenplum
    • HP Neoview
    • IBM DB2 BCUs
    • Infobright/MySQL
    • Kickfire/MySQL
    • Kognitio
    • Microsoft Madison
    • Netezza
    • Oracle Exadata
    • Oracle w/o Exadata
    • ParAccel
    • SQL Server w/o Madison
    • Sybase IQ
    • Teradata
    • Vertica
  • General areas of feature differentiation
    • Query performance
    • Update/load performance
    • Compatibilities
    • Advanced analytics
    • Alternate datatypes
    • Manageability and availability
    • Encryption and security
  • Major analytic DBMS product groupings
    • Architecture is a hot subject
    • Traditional OLTP
    • Row-based MPP
    • Columnar
    • (Not covered tonight) MOLAP/array-based
  • Traditional OLTP examples
    • Oracle (especially pre-Exadata)
    • IBM DB2 (especially mainframe)
    • Microsoft SQL Server (pre-Madison)
  • Analytic optimizations for OLTP DBMS
    • Two major kinds of precalculation
      • Star indexes
      • Materialized views
    • Other specialized indexes
    • Query optimization tools
    • OLAP extensions
    • SQL 2003
    • Other embedded analytics
  • Drawbacks
    • Complexity and people cost
    • Hardware cost
    • Software cost
    • Absolute performance
  • Legitimate use scenarios
    • When TCO isn’t an issue
      • Undemanding performance (and therefore administration too)
    • When specialized features matter
      • OLTP-like
      • Integrated MOLAP
      • Edge-case analytics
    • Rigid enterprise standards
    • Small enterprise/true single-instance
  • Row-based MPP examples
    • Teradata
    • DB2 (open systems version)
    • Netezza
    • Oracle Exadata (sort of)
    • DATAllegro/Microsoft Madison
    • Greenplum
    • Aster Data
    • Kognitio
    • HP Neoview
  • Typical design choices in row-based MPP
    • “Random” (hashed or round-robin) data distribution among nodes
    • Large block sizes
      • Suitable for scans rather than random accesses
    • Limited indexing alternatives
      • Or little optimization for using the full boat
    • Carefully balanced hardware
    • High-end networking
  • Tradeoffs among row MPP alternatives
    • Enterprise standards
    • Vendor size
    • Hardware lock-in
    • Total system price
    • Features
  • Columnar DBMS examples
    • Sybase IQ
    • SAND
    • Vertica
    • ParAccel
    • InfoBright
    • Kickfire
    • Exasol
    • MonetDB
    • SAP BI Accelerator (sort of)
  • Columnar pros and cons
    • Bulk retrieval is faster
    • Pinpoint I/O is slower
    • Compression is easier
    • Memory-centric processing is easier
  • Segmentation – a first cut
    • One database to rule them all
    • One analytic database to rule them all
    • Frontline analytic database
    • Very, very big analytic database
    • Big analytic database handled very cost-effectively
  • Basics of systematic segmentation
    • Use cases
    • Metrics
    • Platform preferences
  • Use cases – a first cut
    • Light reporting
    • Diverse EDW
    • Big Data
    • Operational analytics
  • Metrics – a first cut
    • Total user data
      • Below 1-2 TB, references abound
      • 10 TB is another major breakpoint
    • Total concurrent users
      • 5, 15, 50, or 500?
    • Data freshness
      • Hours
      • Minutes
      • Seconds
  • Basic platform issues
    • Enterprise standards
    • Appliance-friendliness
    • Need for MPP?
    • (SaaS)
  • The selection process in a nutshell
    • Figure out what you’re trying to buy
    • Make a shortlist
    • Do free POCs*
    • Evaluate and decide
    • *The only part that’s even slightly specific to the analytic DBMS category
  • Figure out what you’re trying to buy
    • Inventory your use cases
      • Current
      • Known future
      • Wish-list/dream-list future
    • Set constraints
      • People and platforms
      • Money
    • Establish target SLAs
      • Must-haves
      • Nice-to-haves
  • Use-case checklist -- generalities
    • Database growth
      • As time goes by …
      • More detail
      • New data sources
    • Users (human)
    • Users/usage (automated)
    • Freshness (data and query results)
  • Use-case checklist – traditional BI
    • Reports
      • Today
      • Future
    • Dashboards and alerts
      • Today
      • Future
      • Latency
    • Ad-hoc
      • Users
      • Now that we have great response time …
  • Use-case checklist – data mining
    • How much do you think it would improve results to
      • Run more models?
      • Model on more data?
      • Add more variables?
      • Increase model complexity?
    • Which of those can the DBMS help with anyway?
    • What about scoring?
      • Real-time
      • Other latency issues
  • SLA realism
    • What kind of turnaround truly matters?
      • Customer or customer-facing users
      • Executive users
      • Analyst users
    • How bad is downtime?
      • Customer or customer-facing users
      • Executive users
      • Analyst users
  • Short list constraints
    • Cash cost
      • But purchases are heavily negotiated
    • Deployment effort
      • Appliances can be good
    • Platform politics
      • Appliances can be bad
      • You might as well consider incumbent(s)
  • Filling out the shortlist
    • Who matches your requirements in theory?
    • What kinds of evidence do you require?
      • References?
        • How many?
        • How relevant?
      • A careful POC?
      • Analyst recommendations?
      • General “buzz”?
  • A checklist for shortlists
    • What is your tolerance for specialized hardware?
    • What is your tolerance for set-up effort?
    • What is your tolerance for ongoing administrative burden?
    • What are your insert and update requirements?
    • At what volumes will you run fairly simple queries?
    • What are your complex queries like?
    • and, most important,
    • Are you madly in love with your current DBMS?
  • Proof-of-Concept basics
    • The better you match your use cases, the more reliable the POC is
    • Most of the effort is in the set-up
    • You might as well do POCs for several vendors – at (almost) the same time!
    • Where is the POC being held?
  • The three big POC challenges
    • Getting data
      • Real?
        • Politics
        • Privacy
      • Synthetic?
      • Hybrid?
    • Picking queries
      • And more?
    • Realistic simulation(s)
      • Workload
      • Platform
      • Talent
  • POC tips
    • Don’t underestimate requirements
    • Don’t overestimate requirements
    • Get SOME data ASAP
    • Don’t leave the vendor in control
    • Test what you’ll be buying
    • Use the baseball bat
  • Evaluate and decide
    • It all comes down to
    • Cost
    • Speed
    • Risk
    • and in some cases
    • Time to value
    • Upside
  • Further information
    • Curt A. Monash, Ph.D.
    • President, Monash Research
    • Editor, DBMS2
    • contact@monash.com
    • http://www.monash.com
    • http://www.DBMS2.com