
    • From Moore to Metcalfe: The Network as the Next Database Platform
      HPDC, June 2007
      Michael Franklin, UC Berkeley & Truviso (formerly Amalgamated Insight)
    • Outline
      • Motivation
      • Stream Processing Overview
      • Micro-Architecture Issues
      • Macro-Architecture Issues
      • Conclusions
    • Moore’s Law vs. Shugart’s: The battle of the bottlenecks
      • Moore: exponential processor and memory improvement.
      • Shugart: a similar law for disk capacity.
      • The yin and yang of DBMS architecture: “disk-bound” or “memory-bound”?
        • Put differently: are DBMS platforms getting faster or slower relative to the data they need to process?
        • Traditionally, the answer dictates where you innovate.
    • Metcalfe’s Law will drive more profound changes
      • Metcalfe: “The value of a network grows with the square of the number of participants.” (See the gloss after this slide.)
      • Practical implication: all interesting data-centric applications become distributed.
        • Already happening:
          • Service-based architectures (and Grid!)
          • Web 2.0
          • Mobile Computing
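      A one-line gloss (mine, not from the slides): with $n$ participants there are $\binom{n}{2} = n(n-1)/2$ possible pairwise links, so the network's value grows as $V(n) = \Theta(n^2)$.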
    • Bell’s law will amplify Metcalfe’s
      • Bell: “Every decade, a new, lower-cost class of computers emerges, defined by platform, interface, and interconnect.”
          • Mainframes (1960s)
          • Minicomputers (1970s)
          • Microcomputers/PCs (1980s)
          • Web-based computing (1990s)
          • Devices: cell phones, PDAs, wireless sensors, RFID (2000s)
      Enabling a new generation of applications for operational visibility, monitoring, and alerting.
    • The Network as platform: Challenges
      (Diagram: data sources feeding the network: clickstream, barcodes, PoS systems, RFID, telematics, mobile devices, transactional systems, information feeds (XYZ 23.2; AAA 19; …), sensors, blogs/Web 2.0.)
      • Data constantly “on-the-move”
      • Increased data volume
      • Increased heterogeneity & sharing
      • Shrinking decision cycles
      • Increased data and decision complexity
    • The Network as platform: Implications
      • Lots of challenges:
        • Integration (or “Dataspaces”)
        • Optimization/Planning/Adaptivity
        • Consistency/Master Data Mgmt
        • Continuity/Disaster Mgmt
        • Stream Processing (or data-on-the-move)
      • My current focus (and thus the focus of this talk) is the latter.
    • Stream Processing
      • My view: Stream Processing will become the third leg of standard IT data management:
        • OLAP split off from OLTP for historical reporting.
        • OLSA (On-Line Stream Analytics) will handle:
          • Monitoring
          • Alerting
          • Transformation
          • Real-time Visibility and Reporting
        (A sketch of an OLSA-style query follows this slide.)
      • Note: CEP (Complex Event Processing) is a related, emerging technology.
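      To make OLSA concrete, here is a minimal monitoring/alerting sketch in the streaming-SQL style used in the examples below; the stream name, window parameters, and threshold are my assumptions, not from the talk:

        -- Alert whenever a symbol's average price over the last 10 seconds
        -- dips more than 5% below its high over the same window.
        SELECT symbol, AVG(price) AS avg_price
        FROM Trades [RANGE '10 sec' SLIDE '1 sec']
        GROUP BY symbol
        HAVING AVG(price) < 0.95 * MAX(price);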
    • Stream Processing + Grid?
      • On-the-fly stream processing required for high-volume data/event generators.
      • Real-time event detection for coordination of distributed observations.
      • Wide-area sensing in environmental macroscopes.
    • Stream Processing - Overview
    • Turning Query Processing Upside Down
      • Traditional Database Approach (data warehouse): bulk-load data, query later, get static batch reports.
        • Batch ETL & load, query later
        • Poor real-time monitoring, no replay
        • DB size affects query response
      • Data Stream Processing Approach: a data stream processor runs queries continuously over live data streams, yielding continuous visibility and alerts.
        • Always-on data analysis & alerts
        • Real-time monitor & replay to optimize
        • Consistent sub-second response
    • Example 1: Simple Stream Query
      • A SQL smoothing filter to interpolate dropped RFID readings (raw readings in, smoothed output over time):

        SELECT DISTINCT tag_id
        FROM RFID_stream [RANGE '5 sec']
        GROUP BY tag_id

      • Any tag read at least once in the 5-second window is reported, so momentary dropped readings are filled in.
    • Example 2: Stream/Table Join
      • Every 3 seconds, compute the average transaction value of high-volume trades on S&P 500 stocks, over a 5-second “sliding window”:

        SELECT T.symbol, AVG(T.price * T.volume)
        FROM Trades T [RANGE '5 sec' SLIDE '3 sec'], SANDP500 S
        WHERE T.symbol = S.symbol AND T.volume > 5000
        GROUP BY T.symbol

      • Here Trades is a stream (note the window clause in brackets) and SANDP500 is an ordinary table. Note: the output is also a stream.
    • Example 3: Streaming View
      • Positive suspense: find the top 100 store-SKUs, ordered by decreasing positive suspense (inventory minus sales):

        CREATE VIEW StoreSKU (store, sku, sales) AS
          (SELECT P.store, P.sku, SUM(P.qty) AS sales
           FROM POSLog P [RANGE '1 day' SLIDE '10 min'], Inventory I
           WHERE P.sku = I.sku AND P.store = I.store AND P.time > I.time
           GROUP BY P.store, P.sku)

        SELECT S.store, S.sku, (I.quantity - S.sales) AS positive_suspense
        FROM StoreSKU S, Inventory I
        WHERE S.store = I.store AND S.sku = I.sku
        ORDER BY positive_suspense DESC
        LIMIT 100
    • Application Areas
      • Financial Services: Trading/Capital Mkts
      • SOA/Infrastructure Monitoring; Security
      • Physical (sensor) Monitoring
      • Fraud Detection/Prevention
      • Risk Analytics and Compliance
      • Location-based Services
      • Customer Relationship Management/Retail
      • Supply chain/Logistics
    • Real-Time Monitoring
      (Screenshot: a Flex-based dashboard driven by multiple SQL queries.)
    • The “Jellybean” Argument
      • Conventional wisdom asks “Can I afford real-time?”, i.e., do the benefits justify the cost?
      • Reality: with stream query processing, real-time is cheaper than batch.
        • minimizes copies & query start-up overhead
        • takes load off expensive back-end systems
        • rapid application development & maintenance
    • Historical Context and status
      • Early stuff:
        • Data “Push”, Pub/Sub, Adaptive Query Proc.
      • Lots of non-SQL approaches
        • Rules systems (e.g., for Fraud Detection)
        • Complex Event Processing (CEP)
      • Research Projects led to companies
        • TelegraphCQ -> Truviso (Amalgamated)
        • Aurora -> StreamBase
        • Streams -> Coral8
      • Big guys ready to jump in: BEA, IBM, Oracle, …
    • Requirements
      • High Data Rates: 1K rec/sec (SOA monitoring) up to 700K rec/sec (options trading)
      • # of queries: single digits to 10,000s
      • Query complexity
        • Full SQL + windows + events + analytics
      • Persistence, replay, historical comparison
      • Huge range of Sources and Sinks
    • Stream QP: Micro-Architecture
    • Single Node Architecture
      (Diagram, © 2007 Amalgamated Insight, Inc.: a Continuous Query Engine comprising an adaptive SQL query processor, concurrent query planner, triggers/rules, active data replay database, and streaming SQL query processor; ingress connectors and transformations (XML, CSV, MQ, MSMQ, JDBC, .NET); egress connectors and transformations (proprietary APIs, XML, message bus, alerts, pub/sub, events); an external archive; and links to other CQE instances.)
    • Ingress Issues (performance)
      • Must support high data rates
        • 700K ticks/second for FS
        • Wirespeed for networking/security
      • Minimal latency
        • FS trading particularly sensitive to this
      • Fault tolerance
        • Especially given remote sources
      • Efficient (bulk) data transformation
        • XML, text, binary, …
      • Work well for both push and pull sources
    • Egress Issues (performance)
      • Must support high data rates
      • Minimal latency
      • Fault tolerance
      • Efficient (bulk) data transformation
      • Buffering/Support for JDBC-style clients
      • Interaction with bulk warehouse loaders
      • Large-scale dissemination (Pub/Sub)
    • Query Processing (Single)
      • Simple approach:
        • Stream inputs are “scan” operators
        • Adapt operator plumbing to push/pull
          • “Exchange” operators / Fjords
      • Need to run lots of these concurrently
        • Index the queries?
        • Scheduling, Memory Mgmt.
      • Must avoid I/O, cache misses to run at speed
      • Predicate push-down, à la Gigascope (see the sketch after this slide)
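      A sketch of push-down (mine, not from the slides): apply the selective predicate from Example 2 directly over the raw stream, ahead of the join and aggregation, so low-volume trades are dropped as early as possible:

        -- Hypothetical pushed-down filter over the raw stream.
        CREATE VIEW big_trades AS
          SELECT * FROM Trades WHERE volume > 5000;

        -- The windowed join now sees only pre-filtered tuples.
        SELECT T.symbol, AVG(T.price * T.volume)
        FROM big_trades T [RANGE '5 sec' SLIDE '3 sec'], SANDP500 S
        WHERE T.symbol = S.symbol
        GROUP BY T.symbol;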
    • QP (continued)
      • Transactional/Correctness issues:
        • Never-ending queries hold locks forever!
        • Need efficient heartbeat mechanism to keep things moving forward.
        • Dealing with corrections (e.g., in financial feeds).
        • Out-of-order/missing data
          • “Ripples in the stream” can hurt clever scheduling mechanisms.
      • Integration with external code (see the sketch below):
        • MATLAB, R, …, user-defined functions (UDFs) and aggregates (UDAs)
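      For instance, a UDA can be invoked inside a windowed query just like a built-in aggregate; a minimal sketch (vwap is a hypothetical user-defined aggregate, not something the talk defines):

        -- vwap: assumed UDA computing a volume-weighted average price.
        SELECT symbol, vwap(price, volume) AS vw_price
        FROM Trades [RANGE '1 min' SLIDE '10 sec']
        GROUP BY symbol;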
    • Query Processing (Shared)
      • Previous approach misses huge opportunity.
      • Individual execution leads to linear slowdown
        • Until you fall off the memory cliff!
      • Recall that we know all the queries
        • we know when they will need data
        • we know what data they will need
        • we know what things they will compute
      • Why run them individually (as if we didn’t know any of this)?
    • Shared Processing: The Überquery
      • Each arriving query text (e.g., the Example 2 query) is compiled into a query plan, and each new plan is folded into the global plan of the shared query engine as more queries arrive.
      • No redundant modules = super-linear query scalability. (A folding example follows this slide.)
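      To make folding concrete (my example, not from the slides): two registered queries that differ only in a predicate can share one scan, window, and grouping pipeline, with both filters applied in a single pass (CACQ-style) rather than as two independent plans:

        -- Query A
        SELECT symbol, AVG(price * volume)
        FROM Trades [RANGE '5 sec' SLIDE '3 sec']
        WHERE volume > 5000
        GROUP BY symbol;

        -- Query B: identical shape, different threshold; a shared engine
        -- folds it into the same plan instead of re-scanning Trades.
        SELECT symbol, AVG(price * volume)
        FROM Trades [RANGE '5 sec' SLIDE '3 sec']
        WHERE volume > 10000
        GROUP BY symbol;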
    • Shared QP raises lots of new issues
      • Scheduling based on data availability/location and work affinity.
      • Lots of bit-twiddling: need efficient bitmaps.
      • Query “folding”: how to combine plans (multi-query optimization, MQO)
      • On-the-fly query changes.
      • How does shared processing change the traditional architectural tradeoffs?
      • How to process across multiple: cores, dies, boxes, racks, rooms?
      • Refs: NiagaraCQ, CACQ, TelegraphCQ, Sailesh Krishnamurthy’s thesis
    • Archiving - Huge area
      • Most streaming use cases want access to historical information (see the sketch after this slide).
      • Compliance/Risk : also need to keep the data.
        • Science apps need to keep raw data around too.
      • In a high-volume streaming environment, going to disk is an absolute killer.
      • Obviously need clever techniques:
          • Sampling, Index update deferral, load shedding
          • Scheduling based on time-oriented queries
          • Good old buffering/prefetching
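      A sketch of live-vs-historical comparison (table and column names are my assumptions; the talk gives no such query): join the live window against a summary table maintained in the archive:

        -- historical_averages: assumed table precomputed from the archive.
        SELECT T.symbol,
               AVG(T.price) AS live_avg,   -- over the live 1-minute window
               H.yesterday_avg             -- from the archived summary
        FROM Trades T [RANGE '1 min' SLIDE '10 sec'], historical_averages H
        WHERE T.symbol = H.symbol
        GROUP BY T.symbol, H.yesterday_avg;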
    • Stream QP: Macro-Architecture
    • HiFi: Taming the Data Flood
      (Diagram: hierarchical aggregation, spatial & temporal, with in-network stream query processing and storage; receptors at dock doors and shelves feed warehouses and stores, then regional centers, then headquarters; a fast data path vs. a slow data path.)
    • Problem: Sensors are Noisy
      • A simple RFID Experiment
      • 2 adjacent shelves, 6 ft. wide
      • 10 EPC-tagged items each, plus 5 moved between them
      • RFID antenna on each shelf
    • Shelf RFID: Ground Truth
    • Actual RFID Readings
      (Chart annotation: “Restock every time inventory goes below 5.”)
    • The Virtual Device (VICE) API [Jeffery et al., Pervasive 2006; VLDB Journal 2007]
      • The VICE API is a natural place to hide much of the complexity arising from physical devices. (A sketch of the point-level cleaning stage follows this slide.)
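      The pipeline in the next two slides reads from cleaned_rfid_stream; the point-level stage that produces it is not shown. A hypothetical sketch (raw_rfid_stream, signal_strength, and the threshold are all assumed) that drops individually unreliable readings:

        CREATE VIEW cleaned_rfid_stream AS
          SELECT receptor_id, tag_id
          FROM raw_rfid_stream            -- assumed raw reader feed
          WHERE signal_strength >= 0.5;   -- assumed per-reading quality cutoff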
    • Query-based Data Cleaning: Point → Smooth
      • Keep a (receptor, tag) pair only if it was read at least count_T times within the window:

        CREATE VIEW smoothed_rfid_stream AS
          (SELECT receptor_id, tag_id
           FROM cleaned_rfid_stream [range by '5 sec', slide by '5 sec']
           GROUP BY receptor_id, tag_id
           HAVING count(*) >= count_T)
    • Query-based Data Cleaning: Point → Smooth → Arbitrate
      • Assign each tag to the receptor that read it most often within the window:

        CREATE VIEW arbitrated_rfid_stream AS
          (SELECT receptor_id, tag_id
           FROM smoothed_rfid_stream rs [range by '5 sec', slide by '5 sec']
           GROUP BY receptor_id, tag_id
           HAVING count(*) >= ALL
             (SELECT count(*)
              FROM smoothed_rfid_stream [range by '5 sec', slide by '5 sec']
              WHERE tag_id = rs.tag_id
              GROUP BY receptor_id))
    • After Query-based Cleaning
      (Chart annotation: “Restock every time inventory goes below 5.”)
    • Adaptive Smoothing [Jeffery et al. VLDB 2006]
    • SQL Abstraction Makes it Easy?
      • Soft sensors, e.g.:
        • the “LOUDMOUTH” sensor (VLDB 2004)
      • Quality and lineage
      • Optimization (power, etc.)
      • Pushdown of external validation information
      • Automatic/Adaptive query placement
      • Data archiving
      • Imperative processing
    • Some Challenges
      • How to run across the full gamut of devices from motes to mainframes?
        • What about running *really* in-the-network?
      • Data/query placement and movement
        • Adaptivity is key
        • “Push-down” is a small subset of this problem.
        • Sharing is also crucial here.
      • Security, encryption, compression, etc.
      • Lots of issues due to devices and “physical world” problems.
    • It’s not just a sensor-net problem
      (Diagram: edge devices (PCs, PoS terminals, handhelds, readers) feed enterprise apps (e-commerce, ERP, CRM, SCM) and transactional OLTP data stores; an integration bus batch-loads the enterprise data warehouse and specialized data marts, which drive business intelligence, data mining, portals, operational BI, reports, analytics, alerts, and dashboards. Pain points: distributed data, exploding data volumes, batch latency, query latency, decision latency.)
    • Data Dissemination (Fan-Out)
      • Many applications have large numbers of consumers.
      • Lots of interesting questions on large-scale pub/sub technology.
        • Micro-scale: locality, scheduling, sharing, for huge numbers of subscriptions.
        • Macro-scale: dissemination trees, placement, sharing, …
    • What to measure? (a research opportunity)
      • High Data Rates/Throughput
        • rec/sec; record size
      • Number of concurrent queries.
      • Query complexity
      • Huge range of Sources and Sinks
        • transformation and connector performance
      • Minimal benchmarking work so far:
        • “Linear Road” from the Aurora group
        • CEP benchmark work by Pedro Bizarro
    • Conclusions
      • Two relevant trends:
        • Metcalfe’s Law ⇒ DB systems need to become more network-savvy.
        • Jim Gray and others have helped demonstrate the value of SQL to science.
      • Stream query processing is where these two trends meet in the Grid world.
        • A new (3rd) component of data management infrastructure.
      • Lots of open research problems for the HPDC (and DB) community.
    • Resources
      • Research Projects @ Berkeley
        • TelegraphCQ - single-site stream processor
        • HiFi - Distributed/Hierarchical
        • see www.cs.berkeley.edu/~franklin for links/papers
      • Good jumping off point for CEP and related info: www.complexevents.com
      • The company: www.truviso.com