© Copyright 2013 Pivotal. All rights reserved.

46. Hash Distributions: Data Skew and Computational Skew
•  Select a distribu...

47. CREATE TABLE
•  Reduces the amount of data to be scanned by reading only ...

48. Multi-Level Partitioning….
Use Hash Distribution to evenly spread data ac...
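The combination this slide describes (hash distribution across segments, range partitioning within them) can be sketched as the following DDL. The `sales` table and its columns are illustrative, not from the deck:

```sql
-- Hash-distribute rows across segments by sale_id, then range-partition
-- each segment's data by month so scans touch only relevant partitions.
CREATE TABLE sales (
    sale_id   integer,
    sale_date date,
    amount    numeric
)
DISTRIBUTED BY (sale_id)
PARTITION BY RANGE (sale_date)
(
    START (date '2013-01-01') INCLUSIVE
    END   (date '2014-01-01') EXCLUSIVE
    EVERY (INTERVAL '1 month')
);
```

Distribution spreads rows across segment instances; partitioning then subdivides each segment's data so the planner can eliminate whole partitions from a scan.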
49. [Diagram: Segment 1A, 1B, 1C, 1D; Segment 2A, 2B, Segment...]

50. Partitioning Guidelines
•  Use table partitioning on large tables to impr...

51. Agenda (same items as slide 2)

52. GPDB Data Loading Options
Loading Method | Common Uses | Examples
INSERTS | •  ...

53. Example Load Architectures
[Diagram: Master Host; Segment Host, Segment Host, Segment ...]

54. External Tables
•  Access external files as if they were regular database...

55. File Based External Tables
•  Specify format of input files
–  FORMAT cla...

56. Web Based External Tables
•  Command based
–  Output of shell command or ...

57. Load Using Regular External Tables
•  File based (flat files)
–  gpfdist ...
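A minimal sketch of a gpfdist-backed readable external table; the host, port, file pattern, and column list are invented for illustration, and the final INSERT assumes a regular `orders` table like the one referenced on earlier slides:

```sql
-- Segments pull from the gpfdist file server in parallel; the external
-- table can then be queried, or loaded from, like any regular table.
CREATE EXTERNAL TABLE ext_orders (
    order_id    integer,
    order_date  date,
    customer_id integer
)
LOCATION ('gpfdist://etl-host:8081/orders*.txt')
FORMAT 'TEXT' (DELIMITER '|');

INSERT INTO orders SELECT * FROM ext_orders;
```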
58. Load Using External Web Tables
•  Shell command or script based
=# CREATE...

59. COPY
•  Quick and easy
•  Recommended for small loads
–  Not recommended ...
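A hedged example of this small-load path (the file path is invented). COPY runs through the master in a single stream rather than in parallel through the segments, which is why the slide recommends it only for small loads:

```sql
COPY orders FROM '/data/orders.csv' WITH CSV HEADER;
```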
60. GPLOAD
•  Interface to readable external tables
–  Invokes gpfdist for pa...

61. Agenda (same items as slide 2)

62. ANALYZE and Database Statistics
•  Updated statistics are critical for th...

63. For very large tables it may not be feasible to run ANALYZE on the enti...
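The whole-table and per-column forms this slide alludes to can be sketched as follows. The table and column names are reused from earlier slides, so treat them as illustrative:

```sql
-- Refresh optimizer statistics for the whole table...
ANALYZE orders;

-- ...or, on a very large table, only for the columns the optimizer
-- actually uses in predicates and joins.
ANALYZE orders (order_date, customer_id);
```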
64. VACUUM
•  VACUUM reclaims physical space on disk from deleted or updated ...
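A hedged sketch of routine maintenance combining this slide with the previous one (the `orders` table is illustrative):

```sql
-- Reclaim space left by deleted/updated rows, then refresh statistics.
VACUUM orders;
ANALYZE orders;

-- Or both in one pass:
VACUUM ANALYZE orders;
```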
65. Free Space Map
•  Expired rows are tracked in the free space map
•  Free ...

66. Agenda (same items as slide 2)

67. GPDB Optimization and Performance Tuning
•  Review optimization, tuning a...

68. Data Types and Byte Alignment
•  Lay out columns in heap tables as follow...

69. Set Based versus Row Based
•  PL/SQL and other procedural languages utili...

70. Agenda (same items as slide 2)

71. Competitive Solutions
Comparison matrix: GPDB | TD | VERT | EXA | NZ
Rows: Using as ETL engine, Using as BI...

72-77. Competitive Solutions (the same matrix built up row by row across six slides)

78. Summary
•  Greenplum is the first open source MPP database
•  With over 4...

BUILT FOR THE SPEED OF BUSINESS

Greenplum Architecture

This is the presentation I delivered at the Hadoop User Group Ireland meetup in Dublin on Nov 28 2015. It covers at a glance the architecture of GPDB and, most importantly, its features.
1. Greenplum Architecture
Alexey Grishchenko
HUG Meetup, 28.11.2015

2. Agenda
•  Introduction
•  GPDB Architecture Overview
•  Distribution and Partitioning
•  Loading External Data
•  Maintenance Procedures
•  Performance Considerations
•  Competitive

3. Agenda (same items as slide 2)
4. The Pivotal Greenplum Database is…
A Highly-Scalable, Shared-Nothing Database
•  Leading MPP architecture, including a patented next-generation optimizer
•  Optimized architecture and features for loading and queries
•  Start small, scale as needed
•  Polymorphic storage, compression, partitioning
A Platform for Advanced Analytics on Any (and All) Data
•  Rich ecosystem (SAS, R, BI & ETL tools)
•  In-DB Analytics (MADlib, Custom, languages: R, Java, Python, PERL, C, C++)
•  High degree of SQL completeness so analysts can use a language they know
•  Domain: Geospatial, Text processing (GPText)
An Enterprise Ready Platform Capable of Flexing With Your Needs
•  Available as needed – either as an appliance or software
•  Secures data in-place, in flight, and with authentication to suit
•  Capable of managing a variety of mixed workloads

5. The Pivotal Greenplum Database Overview
•  A highly scalable shared-nothing database
•  A platform for advanced analytics on any (and all) data
•  An enterprise ready platform capable of flexing with your needs
6. MPP 101: Performance Through Parallelism
[Diagram: SQL enters over the Network Interconnect; Master Servers & Name Nodes handle query planning & dispatch; Segment Servers & Data Nodes handle query processing & data storage; External Sources handle loading, streaming, etc.]

7. MPP 102: True High Speed Loading
•  Parallelizes Everything
–  All nodes can process loading requests
–  No subsequent “Data Reorganization” steps
–  Scales at over 10 TB/hr per rack
–  Only constrained by the speed of the source
•  Automates Parallelism
–  GPLoad utility automatically parallelizes file-based loading
–  Integrated with ETL products to parallelize ETL-based loading with minimal added effort
[Diagram: parallel loading from ODBC/JDBC, file systems, PHD/Hadoop, GemFire, streaming (social media, events, etc.), and RDBMS sources through the network interconnect]

8. MPP 201: Start Small and Scale as Needed
•  Advantages:
–  Scale In-Place
–  No Forklifting
–  Immediately Usable
–  Simple Process
[Diagram: new segment servers added to the existing cluster]
9. Advanced MPP: Polymorphic Storage™
•  Columnar storage is well suited to scanning a large percentage of the data
•  Row storage excels at small lookups
•  Most systems need to do both
•  Row and column orientation can be mixed within a table or database
•  Both types can be dramatically more efficient with compression
•  Compression is definable column by column:
–  Blockwise: gzip (levels 1-9) & QuickLZ
–  Streamwise: Run Length Encoding (RLE) (levels 1-4)
•  Flexible indexing and partitioning enable more granular control and enable true ILM
[Diagram: table ‘SALES’ partitioned Mar through Nov; column-oriented partitions for full scans, row-oriented partitions for small scans]
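As a hedged sketch of the column-by-column compression the slide mentions (the table and column names are illustrative, not from the deck), an append-optimized, column-oriented table with a per-column override might look like:

```sql
-- Hypothetical example: column-oriented storage with QuickLZ as the
-- table default and a zlib override on one column.
CREATE TABLE sales_history (
    sale_id   integer,
    sale_date date,
    amount    numeric ENCODING (compresstype=zlib, compresslevel=5)
)
WITH (appendonly=true, orientation=column, compresstype=quicklz)
DISTRIBUTED BY (sale_id);
```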
10. Advanced MPP: Run Length Encoding
Unlocking the Potential of Column-Oriented Data
With columnar storage and RLE, this data…

Date | User_ID | Revenue
1/1/05 | 13111123 | 850
1/1/05 | 32343122 | 1500
… | … | …
12/31/13 | 45322323 | 340
12/31/13 | 39923001 | 1100
(10 billion rows, ~225GB on disk)

…can be stored like this:
•  Date column: the distinct run values (1/1/05 … 12/31/13) plus the “length of run” (3288 rows, 26KB on disk)
•  Revenue column: 10 billion values with pointers to correlated rows – also compressed, ~37GB on disk

11. gNet Software Interconnect
•  A supercomputing-based “soft-switch” responsible for:
–  Efficiently pumping streams of data between motion nodes during query-plan execution
–  Delivering messages, moving data, collecting results, and coordinating work among the segments in the system

12. Parallel Query Optimizer
•  Cost-based optimization looks for the most efficient plan
•  Physical plan contains scans, joins, sorts, aggregations, etc.
•  Global planning avoids sub-optimal ‘SQL pushing’ to segments
•  Directly inserts ‘motion’ nodes for inter-segment communication
[Diagram: physical execution plan from SQL or MapReduce: Gather Motion 4:1 (slice 3) over Sort, HashAggregate, and HashJoins fed by Redistribute Motion 4:4 (slice 1) and Broadcast Motion 4:4 (slice 2) over Seq Scans on lineitem, orders, and customer]

13. The Pivotal Greenplum Database at a Glance
PRODUCT FEATURES: Multi-Level Fault Tolerance, Shared-Nothing MPP, Parallel Query Optimizer, Polymorphic Data Storage™
CLIENT ACCESS: ODBC, JDBC, OLEDB, MapReduce, etc.
CORE MPP ARCHITECTURE: Parallel Dataflow Engine, Software Interconnect, Scatter/Gather Streaming™ Data Loading
ADAPTIVE SERVICES: Online System Expansion, Workload Management
LOADING & EXT. ACCESS: Petabyte-Scale Loading, Trickle Micro-Batching, Anywhere Data Access
STORAGE & DATA ACCESS: Hybrid Storage & Execution (Row- & Column-Oriented), In-Database Compression, Multi-Level Partitioning, Indexes (Btree, Bitmap, etc.)
LANGUAGE SUPPORT: Comprehensive SQL, Native MapReduce, SQL 2003 OLAP Extensions, Programmable Analytics
3rd PARTY TOOLS: BI Tools, ETL Tools, Data Mining, etc.
ADMIN TOOLS: GP Performance Monitor, pgAdmin3 for GPDB
14. The Pivotal Greenplum Database Overview
•  A highly scalable shared-nothing database
•  A platform for advanced analytics on any (and all) data
•  An enterprise ready platform capable of flexing with your needs

15. Analytical Architecture Overview
[Diagram: Greenplum Database with gNet; a data access & query layer exposing SQL (SQL 2003/2008, OLAP), stored procedures, MapReduce, and ODBC/JDBC; in-database analytics on polymorphic storage]

16. Easy to Use (at Scale) with SQL

SELECT (madlib.linregr(earns/hours,
        array[rooms, notcitizen, married, ... ])).*
FROM (SELECT * FROM use_r.census) AS foo;

Rows | Elapsed Time
3 million | 0.5 s
10 million | 1 s
100 million | 5 s
500 million | 23 s
1 billion | 46 s
10 billion | 453 s

17. Integrated with Tools/Languages, incl. R
•  Load the PivotalR library
•  Create the “houses” object as a proxy object in R; the data is not loaded into R
•  List the columns in the table and preview the first 3 rows of data (the limit is passed through to the db)
•  Run a linear regression; this is executed in-database
•  Examine the resulting model
•  The model is stored in-database, greatly simplifying the development of scoring applications

18. The Pivotal Greenplum Database Overview
•  A highly scalable shared-nothing database
•  A platform for advanced analytics on any (and all) data
•  An enterprise ready platform capable of flexing with your needs
19. High Performance Integration with Hadoop
•  Connect any data set in Hadoop to the GPDB SQL engine
•  Process Hadoop data in place
•  Parallelize movement of data to/from Hadoop thanks to GPDB's market-leading data-sharing performance
•  Supported formats:
–  Text (compressed and uncompressed)
–  Binary
–  Proprietary/user-defined
•  Support for Pivotal HD, MapR, Hortonworks, Cloudera
[Diagram: gNet for Hadoop provides parallel query access to text, binary, and user-defined formats]

20. Comprehensive High Availability
•  Master and Segment Mirroring with block level replication
–  Low resource consumption
–  Differential resynch capable for fast recovery
–  Minimize interdependencies!
•  Segment servers support multiple database instances
–  Primary instances that actively process queries
–  Standby mirror instances
[Diagram: four segment servers, each hosting a set of active primary instances (P1-P12) with their mirrors (M1-M12) spread across the other servers]

21. Comprehensive Backup/Restore
•  Full and Incremental backup support with in-database tools
•  Incremental backup
–  Only changed partitions are pulled for the backup
–  Restore to any point-in-time through support of “synthetic restores”
–  Synthetic restores automatically assemble the right backup based on the point-in-time specified: manual backup specification is not required
•  Deep support for Data Domain
–  WAN replication of backup sets to remote DR sites
–  Granular delta-only backup support

22. Summary: The Pivotal Greenplum Database Delivers…
(the three pillars from slide 4: a highly-scalable shared-nothing database; a platform for advanced analytics on any and all data, with the ecosystem here also listing Chorus Studio; an enterprise ready platform capable of flexing with your needs)
23. Agenda (same items as slide 2)
24. MPP Shared Nothing Architecture
•  Master Host and Standby Master Host; the Master coordinates work with the Segment Hosts
•  Segment Hosts have their own CPU, disk and memory (shared nothing)
•  Segment Instances process queries in parallel
•  High speed interconnect for continuous pipelining of data processing
•  Flexible framework for processing large datasets
[Diagram: SQL enters at the Master Host (with Standby Master); the Interconnect links Segment Hosts node1 … nodeN, each running one or more Segment Instances]

25. Master Host
•  Accepts client connections and incoming user requests, and performs authentication
•  The Parser enforces syntax and semantics and produces a parse tree
[Diagram: Master Host with a Master Segment holding the Catalog, Query Optimizer, Distributed TM, Dispatch, Query Executor, and Parser]

26. Query Optimizer
•  Consumes the parse tree and produces the query plan
•  The query plan contains how the query is executed (e.g. hash join versus merge join)
[Diagram: the Query Optimizer on the Master Host; each Segment Host runs Segment Instances, each with its own Local TM, Query Executor, Catalog, and Local Storage]

27. Query Dispatcher
•  Responsible for communicating the query plan to segments
•  Allocates the cluster resources required to perform the job and accumulates/presents final results
[Diagram: same master/segment layout as slide 26]

28. Query Executor
•  Responsible for executing the steps in the plan (e.g. open file, iterate over tuples)
•  Communicates its intermediate results to other executor processes
[Diagram: same master/segment layout as slide 26]

29. Interconnect
•  Responsible for serving tuples from one segment to another to perform joins, etc.
•  Uses UDP for optimal performance and scalability
[Diagram: same master/segment layout as slide 26]

30. System Catalog
•  Stores and manages metadata for databases, tables, columns, etc.
•  The master keeps a copy of the metadata coordinated on every segment host
[Diagram: same master/segment layout as slide 26]

31. Distributed Transaction Management
•  The DTM resides on the master and coordinates the commit and abort actions of segments
•  Segments have their own commit and replay logs and decide when to commit or abort their own transactions
[Diagram: same master/segment layout as slide 26]
32. GPDB High Availability
•  Master Host mirroring
–  Warm Standby Master Host: a replica of the Master Host system catalogs
–  Eliminates a single point of failure
–  A synchronization process between the Master Host and Standby Master Host uses replication logs
•  Segment mirroring
–  Creates a mirror segment for every primary segment, using a file block replication process
–  If a primary segment becomes unavailable, automatic failover to the mirror

33. Master Mirroring
•  Warm Standby Master enabled at initialization or on an active system using gpinitstandby
•  If the Master Host becomes unavailable, the replication process is stopped
–  Replication logs are used to reconstruct the state of the master at the time of failure
–  The Standby Master Host can be activated with gpactivatestandby to start at the last successful transaction completed by the Master Host

34. Segment Mirroring
•  Enabled at initialization or on an active system using gpaddmirrors
•  Can be configured on the same array of hosts or on a system outside of the array
•  If a primary segment becomes unavailable, automatic failover to the mirror

35. Fault Detection and Recovery
•  The ftsprobe fault detection process monitors and scans segments and database processes at configurable intervals
•  Use the gpstate utility to verify the status of primary and mirror segments
•  Query the gp_segment_configuration catalog table for detailed information about a failed segment:
   $ psql -c "SELECT * FROM gp_segment_configuration WHERE status='d';"
•  When ftsprobe cannot connect to a segment it marks it as down
–  It will remain down until an administrator manually recovers the failed segment using the gprecoverseg utility
•  Automatic failover to the mirror segment
–  Subsequent connection requests are switched to the mirror segment

36. Agenda (same items as slide 2)
37. CREATE TABLE: Define Data Distributions
•  Every table has a distribution method
•  DISTRIBUTED BY (column)
–  Uses a hash distribution
•  DISTRIBUTED RANDOMLY
–  Uses a random distribution which is not guaranteed to provide a perfectly even distribution

=> CREATE TABLE products
     (name varchar(40),
      prod_id integer,
      supplier_id integer)
   DISTRIBUTED BY (prod_id);
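The random alternative from the bullet above can be sketched the same way. The `products_r` name is invented; the column list reuses the slide's example:

```sql
-- Round-robin style placement: no distribution key, so any join
-- against this table will require a motion operation (see slide 40).
CREATE TABLE products_r
    (name varchar(40),
     prod_id integer,
     supplier_id integer)
DISTRIBUTED RANDOMLY;
```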
  38. 38. 38© Copyright 2013 Pivotal. All rights reserved. Data Distribution: The Key to Parallelism The primary strategy and goal is to spread data evenly across as many nodes (and disks) as possible 43 Oct 20 2005 12 64 Oct 20 2005 111 45 Oct 20 2005 42 46 Oct 20 2005 64 77 Oct 20 2005 32 48 Oct 20 2005 12 Order Order# Order Date Customer ID Greenplum Database High Speed Loader 50 Oct 20 2005 34 56 Oct 20 2005 213 63 Oct 20 2005 15 44 Oct 20 2005 102 53 Oct 20 2005 82 55 Oct 20 2005 55
  39. 39. 39© Copyright 2013 Pivotal. All rights reserved. Master Parallel Data Scans SELECT COUNT(*) FROM orders WHERE order_date >= ‘Oct 20 2007’ AND order_date < ‘Oct 27 2007’ 4,423,323 Each Segment Scans Data Simultaneously Segment 1A Segment 1B Segment 1C Segment 1D Segment 2A Segment 2B Segment 2C Segment 2D Segment 3A Segment 3B Segment 3C Segment 3D Segment 1A Segment 1B Segment 1C Segment 1D Segment 2A Segment 2B Segment 2C Segment 2D Segment 3A Segment 3B Segment 3C Segment 3D Segments Return ResultsReturn ResultsSend Plan to SegmentsDevelop Query Plan
  40. 40. 40© Copyright 2013 Pivotal. All rights reserved. DISTRIBUTED RANDOMLY Ÿ Uses a random algorithm – Distributes data across all segments – Minimal data skew but not guaranteed to have a perfectly even distribution Ÿ Any query that joins to a table that is distributed randomly will require a motion operation – Redistribute motion – Broadcast motion
  41. 41. 41© Copyright 2013 Pivotal. All rights reserved. DISTRIBUTED BY (column_name) Ÿ  For large tables significant performance gains can be obtained with local joins (co-located joins) –  Distribute on the same column for tables commonly joined together ▪  WHERE clause Ÿ  Join is performed within the segment –  Segment operates independently of other segments Ÿ  Eliminates or minimizes motion operations –  Broadcast motion –  Redistribute motion
  42. 42. 42© Copyright 2013 Pivotal. All rights reserved. Use the Same Distribution Key for Commonly Joined Tables Distribute on the same key used in the join (WHERE clause) to obtain local joins. [Diagram: on each segment, customer (c_customer_id) joins locally to freq_shopper (f_customer_id).]
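The co-located join can be sketched as DDL plus a query; table and key names follow the diagram, the remaining columns are assumptions:

```sql
-- Both tables hash on the customer id, so matching rows land on the
-- same segment and the join executes locally, with no motion operation.
CREATE TABLE customer (
    c_customer_id integer,
    c_name        text
) DISTRIBUTED BY (c_customer_id);

CREATE TABLE freq_shopper (
    f_customer_id  integer,
    f_trans_number integer
) DISTRIBUTED BY (f_customer_id);

SELECT c.c_name, f.f_trans_number
FROM customer c
JOIN freq_shopper f ON c.c_customer_id = f.f_customer_id;
```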
  43. 43. 43© Copyright 2013 Pivotal. All rights reserved. Redistribution Motion WHERE customer.c_customer_id = freq_shopper.f_customer_id The freq_shopper table is dynamically redistributed on f_customer_id. [Diagram: freq_shopper rows (e.g. customer_id 102 and 745) move between segments so each row lands on the segment holding the matching customer row.]
  44. 44. 44© Copyright 2013 Pivotal. All rights reserved. Broadcast Motion WHERE customer.c_statekey = state.s_statekey The state table is dynamically broadcast to all segments. [Diagram: every segment receives a full copy of state (s_statekey: AK, AL, AZ, CA…) alongside its slice of customer.]
  45. 45. 45© Copyright 2013 Pivotal. All rights reserved. Commonly Joined Tables Use the Same Data Type for Distribution Keys customer (c_customer_id) 745::int freq_shopper (f_customer_id) 745::varchar(10) Ÿ  Values might appear the same but they are stored differently at the disk level Ÿ  Values might appear the same but they HASH to different values •  Resulting in like rows being stored on different segments •  Requiring a redistribution before the tables can be joined
  46. 46. 46© Copyright 2013 Pivotal. All rights reserved. Hash Distributions: Data Skew and Computational Skew Ÿ  Select a distribution key with unique values and high cardinality that will not result in data skew –  Do not distribute on boolean keys or keys with low cardinality ▪  The system distributes rows with the same hash value to the same segment instance, resulting in the data being located on only a few segments Ÿ  Select a distribution key that will not result in computational skew (in flight while a query is executing) –  Operations on columns that have low cardinality or non-uniform distribution
  47. 47. 47© Copyright 2013 Pivotal. All rights reserved. CREATE TABLE Ÿ  Reduces the amount of data to be scanned by reading only the relevant data needed to satisfy a query –  The goal is to achieve partition elimination Ÿ  Supports range partitioning and list partitioning Ÿ  Uses table inheritance and constraints –  Persistent relationship between parent and child tables Define Partitioning
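A sketch of range partitioning by month, using Greenplum's START/END/EVERY clause (table and column names are hypothetical):

```sql
-- One child partition per month; queries filtering on order_date
-- can eliminate every partition outside the requested range.
CREATE TABLE orders (
    order_id   integer,
    order_date date,
    amount     numeric
)
DISTRIBUTED BY (order_id)
PARTITION BY RANGE (order_date)
( START (date '2007-01-01') INCLUSIVE
  END   (date '2008-01-01') EXCLUSIVE
  EVERY (INTERVAL '1 month') );
```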
  48. 48. 48© Copyright 2013 Pivotal. All rights reserved. Multi-Level Partitioning… Use hash distribution to evenly spread data across all segment instances; use range partitioning within an instance to minimize scan work. [Diagram: each segment (1A–3D) holds twelve range partitions, Jan 2007 through Dec 2007.]
  49. 49. 49© Copyright 2013 Pivotal. All rights reserved. …Further Improve Scan Times SELECT COUNT(*) FROM orders WHERE order_date >= 'Oct 20 2007' AND order_date < 'Oct 27 2007' [Diagram: with hash distribution alone, every segment (1A–3D) scans its full data; with multi-level partitioning, each segment scans only the partitions matching the date predicate.]
  50. 50. 50© Copyright 2013 Pivotal. All rights reserved. Partitioning Guidelines Ÿ  Use table partitioning on large tables to improve query performance –  Table partitioning is not a substitute for distributions Ÿ  Use if the table can be divided into roughly equal parts based on a defining criterion –  For example, range partitioning on date –  No overlapping ranges or duplicate values Ÿ  And the defining partitioning criterion matches the access pattern used in query predicates –  WHERE date = '1/30/2012'
  51. 51. 51© Copyright 2013 Pivotal. All rights reserved. Agenda Ÿ  Introduction Ÿ  GPDB Architecture Overview Ÿ  Distribution and Partitioning Ÿ  Loading External Data Ÿ  Maintenance Procedures Ÿ  Performance Considerations Ÿ  Competitive
  52. 52. 52© Copyright 2013 Pivotal. All rights reserved. GPDB Data Loading Options Loading Method Common Uses Examples INSERTS •  Operational Workloads •  ODBC/JDBC Interfaces INSERT INTO performers (name, specialty) VALUES ('Sinatra', 'Singer'); COPY •  Quick and easy data in •  Legacy PostgreSQL applications •  Output sample results from SQL statements COPY performers FROM '/tmp/comedians.dat' WITH DELIMITER '|'; External Tables •  High speed bulk loads •  Parallel loading using gpfdist protocol •  Local file, remote file, executable or HTTP based sources INSERT INTO craps_bets SELECT g.bet_type , g.bet_dttm , g.bt_amt FROM x_allbets b JOIN games g ON ( g.id = b.game_id ) WHERE g.name = 'CRAPS'; GPLOAD •  Simplifies external table method (YAML wrapper) •  Supports Insert, Merge & Update gpload -f blackjack_bets.yml
  53. 53. 53© Copyright 2013 Pivotal. All rights reserved. Example Load Architectures [Diagram: a master host plus segment hosts; two ETL hosts each serve data files through two gpfdist processes. A singleton INSERT statement and a COPY statement flow through the master instance, while an INSERT via external table or gpload flows in parallel from the gpfdist processes directly to the segment instances.]
  54. 54. 54© Copyright 2013 Pivotal. All rights reserved. External Tables Ÿ  Access external files as if they were regular database tables Ÿ  Used with gpfdist, provides full parallelism to load or unload data Ÿ  Query using SQL Ÿ  Create views for external tables Ÿ  Readable external tables for loading data –  Perform common ETL tasks Ÿ  Writeable external tables for unloading data –  Select data from a database table to insert into the writeable external table –  Send data to a data stream
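The unload path through a writeable external table can be sketched as follows; the host, port, file name and the `expenses` table are assumptions:

```sql
-- Segments write to the gpfdist destination in parallel
CREATE WRITABLE EXTERNAL TABLE ext_expenses_out (LIKE expenses)
LOCATION ('gpfdist://etlhost:8081/expenses_out.txt')
FORMAT 'TEXT' (DELIMITER '|');

-- Unload: select from the regular table into the external table
INSERT INTO ext_expenses_out SELECT * FROM expenses;
```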
  55. 55. 55© Copyright 2013 Pivotal. All rights reserved. File Based External Tables Ÿ  Specify format of input files –  FORMAT clause Ÿ  Specify location of external data sources (URIs) Ÿ  Specify protocol to access external data sources –  gpfdist ▪  Provides the best performance ▪  Segments access external files in parallel up to the value of gp_external_max_segments (Default 64) –  gpfdists ▪  Secure version of gpfdist –  file:// ▪  Segments access external files in parallel based on the number of URIs
  56. 56. 56© Copyright 2013 Pivotal. All rights reserved. Web Based External Tables Ÿ  Command based –  Output of shell command or scripts defines web table data –  EXECUTE command Ÿ  URL based –  Accesses data on a web server using HTTP protocol –  Web data files must reside on a web server that segment hosts can access
  57. 57. 57© Copyright 2013 Pivotal. All rights reserved. Load Using Regular External Tables Ÿ  File based (flat files) –  gpfdist provides the best performance =# CREATE EXTERNAL TABLE ext_expenses (name text, date date, amount float4, category text, description text) LOCATION ( 'gpfdist://etlhost:8081/*.txt', 'gpfdist://etlhost:8082/*.txt') FORMAT 'TEXT' (DELIMITER '|' ); $ gpfdist -d /var/load_files1/expenses -p 8081 -l /home/gpadmin/log1 & $ gpfdist -d /var/load_files2/expenses -p 8082 -l /home/gpadmin/log2 &
  58. 58. 58© Copyright 2013 Pivotal. All rights reserved. Load Using External Web Tables Ÿ  Shell command or script based =# CREATE EXTERNAL WEB TABLE log_output (linenum int, message text) EXECUTE '/var/load_scripts/get_log_data.sh' ON HOST FORMAT 'TEXT' (DELIMITER '|'); Ÿ  URL based =# CREATE EXTERNAL WEB TABLE ext_expenses (name text, date date, amount float4, category text, description text) LOCATION ( 'http://intranet.company.com/expenses/sales/file.csv' ) FORMAT 'CSV' ( HEADER );
  59. 59. 59© Copyright 2013 Pivotal. All rights reserved. COPY Ÿ  Quick and easy Ÿ  Recommended for small loads –  Not recommended for bulk loads Ÿ  Load from a file or standard input Ÿ  Is not parallel; uses a single process on the master –  Can improve performance by running multiple COPY commands concurrently –  Data must be divided across all concurrent processes Ÿ  Source file must be accessible by the master
  60. 60. 60© Copyright 2013 Pivotal. All rights reserved. GPLOAD Ÿ  Interface to readable external tables –  Invokes gpfdist for parallel loading Ÿ  Creates external table based on source data defined Ÿ  Uses load specification defined in a YAML formatted control file –  INPUT ▪  Hosts, ports, file structure –  OUTPUT ▪  Target Table ▪  MODES: INSERT, UPDATE, MERGE ▪  BEFORE & AFTER SQL statements
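A minimal sketch of what such a YAML control file might look like; every value below (database, hosts, ports, file paths, table name) is a hypothetical example, not taken from the slides:

```yaml
VERSION: 1.0.0.1
DATABASE: casino
USER: gpadmin
HOST: mdw
PORT: 5432
GPLOAD:
  INPUT:
    - SOURCE:
        LOCAL_HOSTNAME: [etl1]
        PORT: 8081
        FILE: [/var/load/blackjack_bets_*.dat]
    - FORMAT: text
    - DELIMITER: '|'
  OUTPUT:
    - TABLE: blackjack_bets
    - MODE: insert
```

Invoked as `gpload -f blackjack_bets.yml`, gpload starts gpfdist, creates the external table from this specification, and performs the load in the requested mode.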
  61. 61. 61© Copyright 2013 Pivotal. All rights reserved. Agenda Ÿ  Introduction Ÿ  GPDB Architecture Overview Ÿ  Distribution and Partitioning Ÿ  Loading External Data Ÿ  Maintenance Procedures Ÿ  Performance Considerations Ÿ  Competitive
  62. 62. 62© Copyright 2013 Pivotal. All rights reserved. ANALYZE and Database Statistics Ÿ  Up-to-date statistics are critical for the query planner to generate optimal query plans –  When a table is analyzed, information about its data is stored in system catalog tables Ÿ  Always run ANALYZE after loading data Ÿ  Run ANALYZE after INSERT, UPDATE and DELETE operations that significantly change the underlying data Ÿ  gp_autostats_on_change_threshold can be used in conjunction with gp_autostats_mode to automatically analyze during these operations
  63. 63. 63© Copyright 2013 Pivotal. All rights reserved. • For very large tables it may not be feasible to run ANALYZE on the entire table • ANALYZE may be performed for specific columns • Run ANALYZE for –  Columns used in a JOIN condition –  Columns used in a WHERE clause –  Columns used in a SORT clause –  Columns used in a GROUP BY or HAVING Clause ANALYZE [table [ (column [, ...] ) ]]
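For example (table and column names are hypothetical):

```sql
-- Whole table, e.g. right after a bulk load
ANALYZE orders;

-- Only the columns that appear in joins and predicates, when a
-- full ANALYZE of a very large table is impractical
ANALYZE orders (order_date, customer_id);
```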
  64. 64. 64© Copyright 2013 Pivotal. All rights reserved. VACUUM Ÿ  VACUUM reclaims physical space on disk from deleted or updated rows or aborted load/insert operations Ÿ  VACUUM collects table-level statistics such as the number of rows and pages Ÿ  Run VACUUM after –  Large DELETE operations –  Large UPDATE operations –  Failed load operations
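A typical post-maintenance sequence might look like this (table name hypothetical):

```sql
DELETE FROM orders WHERE order_date < date '2006-01-01';
VACUUM orders;   -- reclaim the space left by the deleted rows
ANALYZE orders;  -- then refresh planner statistics
```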
  65. 65. 65© Copyright 2013 Pivotal. All rights reserved. Free Space Map Ÿ  Expired rows are tracked in the free space map Ÿ  The free space map must be large enough to hold all expired rows Ÿ  VACUUM cannot reclaim space occupied by expired rows that overflow the free space map –  VACUUM FULL reclaims all expired row space ▪  Is an expensive operation ▪  Takes an exceptionally long time to finish Ÿ  Sized via max_fsm_pages and max_fsm_relations
  66. 66. 66© Copyright 2013 Pivotal. All rights reserved. Agenda Ÿ  Introduction Ÿ  GPDB Architecture Overview Ÿ  Distribution and Partitioning Ÿ  Loading External Data Ÿ  Maintenance Procedures Ÿ  Performance Considerations Ÿ  Competitive
  67. 67. 67© Copyright 2013 Pivotal. All rights reserved. GPDB Optimization and Performance Tuning Ÿ  Review optimization, tuning and best practices provided in the applicable topic modules within this immersion –  Distributions –  Partitioning –  Storage Orientation –  Compression –  Indexes –  Loading –  ANALYZE –  Query Plans –  VACUUM
  68. 68. 68© Copyright 2013 Pivotal. All rights reserved. Data Types and Byte Alignment Ÿ  Lay out columns in heap tables as follows –  8 byte first (bigint, timestamp) –  4 byte next (int, date) –  2 byte last (smallint) Ÿ  Put distribution and partition columns up front –  Two 4 byte columns = an 8 byte column Ÿ  For example Int, Bigint, Timestamp, Bigint, Timestamp, Int (distribution key), Date (partition key), Bigint, Smallint --> Int (distribution key), Date (partition key), Bigint, Bigint, Timestamp, Bigint, Timestamp, Int, Smallint
  69. 69. 69© Copyright 2013 Pivotal. All rights reserved. Set Based versus Row Based Ÿ  PL/SQL and other procedural languages utilize cursors to operate on one record at a time –  Typically, these programs are written by programmers, not database experts –  Looping over a set of records returned by a query results in an additional query plan per record Ÿ  GPDB performs better when dealing with operations over a set of records –  1 query plan for the whole set of records versus 1 main query plan + 1 per row
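As an illustration of the set-based style (tables and columns are hypothetical), a single UPDATE replaces a cursor loop and is planned once:

```sql
-- One plan for the whole set, instead of one UPDATE per cursor row
UPDATE orders o
SET    amount = o.amount * 1.10
FROM   customer c
WHERE  o.customer_id = c.c_customer_id
AND    c.tier = 'premium';
```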
  70. 70. 70© Copyright 2013 Pivotal. All rights reserved. Agenda Ÿ  Introduction Ÿ  GPDB Architecture Overview Ÿ  Distribution and Partitioning Ÿ  Loading External Data Ÿ  Maintenance Procedures Ÿ  Performance Considerations Ÿ  Competitive
  77. 77. 77© Copyright 2013 Pivotal. All rights reserved. Competitive Solutions [Table comparing GPDB against TD, VERT, EXA and NZ on: using as ETL engine, using as BI datasource, extensibility, openness, HW flexibility, TCO, and vendor focus; the cell ratings were graphical and did not survive extraction.]
  78. 78. 78© Copyright 2013 Pivotal. All rights reserved. Summary Ÿ  Greenplum is the first open source MPP database Ÿ  With over 400 enterprise customers Ÿ  Representing more than 10 years of development Ÿ  With great performance and scalability Ÿ  And great extensibility and analytical capabilities Ÿ  The community is still young – feel free to contribute!
  79. 79. BUILT FOR THE SPEED OF BUSINESS


This is the presentation I delivered at the Hadoop User Group Ireland meetup in Dublin on Nov 28 2015. It covers at a glance the architecture of GPDB and, most importantly, its features. Sorry for the colors - Slideshare is crappy with PDFs
