Deep Dive on Amazon Redshift
1. Deep Dive on Amazon Redshift
Storage Subsystem and Query Life Cycle
Dhanraj Pondicherry,
Manager, Solutions Architecture
June 2017
2. Deep Dive Overview
• Amazon Redshift History and Development
• Cluster Architecture
• Concepts and Terminology
• Storage Deep Dive
• Design Considerations
• Query Life Cycle
• Loading Best Practices
• New & Upcoming Features
• Open Q&A
13. Designed for I/O Reduction
• Columnar storage
• Data compression
• Zone maps
aid loc dt
1 SFO 2016-09-01
2 JFK 2016-09-14
3 SFO 2017-04-01
4 JFK 2017-05-14
• Accessing dt with row storage:
– Need to read everything
– Unnecessary I/O
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
);
14. Designed for I/O Reduction
• Columnar storage
• Data compression
• Zone maps
aid loc dt
1 SFO 2016-09-01
2 JFK 2016-09-14
3 SFO 2017-04-01
4 JFK 2017-05-14
• Accessing dt with columnar storage:
– Only scan blocks for the relevant column
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
);
15. Designed for I/O Reduction
• Columnar storage
• Data compression
• Zone maps
aid loc dt
1 SFO 2016-09-01
2 JFK 2016-09-14
3 SFO 2017-04-01
4 JFK 2017-05-14
• Columns grow and shrink independently
• Effective compression ratios due to like data
• Reduces storage requirements
• Reduces I/O
CREATE TABLE loft_deep_dive (
aid INT ENCODE LZO
,loc CHAR(3) ENCODE BYTEDICT
,dt DATE ENCODE RUNLENGTH
);
16. Designed for I/O Reduction
• Columnar storage
• Data compression
• Zone maps
aid loc dt
1 SFO 2016-09-01
2 JFK 2016-09-14
3 SFO 2017-04-01
4 JFK 2017-05-14
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
);
• In-memory block metadata
• Contains per-block MIN and MAX value
• Effectively prunes blocks that cannot contain data for a given query
• Eliminates unnecessary I/O
17. SELECT COUNT(*) FROM LOGS WHERE DATE = '09-JUNE-2013'
Zone Maps (per-block MIN/MAX values of the date column):
Unsorted table: [01-JUNE-2013 .. 20-JUNE-2013], [08-JUNE-2013 .. 30-JUNE-2013], [12-JUNE-2013 .. 20-JUNE-2013], [02-JUNE-2013 .. 25-JUNE-2013] – three of the four blocks could contain 09-JUNE-2013 and must be scanned
Sorted by date: [01-JUNE-2013 .. 06-JUNE-2013], [07-JUNE-2013 .. 12-JUNE-2013], [13-JUNE-2013 .. 18-JUNE-2013], [19-JUNE-2013 .. 24-JUNE-2013] – only the single block whose range covers 09-JUNE-2013 is scanned
18. Terminology and Concepts: Data Sorting
• Goals:
• Physically order rows of table data based on certain column(s)
• Optimize effectiveness of zone maps
• Enable MERGE JOIN operations
• Impact:
• Enables range-restricted scans (rrscans) to prune blocks by leveraging zone maps
• Overall reduction in block I/O
• Achieved with the table property SORTKEY defined over one or more columns
• Optimal SORTKEY is dependent on:
• Query patterns
• Data profile
• Business requirements
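A minimal sketch of declaring a sort key on the example table, assuming queries filter mostly on dt (the zone maps on dt then stay tight and date-range predicates prune blocks):
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
) SORTKEY (dt);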
19. Terminology and Concepts: Slices
• A slice can be thought of like a “virtual compute node”
– Unit of data partitioning
– Parallel query processing
• Facts about slices:
– Each compute node has either 2, 16, or 32 slices
– Table rows are distributed to slices
– A slice processes only its own data
20. Data Distribution
• Distribution style is a table property which dictates how that table’s data is
distributed throughout the cluster:
• KEY: Value is hashed, same value goes to same location (slice)
• ALL: Full table data goes to first slice of every node
• EVEN: Round robin
• Goals:
• Distribute data evenly for parallel processing
• Minimize data movement during query processing
[Diagram: rows of a table distributed across Node 1 (Slice 1, Slice 2) and Node 2 (Slice 3, Slice 4) under the KEY, ALL, and EVEN distribution styles]
21. Data Distribution: Example
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
) DISTSTYLE (EVEN|KEY|ALL);
[Diagram: compute nodes CN1 (Slice 0, Slice 1) and CN2 (Slice 2, Slice 3); on each slice, table loft_deep_dive consists of 3 user columns (aid, loc, dt) and 3 system columns (ins, del, row)]
22. Data Distribution: EVEN Example
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
) DISTSTYLE EVEN;
INSERT INTO loft_deep_dive VALUES
(1, 'SFO', '2016-09-01'),
(2, 'JFK', '2016-09-14'),
(3, 'SFO', '2017-04-01'),
(4, 'JFK', '2017-05-14');
With DISTSTYLE EVEN, the four rows are distributed round robin, so each of the four slices (0-3) holds 1 row.
(3 User Columns + 3 System Columns) x (4 slices) = 24 Blocks (24MB)
23. Data Distribution: KEY Example #1
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
) DISTSTYLE KEY DISTKEY (loc);
INSERT INTO loft_deep_dive VALUES
(1, 'SFO', '2016-09-01'),
(2, 'JFK', '2016-09-14'),
(3, 'SFO', '2017-04-01'),
(4, 'JFK', '2017-05-14');
With DISTKEY (loc), rows with the same loc value hash to the same slice: the two 'SFO' rows land on one slice and the two 'JFK' rows on another, while the remaining two slices hold no data.
(3 User Columns + 3 System Columns) x (2 slices) = 12 Blocks (12MB)
24. Data Distribution: KEY Example #2
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
) DISTSTYLE KEY DISTKEY (aid);
INSERT INTO loft_deep_dive VALUES
(1, 'SFO', '2016-09-01'),
(2, 'JFK', '2016-09-14'),
(3, 'SFO', '2017-04-01'),
(4, 'JFK', '2017-05-14');
With DISTKEY (aid), the four distinct aid values hash to different slices, so each of the four slices holds 1 row.
(3 User Columns + 3 System Columns) x (4 slices) = 24 Blocks (24MB)
25. Data Distribution: ALL Example
CREATE TABLE loft_deep_dive (
aid INT --audience_id
,loc CHAR(3) --location
,dt DATE --date
) DISTSTYLE ALL;
INSERT INTO loft_deep_dive VALUES
(1, 'SFO', '2016-09-01'),
(2, 'JFK', '2016-09-14'),
(3, 'SFO', '2017-04-01'),
(4, 'JFK', '2017-05-14');
With DISTSTYLE ALL, the full table (all four rows) is stored on the first slice of each node, so two slices hold 4 rows each.
(3 User Columns + 3 System Columns) x (2 slices) = 12 Blocks (12MB)
26. Terminology and Concepts: Data Distribution
• KEY
– Use when a high-cardinality column produces an even distribution of data (see the sketch after this list)
– Joins are performed between large fact/dimension tables
– Optimizes merge joins and GROUP BY
• ALL
– Small and medium size dimension tables (< 2-3M rows)
• EVEN
– When no key can produce an even distribution
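As an illustrative sketch of these guidelines (table and column names are hypothetical, not from the deck), a large fact table might be distributed on the column it is joined on while a small dimension table uses ALL:
CREATE TABLE sales ( -- large fact table (hypothetical)
sale_id BIGINT
,customer_id INT
,amount DECIMAL(12,2)
) DISTSTYLE KEY DISTKEY (customer_id);
CREATE TABLE customers ( -- small dimension table (hypothetical)
customer_id INT
,name VARCHAR(100)
) DISTSTYLE ALL;
-- Joins on customer_id can then run without redistributing rows across slices.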
28. Storage Deep Dive: Disks
• Redshift utilizes locally attached storage devices
• Compute nodes have 2.5-3x the advertised storage capacity
• 1, 3, 8, or 24 disks depending on node type
• Each disk is split into two partitions
– Local data storage, accessed by local CN
– Mirrored data, accessed by remote CN
• Partitions are raw devices
– Local storage devices are ephemeral in nature
– Tolerant to multiple disk failures on a single node
29. Storage Deep Dive: Blocks
• Column data is persisted to 1MB immutable blocks
• Each block contains in-memory metadata:
– Zone Maps (MIN/MAX value)
– Location of previous/next block
• Blocks are individually compressed with 1 of 10 encodings
• A full block contains between 16 and 8.4 million values
30. Storage Deep Dive: Columns
• Column: Logical structure accessible via SQL
• Physical structure is a doubly linked list of blocks
• These blockchains exist on each slice for each column
• All sorted & unsorted blockchains compose a column
• Column properties include:
– Distribution Key
– Sort Key
– Compression Encoding
• Columns shrink and grow independently, 1 block at a time
• Three system columns per table-per slice for MVCC
31. Block Properties: Design Considerations
• Small writes:
• Batch processing system, optimized for processing massive amounts of data
• 1MB size + immutable blocks means that we clone blocks on write so as not to introduce fragmentation
• A small write (~1-10 rows) has a similar cost to a larger write (~100K rows)
• UPDATE and DELETE:
• Immutable blocks mean that rows are only logically deleted on UPDATE or DELETE
• Must VACUUM or DEEP COPY to remove ghost rows from the table
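A brief, hedged sketch of the behavior described above, reusing the example table from the earlier slides:
UPDATE loft_deep_dive SET loc = 'LAX' WHERE aid = 1; -- the old row version is only logically deleted
DELETE FROM loft_deep_dive WHERE dt < '2017-01-01'; -- deleted rows remain on disk as ghost rows
VACUUM loft_deep_dive; -- reclaims space and re-sorts; a deep copy (CTAS into a new table) is an alternative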
32. Column Properties: Design Considerations
• Compression:
• COPY automatically analyzes and compresses data when loading into empty tables
• ANALYZE COMPRESSION checks existing tables and proposes optimal compression algorithms for each column (see the sketch after this list)
• Changing column encoding requires a table rebuild
• DISTKEY and SORTKEY significantly influence performance (orders of magnitude)
• Distribution Keys:
• A poor DISTKEY can introduce data skew and an unbalanced workload
• A query completes only as fast as the slowest slice completes
• Sort Keys:
• A SORTKEY is only as effective as the data profile allows it to be
• Selectivity needs to be considered
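A minimal sketch of checking and applying encodings on the example table (the recommended encodings depend entirely on your data; ZSTD/BYTEDICT/RUNLENGTH below are placeholders):
ANALYZE COMPRESSION loft_deep_dive; -- proposes an encoding per column from a sample of the data
-- Changing an encoding requires a table rebuild, e.g. a deep copy:
CREATE TABLE loft_deep_dive_new (
aid INT ENCODE ZSTD
,loc CHAR(3) ENCODE BYTEDICT
,dt DATE ENCODE RUNLENGTH
);
INSERT INTO loft_deep_dive_new SELECT * FROM loft_deep_dive;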
34. Storage Deep Dive: Slices
• Each compute node has either 2, 16, or 32 slices
• A slice can be thought of like a “virtual compute node”
– Unit of data partitioning
– Parallel query processing
• Facts about slices:
– Table rows are distributed to slices
– A slice processes only its own data
– Within a compute node all slices read from and write to all disks
37. Query Execution Terminology
• Step: An individual operation needed during query execution. Steps need to be
combined to allow compute nodes to perform a join. Examples: scan, sort,
hash, aggr
• Segment: A combination of several steps that can be done by a single process.
The smallest compilation unit executable by a slice. Segments within a stream
run in parallel.
• Stream: A collection of combined segments which output to the next stream or
SQL client.
39. Query Lifecycle
Client (JDBC/ODBC) → Leader Node: parser → query planner → code generator → final computations; code is generated for all segments of one stream, and explain plans are produced here.
Leader Node → Compute Nodes: each compute node receives the compiled code, runs it, and returns results to the leader node, which returns results to the client.
Segments in a stream are executed concurrently. Each step in a segment is executed serially.
40. Query Execution Deep Dive: Leader Node
1. The leader node receives the query and parses the SQL.
2. The parser produces a logical representation of the original query.
3. This query tree is input into the query optimizer (volt).
4. Volt rewrites the query to maximize its efficiency. Sometimes a single query will be
rewritten as several dependent statements in the background.
5. The rewritten query is sent to the planner, which generates one or more query plans and selects the plan with the best estimated performance.
6. The query plan is sent to the execution engine, where it’s translated into steps,
segments, and streams.
7. This translated plan is sent to the code generator, which generates a C++ function
for each segment.
8. This generated C++ is compiled with gcc to a .o file and distributed to the compute
nodes.
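To see the plan the planner settles on, you can prefix a query with EXPLAIN; a hedged example against the earlier table (the exact plan output varies with data and statistics):
EXPLAIN
SELECT loc, COUNT(*)
FROM loft_deep_dive
WHERE dt BETWEEN '2016-09-01' AND '2016-12-31'
GROUP BY loc;
-- The first execution also pays the one-time cost of generating and compiling the segment code;
-- the compiled code is then cached for later executions.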
41. Query Execution Deep Dive: Compute Nodes
• Slices execute the query segments in parallel.
• Executable segments are created for one stream at a time. When the segments
of that stream are complete, the engine generates the segments for the next
stream.
• When the compute nodes are done, they return the query results to the leader
node for final processing.
• The leader node merges the data into a single result set and addresses any
needed sorting or aggregation.
• The leader node then returns the results to the client.
44. Parallelism considerations with Redshift slices
DS2.8XL Compute Node
• Ingestion Throughput:
– Each slice’s query processors can load one file at a time:
• Streaming decompression
• Parse
• Distribute
• Write
• Loading a single file realizes only partial node usage, as just 6.25% (1 of 16) of slices are active
45. Design considerations for Redshift slices
• Use at least as many input files as there are slices in the cluster
• With 16 input files, all slices are working, so you maximize throughput
• COPY continues to scale linearly as you add nodes
[Diagram: 16 input files loaded in parallel across the 16 slices (0-15) of a DS2.8XL compute node]
46. Data Preparation
• Export Data from Source System
– CSV recommended (simple delimiter '|' or ',')
• Be aware of UTF-8 varchar columns (UTF-8 characters can take up to 4 bytes each)
• Be aware of your NULL character (\N)
– GZIP Compress Files
– Split Files (1MB – 1GB after gzip compression)
• Useful COPY Options for PoC Data
– MAXERRORS
– ACCEPTINVCHARS
– NULL AS
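A hedged COPY sketch following the guidance above (the bucket, key prefix, and IAM role ARN are placeholders):
COPY loft_deep_dive
FROM 's3://my-bucket/loft/part_' -- prefix matching the split, gzipped files
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER '|'
GZIP
NULL AS '\N'
ACCEPTINVCHARS
MAXERROR 10;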
48. Enter Amazon Redshift Spectrum
Run SQL queries directly against data in S3 using thousands of nodes
• Fast @ exabyte scale
• Elastic & highly available
• On-demand, pay-per-query
• High concurrency: multiple clusters access the same data
• No ETL: query data in-place using open file formats
• Full Amazon Redshift SQL support
49. Paradigm Shift Enabled by Redshift Spectrum
• Traditional approach: analyze subsets of data – you had to pick and choose which data you wanted to analyze, limited to the data that fits in your data warehouse
• Redshift Spectrum approach: analyze ALL available data – analyze any of the data in your data lake
50. Amazon Redshift Spectrum uses ANSI SQL
• Amazon Redshift Spectrum seamlessly integrates with your existing SQL & BI apps
• Support for complex joins, nested queries & window functions
• Support for data partitioned in S3 by any key – date, time, and any other custom keys (e.g., year, month, day, hour)
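A hedged sketch of querying partitioned data in S3 via Spectrum (schema, table, bucket, and role names are hypothetical):
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG DATABASE 'spectrumdb'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
CREATE EXTERNAL TABLE spectrum_schema.events (
event_id BIGINT
,loc CHAR(3)
)
PARTITIONED BY (event_year INT, event_month INT) -- partitions are registered with ALTER TABLE ... ADD PARTITION
STORED AS PARQUET
LOCATION 's3://my-bucket/events/';
SELECT event_year, event_month, COUNT(*)
FROM spectrum_schema.events
WHERE event_year = 2017
GROUP BY event_year, event_month;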
52. Amazon Redshift Spectrum is secure
• End-to-end data encryption: encrypt S3 data using SSE and AWS KMS; encrypt all Amazon Redshift data using KMS, AWS CloudHSM, or your on-premises HSMs; enforce SSL with perfect forward secrecy using ECDHE
• Virtual private cloud: the Amazon Redshift leader node runs in your VPC; compute nodes run in a private VPC; Redshift Spectrum nodes run in a private VPC and store no state
• Alerts & notifications: communicate event-specific notifications via email, text message, or call with Amazon SNS
• Audit logging: all API calls are logged using AWS CloudTrail; all SQL statements are logged within Amazon Redshift
• Certifications & compliance: PCI DSS, FedRAMP, SOC 1/2/3, HIPAA/BAA
53. Customers love Amazon Redshift Spectrum
• “Redshift Spectrum will let us expand the universe of the data we analyze to 100s of petabytes over time. This is truly a game changer, and we can think of no other system in the world that can get us there.”
• “Multiple teams can now query the same Amazon S3 data sets using both Amazon Redshift and Amazon EMR.”
• “Redshift Spectrum will help us scale yet further while also lowering our costs.”
• “Redshift Spectrum’s fast performance across massive data sets is unprecedented.”
• “Redshift Spectrum enables us to directly operate on our data in its native format in Amazon S3 with no preprocessing or transformation.”
• “Our data science team using Amazon EMR can now collaborate with our marketing and product teams using Redshift Spectrum to analyze the same Amazon S3 data sets.”
54. Recently Released Features
QMR - Query Monitoring Rules
• Apply rules to inflight queries
New Data Type - TIMESTAMPTZ
• Support for timestamp with time zone: new TIMESTAMPTZ data type to input complete timestamp values that include the date, the time of day, and a time zone, e.g., 30 Nov 07:37:16 2016 PST
Multi-byte Object Names
• Support for Multi-byte (UTF-8) characters for tables, columns, and other database object names
User Connection Limits
• You can now set a limit on the number of database connections a user is permitted to have open concurrently
Automatic Data Compression for CTAS
• All newly created tables will leverage default encoding
New Column Encoding ZSTD
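Small, hedged examples of the features above (the user and table names are illustrative; the time zone format follows the example on this slide):
CREATE TABLE events_tz (ts TIMESTAMPTZ ENCODE ZSTD); -- TIMESTAMPTZ column with the new ZSTD encoding
INSERT INTO events_tz VALUES ('2016-11-30 07:37:16 PST');
ALTER USER etl_user CONNECTION LIMIT 10; -- cap the number of concurrent connections for a user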
55. Recently Released Features
Performance Enhancements
• Vacuum (10x faster for deletes)
• Snapshot Restore (2x faster)
• Queries (Up to 5x faster)
Copy Can Extend Sorted Region on Single Sort Key
• No need to vacuum when loading in sorted order
Enhanced VPC Routing
• Restrict S3 Bucket Access
Schema Conversion Tool - One-Time Data Exports
• Oracle, Teradata, SQL Server
Schema Conversion Tool
• Vertica, SQL Server, Netezza and Greenplum
56. Coming Soon: IAM Authentication
[Diagram: BI tools, SQL clients, and analytics tools connect to Amazon Redshift over ODBC/JDBC; new Redshift ODBC/JDBC drivers grab the ticket (userid) and obtain a SAML assertion from the corporate Active Directory / ADFS identity provider, with IAM providing single sign-on for user groups and individual users]
57. Coming Soon: Lots More …
Automatic and Incremental Background VACUUM
• Reclaims space and sorts when Amazon Redshift clusters are idle
• Vacuum is initiated when performance can be enhanced
• Improves ETL and query performance
Short Query Bias
• Prioritize interactive short running queries
58. Resources
• https://github.com/awslabs/amazon-redshift-utils
• https://github.com/awslabs/amazon-redshift-monitoring
• https://github.com/awslabs/amazon-redshift-udfs
• Admin scripts
Collection of utilities for running diagnostics on your cluster
• Admin views
Collection of utilities for managing your cluster, generating schema DDL, etc.
• ColumnEncodingUtility
Gives you the ability to apply optimal column encoding to an established schema with
data already loaded
• Amazon Redshift Engineering’s Advanced Table Design Playbook
https://aws.amazon.com/blogs/big-data/amazon-redshift-engineerings-advanced-table-design-playbook-preamble-prerequisites-and-prioritization/
Standard connection management
Behind the leader node are Compute nodes! That’s where all the data sits. All these compute nodes execute snippets of code acting on the local data sets. Results are then sent to the leader node.
MPP architecture. – Main takeaway
Data compression – different compression setting for the column.
See example on the right. Encoding types
Zone maps – in-memory structures that store the min and max values for each block of a column
Orange and white represent blocks – 1MB chunks of data for each column
When a query comes in, we check these zone maps to identify blocks that might contain relevant data. This helps us reduce IO before we hit the disks.
Blocks are a
Sorted columns enable fetching the minimum number of blocks required for query execution. In this example, an unsorted table almost leads to a full table scan O(N) and a sorted table leads to one block scanned O(1).
Observed that having the right sort key has a massive impact on effectiveness of ZoneMaps.
A lot of time customers will have time-series data. Querying between two timestamps. That’s an example of where zone-maps will help minimize I/O.
So when picking a sort key, consider the query predicates or patterns, the data profile, and business requirements.
Concept we have called “Slices”; a virtual compute node
Remember we spoke about Leader nodes and compute nodes? Each compute node is further divided into “Slices”
Each slice has its own compute, storage and memory allocation!
It’s how we get Parallelism within each compute node.
“Key” - will go into it soon. Don’t go too deep.
Even
All
Some examples.
In summary, what you need to know is that the data distribution strategy has the highest impact on Redshift performance.
Use KEY if you have large tables with a high-cardinality column that gives you an even distribution w/o hotspots. If you have that, then look at other considerations.
Merge joins are a special case where you might join large tables, especially if you have joins w/ other tables or use GROUP BY.
Coming from other DW systems, you have to know that we have 2.5 to 3x more storage on the compute nodes.
The space we advertise is the usable space you get.
1MB immutable chunks of data stored column by column.
We offer encoding/compression types by column.
We already spoke about Zonemaps metadata for each blocks.
How do we keep track of a lot of blocks?
We call it blockchains. You can think of a blockchain as a collection of blocks for each column on each slice.
There’s two blockchains – one for sorted and for unsorted regions
ACID compliant – so we support MVCC.
3 other columns – rowid, deleted-id, transaction-id.
Optimized for data warehousing – it is optimized for processing massive amounts of data.
So small writes are just as expensive as large writes. We have optimizations to reduce fragmentation.
What you need to know is that updates are essentially a delete combined with an insert at the end of the table.
So the deleted rows have to be removed w/ a mechanism called VACUUM.
Largest impact on performance
Dist keys – highest impact on performance – if your data warehouse is not giving you the performance you need, look at this first
Next would be Sort keys
Those will give you an order of magnitude difference in performance.
Now we’ve built up the foundation of how storage works and all that stuff, lets look at how a query executes in Redshift
When you throw a query at Redshift, it first goes to a parser. Redshift then rewrites the query – stripping out things like syntactic sugar, etc.
Then it goes to the Planner
What you’d notice is that the first time you execute a query, it takes a few extra seconds.
That’s for the parsing, generating plans and picking the correct plan and then generating the code
Once we’ve done all that we cache the compiled code; persists across cluster reboots
GZIP is highly recommended – allows you to optimize the network transfer
25-April – Virginia, Ohio and Oregon. If you saw Werner run a complex query against
Separates Compute and Storage
Multiple clusters can access same data in S3
Limitless Concurrency
Supports querying against open data formats on S3
Join existing data in Redshift with data in S3, using a single query
Cost-savings : Offload less frequently used data to S3
Same SQL support, same security certifications (HIPAA, PCI, etc)
Nested sub queries
Mention that customers can leverage AWS KMS in addition to their HSMs
QMR – allows you to specify rules and limits around, say, ad-hoc queries and kill them if they're going to end up w/ large Cartesian joins.
Timestamptz - New Data type
Automatic Data Compression for CTAS
Connection limits
Added ZSTD compression type – created by Facebook and gaining in popularity
If adding data to a sorted table, you won’t have to run vacuum
1 – customers setup their AD and IdP environment, and register with IAM
2 – End-users leverage SSO with Corporate AD and IdP
3 – Redshift ODBC/JDBC drivers obtain SAML assertion from IdP
4 – Redshift drivers get temporary credentials
5 – Connection is established between client and Redshift