The document discusses AWS Glue, a fully managed ETL service. It provides an overview of Glue's programming environment and data processing model, then gives several examples of optimizing Glue job performance: processing many small files, processing a few large files, tuning parallelism with JDBC partitions, improving Python performance, and using the new Python shell job type.
11. AWS Glue
Serverless data catalog & ETL service
• Data Catalog: discover data and extract schema
• ETL job authoring: auto-generates customizable ETL code in Python and Scala
• Automatically discovers data and stores schema
• Data is searchable and available for ETL
• Generates customizable code
• Schedules and runs your ETL jobs
• Serverless, flexible, and built on open standards
12. Putting it together: data lake with AWS Glue
[Diagram: Amazon S3 (raw data) → Amazon S3 (staging data) → Amazon S3 (processed data), with AWS Glue crawlers at each stage populating the AWS Glue Data Catalog]
20. Basics of ETL Job Programming
1. Initialize
2. Read
3. Transform data
4. Write

## Initialize
glueContext = GlueContext(SparkContext.getOrCreate())

## Create DynamicFrame and retrieve data from source
ds0 = glueContext.create_dynamic_frame.from_catalog(
    database="mysql", table_name="customer",
    transformation_ctx="ds0")

## Implement data transformation here
ds1 = ds0 ...

## Write DynamicFrame to the target table defined in the Catalog
ds2 = glueContext.write_dynamic_frame.from_catalog(
    frame=ds1, database="redshift",
    table_name="customer_dim",
    redshift_tmp_dir=args["TempDir"],
    transformation_ctx="ds2")
21. What is Apache Spark?
• Parallel, scale-out data processing engine
• Fault tolerance built in
• Flexible interface: Python scripting, SQL
• Rich ecosystem: ML, graph, analytics, …

Apache Spark and AWS Glue ETL
• Spark core: RDDs
• SparkSQL: DataFrames; AWS Glue ETL: DynamicFrames
• AWS Glue ETL libraries
• Integration: Data Catalog, job orchestration, code generation, job bookmarks, S3, RDS
• ETL transforms, more connectors & formats
• New data structure: DynamicFrames
22. DataFrames and DynamicFrames
DataFrames
• Core data structure for SparkSQL
• Like structured tables
• Need schema up front
• Each row has the same structure
• Suited for SQL-like analytics
DynamicFrames
• Like DataFrames, designed for ETL
• Built for processing semi-structured data, e.g. JSON, Avro, Apache logs, …
23. The public GitHub timeline is …
• 35+ event types
• semi-structured
• payload structure and size varies by event type
24. Dynamic Frame internals
© 2017, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
• Schema is per record; no up-front schema needed
• Easy to restructure, tag, modify
• Can be more compact than DataFrame rows
• Many flows can be done in a single pass
[Diagram: Dynamic Records such as {"id": "2489", "type": "CreateEvent", "payload": {"creator": …}, …}, {"id": 4391, "type": "PullEvent", "payload": {"assets": …}, …}, and {"id": "6510", "type": "PushEvent", "payload": {"pusher": …}, …} unified under a single Dynamic Frame schema with id and type fields]
25. Dynamic Frame transforms
• 15+ transforms out of the box
• ResolveChoice(): resolves fields with ambiguous (choice) types, e.g. by projecting to one type, casting, or separating the field into one column per type
• ApplyMapping(): maps and renames source columns to target columns
26. Relationalize() transform
Semi-structured schema → relational schema
• Transforms and adds new columns, types, and tables on the fly
• Tracks keys and foreign keys across runs
• SQL on the relational schema is orders of magnitude faster than JSON processing
[Diagram: nested fields A, B, C.X, C.Y and an array D flattened into relational tables linked by PK/FK columns, with array elements stored as value/offset rows]
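The flattening Relationalize() performs can be pictured with a rough pure-Python sketch: nested structs become dotted column names, and one array field is pivoted into a child table linked back to the parent by a generated key. The `pk`/`fk`/`offset` names and the rules below are simplified stand-ins for what Glue actually generates:

```python
def relationalize(records, array_field):
    """Flatten nested dicts into dotted columns and pivot one array field
    into a child table joined back to the parent by a generated key."""
    root, child = [], []
    for pk, rec in enumerate(records):
        flat = {"pk": pk}
        for key, value in rec.items():
            if key == array_field:
                for offset, item in enumerate(value):
                    child.append({"fk": pk, "offset": offset, "value": item})
            elif isinstance(value, dict):
                for subkey, subvalue in value.items():
                    flat[f"{key}.{subkey}"] = subvalue
            else:
                flat[key] = value
        root.append(flat)
    return root, child

# Mirrors the slide: scalar A, struct C with X and Y, array D.
records = [{"A": 1, "C": {"X": 2, "Y": 3}, "D": [10, 20]}]
root, child = relationalize(records, "D")
```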
27. Useful AWS Glue transforms
• toDF(): convert to a DataFrame
• fromDF(): convert from a DataFrame
• Spigot(): sample data of any DynamicFrame to S3
• Unbox(): parse a string column in a given format into a DynamicFrame
• Filter(), Map(): apply Python UDFs to DynamicFrames
• Join(): join two DynamicFrames
• And more …
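Unbox() is useful when a column holds serialized JSON strings, e.g. an event payload stored as text. A minimal pure-Python sketch of its effect; the toy rows are illustrative, and in a Glue job the equivalent would be roughly `Unbox.apply(frame=ds, path="payload", format="json")`:

```python
import json

def unbox(records, path, fmt="json"):
    """Parse the string column `path` in each record into a nested value."""
    assert fmt == "json", "this sketch only handles JSON"
    return [{**r, path: json.loads(r[path])} for r in records]

rows = [{"id": 1, "payload": '{"creator": "alice"}'}]
```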
28. Performance: AWS Glue ETL
[Chart: GitHub Timeline ETL performance, DynamicFrames vs. DataFrames; time in seconds (lower is better) for one day, one month, and one year of data, i.e. 24, 744, and 8,699 files]
• On average: 2x performance improvement
• Configuration: 10 DPUs, Apache Spark 2.1.1
• Workload: JSON to CSV, filter for Pull events
29. Performance: lots of small files
Lots of small files, e.g. from Kinesis Firehose
Vanilla Apache Spark (2.1.1) overheads:
• Must reconstruct partitions (2-pass)
• Too many tasks: one task per file
• Scheduling & memory overheads
AWS Glue DynamicFrames:
• Integration with Data Catalog
• Automatically group files per task
• Rely on crawler statistics
[Chart: AWS Glue ETL small-file scalability, Spark vs. Glue, time in seconds by "# partitions : # files" from 1:2K up to 640:1280K; vanilla Spark runs out of memory at >= 320:640K files, while Glue's grouping scales to 1.2 million files]
30. AWS Glue execution model: data partitions
• Apache Spark and AWS Glue are data parallel.
• Data is divided into partitions that are processed concurrently.
• A stage is a set of parallel tasks: one task per partition.
• Overall throughput is limited by the number of partitions.
[Diagram: driver distributing tasks to executors]
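The partition bound can be made concrete with a little arithmetic. Assuming, hypothetically, that a cluster exposes 32 executor slots and tasks cost roughly the same, a stage's wall-clock time is driven by how many sequential "waves" of tasks the partition count forces:

```python
import math

def waves(num_partitions, executor_slots):
    """Number of sequential task waves a stage needs on a given cluster."""
    return math.ceil(num_partitions / executor_slots)

# With 32 slots: 5 partitions finish in one wave but leave 27 slots idle,
# while 64 partitions keep every slot busy for exactly 2 waves.
```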
34. AWS Glue execution model: jobs and stages
[Diagram: an application divided into two jobs; Job 1 spans two stages containing Read, Filter, ApplyMapping, Repartition, Drop Nulls, and Write steps, and Job 2 has a single stage with Read and Show steps]
35. AWS Glue performance: key questions
• How is your dataset partitioned?
• How is your application divided into jobs and stages?
• Data is divided into partitions that are processed concurrently.
38. Example: Processing lots of small files
• Let's look at a straightforward JSON-to-Parquet conversion job.
• 1.28 million JSON files in 640 partitions:
48. Options for grouping files
• groupFiles
  • inPartition: group files within a partition.
  • acrossPartition: group files from different partitions.
• groupSize
  • Target size of each group.
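The effect of grouping can be sketched in plain Python: pack many small file splits into a handful of groups near the target `groupSize`, so each Spark task reads a batch of files instead of a single one. The greedy packing below is a simplified illustration, not Glue's actual algorithm:

```python
def group_files(file_sizes, group_size):
    """Greedily pack file sizes into groups of roughly group_size
    (sizes and group_size in the same units)."""
    groups, current, current_total = [], [], 0
    for size in file_sizes:
        if current and current_total + size > group_size:
            groups.append(current)
            current, current_total = [], 0
        current.append(size)
        current_total += size
    if current:
        groups.append(current)
    return groups

# 1,000 files of 1 MB each collapse into 16 groups at a 64 MB target,
# i.e. 16 read tasks instead of 1,000.
```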
52. Example: Processing a few large files
• Let's see how this looks on a sample dataset of 5 large CSV files.
• Each file is:
  • 12.5 GB uncompressed
  • 1.6 GB gzip
  • 1.3 GB bzip2
• The script converts the data to Parquet.
53. Example: Processing a few large gzip files
• Gzip files cannot be split, so we only have 5 partitions: one for each file.
• Job fails after 2 hours.
54. Example: Processing a few large bzip2 files
• Bzip2 files can be split into blocks, so we see up to 104 tasks.
• Job completes in 18 minutes.
55. Example: Processing a few large bzip2 files
• With 15 DPUs, the number of active executors closely tracks the maximum needed number of executors.
56. Example: Processing a few large uncompressed files
• Uncompressed files can be split into lines, so we construct 64 MB partitions.
• Job completes in 12 minutes.
57. Example: Processing a few large files
• If you have a choice of compression type, prefer bzip2.
• If you are using gzip, make sure you have enough files to fully utilize your resources.
• Bandwidth is rarely the bottleneck for AWS Glue jobs, so consider leaving files uncompressed.
63. AWS Glue JDBC partitions
• For JDBC sources, by default each table is read as a single partition.
• AWS Glue automatically repartitions datasets that have fewer than 10 partitions after the data has been loaded.
66. Reading JDBC partitions
• A single executor is used for the JDBC query.
• Data is repartitioned for the rest of the job.
67. Options for reading database tables in parallel
• hashexpression: integer expression to use for distribution.
• hashfield: single column to use for distribution.
• hashpartitions: number of parallel queries to make. Default is 7.
• These turn into a collection of queries of the form:
68. Options for reading database tables in parallel
Guidelines for picking distribution keys:
• For hashexpression, choose a column that is evenly distributed across values. A primary key works well.
• If no such field exists, use hashfield to define one.
• Example: the taxi dataset does not have a primary key, so we set hashfield to partition based on day of the month:

datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database="nyctaxi",
    table_name="green-mysql-large",
    additional_options={"hashfield": "day(lpep_pickup_datetime)",
                        "hashpartitions": 15})
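Under the hood, each of the `hashpartitions` parallel readers selects its own slice of rows. A sketch of the kind of per-reader WHERE clauses this produces; the hash-mod form below is an illustration of the idea, not the exact SQL Glue emits:

```python
def partition_predicates(hashexpression, hashpartitions):
    """One WHERE clause per parallel JDBC query."""
    return [f"({hashexpression}) % {hashpartitions} = {i}"
            for i in range(hashpartitions)]

# 15 readers each fetch only the rows whose pickup day falls in their bucket.
preds = partition_predicates("day(lpep_pickup_datetime)", 15)
```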
69. Options for reading database tables in parallel
• Four executors can process 16 partitions concurrently.
70. Options for reading database tables in parallel
• Make sure you understand the impact on the database engine.
71. Job bookmarks for JDBC queries
• Job bookmarks only work when the source table has an ordered primary key.
• Updates are not handled today.
73. Python performance
• Using map and filter in Python is expensive for large data sets.
• All data is serialized and sent between the JVM and the Python VM.
• Alternatives:
  • Use the AWS Glue Scala SDK.
  • Convert to a DataFrame and use Spark SQL expressions.
[Diagram: rows shuttling between the Spark JVM and the Python VM]
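The practical fix is to express per-row logic declaratively so it can run inside the JVM instead of round-tripping every row through Python. A minimal illustration of the rewrite, using plain Python lists to stand in for the two styles; in a real job the second form would be a Spark SQL expression on `dyf.toDF()`, e.g. `df.filter("type = 'PullEvent'")`:

```python
records = [{"type": "PullEvent"}, {"type": "PushEvent"}, {"type": "PullEvent"}]

# Style 1: Python UDF. In Glue/Spark every row is serialized from the
# JVM to the Python VM and back just to evaluate this lambda.
udf_result = list(filter(lambda r: r["type"] == "PullEvent", records))

# Style 2: declarative predicate. In Spark SQL this would be
# df.filter("type = 'PullEvent'"), which never leaves the JVM.
expr_result = [r for r in records if r["type"] == "PullEvent"]
```

Both styles select the same rows; the difference is where the predicate executes.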