2. Who are you?
• Masahiro Nakagawa
• github: @repeatedly
• Treasure Data Inc.
• Fluentd / td-agent developer
• https://jobs.lever.co/treasure-data
• I love OSS :)
• D Language, MessagePack, The organizer of several meetups, etc…
13.
time                 code  method
2015-12-01 10:02:36  200   GET
2015-12-01 10:22:09  404   GET
2015-12-01 10:36:45  200   GET
2015-12-01 10:49:21  200   POST
…                    …     …
• Good data format for analytics workload
• Read only selected columns, efficient compression
• Not good for insert / update
Columnar Storage
[Figure: the same table stored as a row-oriented unit vs column-oriented units]
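A minimal Python sketch (illustrative only, not any engine's real storage format) of why the columnar layout reads fewer bytes and compresses well:

```python
# The four access-log rows above, stored row-wise vs column-wise.
rows = [
    ("2015-12-01 10:02:36", 200, "GET"),
    ("2015-12-01 10:22:09", 404, "GET"),
    ("2015-12-01 10:36:45", 200, "GET"),
    ("2015-12-01 10:49:21", 200, "POST"),
]

# Columnar layout: one contiguous list per column.
columns = {
    "time":   [r[0] for r in rows],
    "code":   [r[1] for r in rows],
    "method": [r[2] for r in rows],
}

# A query like SELECT code only touches one column (4 values, not 12 fields).
codes = columns["code"]

def rle(values):
    """Run-length encode a column; homogeneous columns compress well."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

print(codes)                   # [200, 404, 200, 200]
print(rle(columns["method"]))  # [['GET', 3], ['POST', 1]]
```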
15. No silver bullet
• Performance depends on data modeling and queries
• distkey and sortkey are important
• They reduce data transfer and IO cost
• Queries should take advantage of these keys
• There are still some problems
• Cluster scaling, metadata management, etc…
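A hedged sketch of why these keys matter (hypothetical key values, simplified hashing; real engines use their own distribution functions):

```python
import bisect
import hashlib

NODES = 4

def node_for(distkey_value):
    # A distkey hashes each row to a node; rows sharing the key land on
    # the same node, so joins / GROUP BYs on it avoid data transfer.
    digest = hashlib.md5(str(distkey_value).encode()).hexdigest()
    return int(digest, 16) % NODES

assert node_for("user_42") == node_for("user_42")  # co-located rows

# A sortkey keeps blocks ordered, so a range predicate can skip whole
# blocks via min/max metadata instead of scanning everything.
times = ["10:02:36", "10:22:09", "10:36:45", "10:49:21"]  # sorted column
lo = bisect.bisect_left(times, "10:20:00")
hi = bisect.bisect_right(times, "10:40:00")
print(times[lo:hi])  # ['10:22:09', '10:36:45'] — only in-range rows scanned
```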
16. Performance is good :)
But we often want to change the schema
for new workloads. Then it becomes
hard to maintain the schema and its data…
18. Schema on Write (RDBMS)
• Write data according to a schema
to improve query performance
• Pros:
• Minimum query overhead
• Cons:
• Need to design the schema and workload in advance
• Data load is an expensive operation
19. Schema on Read (Hadoop)
• Write data without a schema and
map the schema at query time
• Pros:
• Robust against schema and workload changes
• Data load is a cheap operation
• Cons:
• High overhead at query time
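The two models can be contrasted in a few lines of Python (a sketch with made-up field names, not any product's API):

```python
import json

# Schema on Write: parse and validate at load time; queries stay cheap.
SCHEMA = {"time": str, "code": int, "method": str}

def load_row(raw_line):
    # Expensive load: every field is coerced up front; schema drift
    # (a missing or renamed field) fails here, at write time.
    record = json.loads(raw_line)
    return {name: cast(record[name]) for name, cast in SCHEMA.items()}

# Schema on Read: store the raw line as-is; pay the parsing cost per query.
def query_codes(raw_lines):
    return [int(json.loads(line)["code"]) for line in raw_lines]

raw = ['{"time": "10:02:36", "code": "200", "method": "GET"}']
print(load_row(raw[0])["code"])  # 200
print(query_codes(raw))          # [200]
```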
20. Data Lake
• Schema management is hard
• Volume keeps increasing and formats change often
• There are lots of log types
• A feasible approach is storing raw data and
converting it before analysis
• A Data Lake is a single storage for any kind of log
• Note that there is no clear definition for now
21. Data Lake Patterns
• Use a DFS, e.g. HDFS, for log storage
• ETL or data processing with the Hadoop ecosystem
• Logs can be converted by ingestion tools beforehand
• Use a Data Lake storage and related tools
• These storages support the Hadoop ecosystem
22. Apache Hadoop
• Distributed computing framework
• The first implementation was based on Google's MapReduce
http://hortonworks.com/hadoop-tutorial/introducing-apache-hadoop-developers/
26. Apache Tez
• Low-level framework for YARN applications
• Hive, Pig, new query engines and more
• Task- and DAG-based processing flow
[Figure: a Task (Input → Processor → Output) and a DAG of Tasks]
27. MapReduce vs Tez
[Figure: MapReduce runs the query as a chain of M → R jobs, writing to HDFS between each stage; Tez runs the same plan as a single DAG of M and R tasks with no intermediate HDFS writes]

SELECT g1.x, g1.avg, g2.cnt
FROM (SELECT a.x, AVG(a.y) AS avg FROM a GROUP BY a.x) g1
JOIN (SELECT b.x, COUNT(b.y) AS cnt FROM b GROUP BY b.x) g2
  ON (g1.x = g2.x)
ORDER BY avg;

[DAG: GROUP a BY a.x and GROUP b BY b.x feed JOIN (a, b), then ORDER BY]
http://www.slideshare.net/Hadoop_Summit/w-235phall1pandey/9
28. Superstition
• "HDFS and YARN have a SPOF"
• Recent versions have no SPOF in either
MapReduce 1 or MapReduce 2
• "You can't build it from scratch"
• Really? Treasure Data builds Hadoop on CircleCI.
Cloudera, Hortonworks and MapR do too.
• They also check the dependent toolchain.
29. Which Hadoop package
should we use?
• A distribution from a Hadoop distributor is better:
• CDH by Cloudera
• HDP by Hortonworks
• MapR distribution by MapR
• If you are familiar with Hadoop and its ecosystem,
the Apache community edition becomes an option.
• For example, Treasure Data has its own patches and
wants to use the patched version.
31. Ingestion tools
• There are two execution models!
• Bulk load:
• For high throughput
• Most tools transfer data in batches and in parallel
• Streaming load:
• For low latency
• Most tools transfer data in micro-batches
32. Bulk load tools
• Embulk
• Pluggable bulk data loader for
various inputs and outputs
• Write plugins in Java or JRuby
• Sqoop
• Data transfer between Hadoop and RDBMS
• Included in some distributions
• Or a dedicated bulk loader for each data store
33. Streaming load tools
• Fluentd
• Pluggable, JSON-based streaming collector
• Lots of plugins on RubyGems
• Flume
• Mainly for the Hadoop ecosystem: HDFS, HBase, …
• Included in some distributions
• Or Logstash, Heka, Splunk, etc…
37. MPP query engine
• It doesn't have its own storage, unlike a parallel RDBMS
• Follows the "Schema on Read" approach
• Data distribution depends on the backend
• Data schema also depends on the backend
• Some products are called "SQL on Hadoop"
• Presto, Impala, Apache Drill, etc…
• It has its own execution engine and doesn't use MapReduce.
38. Presto
• Distributed query engine for interactive queries
against various data sources and large data
• Pluggable connectors for joining multiple backends
• You can join MySQL and HDFS data in one query
• Lots of useful functions for data analytics
• Window functions, approximate queries,
machine learning, etc…
40. Batch analysis platform + visualization platform
[Figure: HDFS + Hive handle daily/hourly batch; results go to PostgreSQL etc., queried interactively by commercial BI tools and dashboards]
✓ Less scalable
✓ Extra cost
✓ More work to manage 2 platforms
✓ Can't query against "live" data directly
44. Execution Model
MapReduce:
✓ Writes data to disk between stages (map → disk → reduce)
✓ Waits between stages
Presto:
✓ All stages are pipelined: no wait time, but no fault-tolerance
✓ Memory-to-memory data transfer: no disk IO, but each data chunk must fit in memory
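The difference can be sketched with Python generators (an analogy, not Presto's actual engine): a staged plan materializes each stage's output, while a pipelined plan streams rows stage to stage:

```python
# Staged (MapReduce-style): each stage finishes and materializes its
# whole output (to disk in real systems) before the next stage starts.
def staged(records):
    mapped = [r * 2 for r in records]       # stage 1 completes first
    return [r for r in mapped if r > 4]     # stage 2 starts afterwards

# Pipelined (Presto-style): stages are chained generators, so each row
# flows through all stages in memory with no wait between stages.
def pipelined(records):
    mapped = (r * 2 for r in records)       # lazy, nothing runs yet
    return (r for r in mapped if r > 4)     # rows stream stage to stage

data = [1, 2, 3, 4]
print(staged(data))           # [6, 8]
print(list(pipelined(data)))  # [6, 8]
```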
45. Okay, now we have a combination
of low latency and batch.
53. Push vs Pull
• Push:
• Easy to transfer data to multiple destinations
• Hard to control the stream rate across multiple streams
• Pull:
• Easy to control the stream rate
• Must manage consumers correctly
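A toy in-process illustration of the two models (real systems add brokers, offsets and acknowledgements):

```python
import queue

def push(event, destinations):
    # Push: the producer delivers to every destination itself, so
    # fan-out is trivial but the producer dictates the rate.
    for dest in destinations:
        dest.append(event)

broker = queue.Queue()
broker.put("log line 1")
broker.put("log line 2")

def pull(broker):
    # Pull: each consumer fetches at its own pace, so rate control is
    # easy, but consumers (and their positions) must be managed.
    return broker.get()

a, b = [], []
push("evt", [a, b])
print(a, b)          # ['evt'] ['evt']
print(pull(broker))  # log line 1
```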
56. Amazon Redshift
• Parallel RDBMS on AWS
• Reuses traditional parallel RDBMS know-how
• Scaling is easier than with traditional systems
• Using it with Amazon EMR is popular:
1. Store data into S3
2. Process the S3 data with EMR
3. Load the processed data into Redshift
• EMR provides the Hadoop ecosystem
58. Google BigQuery
• Distributed query engine and scalable storage
• Tree model, columnar storage, etc…
• Separates storage from workers
• High performance queries on Google's infrastructure
• Lots of workers
• Storage / IO layer on Colossus
• You can't manage parallel RDBMS properties like distkey,
but it works well in almost all cases.
61. Treasure Data
• Cloud-based end-to-end data analytics service
• Hive, Presto, Pig and Hivemall over one big repository
• Lots of ingestion and output options, scheduling, etc…
• No stream processing for now
• The service concept is a Data Lake
• JSON-based schema-less storage
• The execution model is similar to BigQuery's
• Separates storage from workers
• Can't specify parallel RDBMS properties
63. Resource Model Trade-off
• Fully guaranteed
• Pros: stable execution, easy to control resources
• Cons: no boost mechanism
• Guaranteed with multi-tenancy
• Pros: stable execution, good scalability
• Cons: less controllable resources
• Fully multi-tenanted
• Pros: boosted performance, great scalability
• Cons: unstable execution
64. MS Azure also has useful services:
DataHub, SQL DWH, DataLake,
Stream Analytics, HDInsight…
65. Use a service or build a platform?
• Consider using a service first
• AWS, GCP, MS Azure, Treasure Data, etc…
• The important factor is data analytics, not the platform
• Do you have enough resources to maintain it?
• If a specific analytics platform is a differentiator,
building your own platform is better
• Use state-of-the-art technologies
• Hard to implement on existing platforms
66. Conclusion
• Many software products and services for data analytics
• Lots of trade-offs: performance, complexity,
connectivity, execution model, etc.
• SQL is the primary language for data analytics
• Focus on your goal!
• Is the data analytics platform your business core?
If not, consider using services first.
69. Apache Spark
• Another distributed computing framework
• Mainly for in-memory computing with DAGs
• Clean RDD- and DataFrame-based APIs
• Combining it with Hadoop is popular
http://slidedeck.io/jmarin/scala-talk
70. Apache Flink
• Streaming-based execution engine
• Supports both batch and pipelined processing
• Hadoop and Spark are batch based
• https://ci.apache.org/projects/flink/flink-docs-master/
71. Batch vs Pipelined
Batch (staged):
✓ Tasks write to disk and wait between stages (stage1 → stage2 → stage3)
Pipelined:
✓ All stages are pipelined: no wait time
✓ Fault-tolerance with checkpointing
✓ Memory-to-memory data transfer, using disk if needed
72. Visualization
• Tableau
• Popular BI tool in many areas
• Awesome GUI, easy to use, lots of charts, etc.
• Metric Insights
• Dashboard for many metrics
• Scheduled queries, custom handlers, etc.
• Chartio
• Cloud-based BI tool
73. How to manage job dependency?
We want to issue Job X
after Job A and Job B have finished.
74. Data pipeline tools
• There are some important features:
• Manage job dependencies
• Handle job failures and retries
• Easy to define the topology
• Separate tasks into sub-tasks
• Apache Oozie, Apache Falcon, Luigi, Airflow, JP1, etc…
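What all of these tools share is dependency resolution; a minimal sketch (no retries or persistence, unlike the real tools) that runs Job X only after Jobs A and B:

```python
# Job X depends on jobs A and B finishing first.
deps = {"A": [], "B": [], "X": ["A", "B"]}

def run_order(deps):
    """Return an execution order that respects every dependency."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = [job for job, parents in deps.items()
                 if job not in done and all(p in done for p in parents)]
        if not ready:
            raise RuntimeError("cycle in the job graph")
        for job in ready:      # a real tool would launch these in
            order.append(job)  # parallel and retry failures here
            done.add(job)
    return order

print(run_order(deps))  # ['A', 'B', 'X']
```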
75. Luigi
• Python module for building job pipelines
• Write Python code and run it.
• A task is defined as a Python class
• Easy to manage with a VCS
• Needs some extra tooling
• Scheduled jobs, job history, etc…
import luigi

class T1(luigi.Task):
    def requires(self):
        pass  # dependencies
    def output(self):
        pass  # store result
    def run(self):
        pass  # task body
76. Airflow
• Python- and DAG-based workflow engine
• You write Python code, but it defines a DAG
• A task is defined by an Operator
• There are good features:
• Management web UI
• Task information is stored in a database
• Celery-based distributed execution
dag = DAG('example')
t1 = Operator(..., dag=dag)
t2 = Operator(..., dag=dag)
t2.set_upstream(t1)