The challenge of serving a large amount of batch-computed data

These are the slides from our meetup on the subject, based on the SimilarGroup case.
http://www.meetup.com/HadoopIsrael/events/142131622/

  1. A very BIG data Company – the challenge of serving massive batch-computed data sets on-line
  2. The challenge of serving massive batch-computed data sets online – David Gruzman
  3. Serving batch-computed data, by David Gruzman
     ► Today we will discuss the case where we have a multi-terabyte dataset which is periodically recalculated and has to be served in real time.
     ► SimilarWeb allowed us to reveal the internals of their...
  4. SimilarWeb data flow – the context
     ► The company assembles billions of events from its panel on a daily basis.
     ► A fast-growing Hadoop cluster is used to process this data with various kinds of statistical analysis and machine learning.
     ► The data model is "web scale". The data derived from the raw events is processed into "top pages", "demography", "keywords" and many other metrics the company assembles.
     ► The problem dimensionality is: per domain, per day, per country. More dimensions might appear.
  5. How the data is calculated
     ► Data is imported into HDFS from the farm of application servers.
     ► A set of MR jobs as well as Hive scripts is used to do the data processing.
     ► The resulting data has a common "key-value" structure, where the key is our dimensions or a subset of them. For example, Key: "cnn.com_01012013_USA", Value: "Top Pages: Page1, ... statistics: ..."
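     As a minimal illustration of that key layout (the helper below is hypothetical, not SimilarWeb code; the dimension values are the ones from the example):

        // Compose the "domain_date_country" row key from the three dimensions.
        static String rowKey(String domain, String ddMMyyyy, String country) {
            return domain + "_" + ddMMyyyy + "_" + country;
        }
        // rowKey("cnn.com", "01012013", "USA") -> "cnn.com_01012013_USA"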
  6. Abstract schema of the relevant part of SimilarWeb IT (diagram: App Servers -> Hadoop MapReduce -> Hadoop/HBase Stage -> two HBase Production clusters)
  7. HBase under heavy inserts
     ► First of all – it does work.
     ► The question – what was done...
  8. HBase: split storms
     ► When you insert data evenly into many regions, all of them start splitting at roughly the same time. HBase does not like it: it becomes unavailable, insertion jobs fail, leases expire, etc.
     ► Solution: pre-split the table and disable automatic splits (see the sketch below).
     ► Price: it is hard to achieve an even distribution of the data among regions, so hotspots are possible.
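     A minimal sketch of that solution using the classic HBase client API; the table name, column family and split points below are made up for illustration:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.HColumnDescriptor;
        import org.apache.hadoop.hbase.HTableDescriptor;
        import org.apache.hadoop.hbase.TableName;
        import org.apache.hadoop.hbase.client.HBaseAdmin;
        import org.apache.hadoop.hbase.util.Bytes;

        public class PreSplitTable {
            public static void main(String[] args) throws Exception {
                Configuration conf = HBaseConfiguration.create();
                HBaseAdmin admin = new HBaseAdmin(conf);

                HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("daily_metrics"));
                desc.addFamily(new HColumnDescriptor("d"));
                // Effectively disable automatic splitting by raising the region size limit
                // far above what any region will reach.
                desc.setMaxFileSize(100L * 1024 * 1024 * 1024); // 100 GB

                // Pre-create regions on the leading bytes of the "domain_date_country" keys.
                byte[][] splitKeys = {
                    Bytes.toBytes("d"), Bytes.toBytes("h"),
                    Bytes.toBytes("m"), Bytes.toBytes("s")
                };
                admin.createTable(desc, splitKeys);
                admin.close();
            }
        }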
  9. Compaction storms
     ► Under heavy load on all regions, all of them start minor compactions at the same time.
     ► The results are similar to a split storm... nothing good.
  10. Inherent problem – delayed work
     ► HBase does not do ALL the work required during an insert.
     ► Part of the work is delayed until compaction.
     ► A system that delays work is inherently problematic under prolonged high load.
     ► It is good for spikes of activity, not for a steady heavy load.
  11. Massive insert problem
     ► There is a lot of overhead in randomly inserting data.
     ► What happens is that MapReduce produces already sorted data and HBase sorts it again.
     ► HBase sorts data constantly, while MR does it in batch, which is inherently more efficient.
     ► HBase is a strongly consistent system, and under heavy load all kinds of (lease-related) problems happen.
  12. Domino effect
  13. HBase snapshots come to the rescue
     ► A snapshot is the capability to get a "point in time" state of a table.
     ► Technically, a snapshot is a list of the files which constitute the table, so taking a snapshot is a pure metadata operation.
     ► When files of the table are about to be deleted, they are moved to the archive directory instead.
     ► Thus operations like clone and restore are just file renames and metadata changes.
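     A minimal sketch of those operations through the HBase admin API (the table and snapshot names are made up):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.HBaseAdmin;

        public class SnapshotExample {
            public static void main(String[] args) throws Exception {
                Configuration conf = HBaseConfiguration.create();
                HBaseAdmin admin = new HBaseAdmin(conf);
                // Both calls are metadata-only: no table data is rewritten or copied.
                admin.snapshot("daily_metrics_snap", "daily_metrics");
                admin.cloneSnapshot("daily_metrics_snap", "daily_metrics_clone");
                admin.close();
            }
        }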
  14. HBase – snapshot export (diagram: the region's files from before the snapshot are moved/renamed into the archive directory, while the file written after the snapshot stays with the region)
  15. HBase – snapshot export
     ► There is an additional capability of snapshots – export.
     ► Technically it is like DistCp and does not even require a live cluster on the destination side; only HDFS has to be operational.
     ► What we gain – DistCp speed and scalability.
     ► What happens – the files are copied into the archive directory, and HBase uses its structure as a...
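     A hedged sketch of driving that export from Java (the snapshot name and destination URI are made up; the same ExportSnapshot tool is normally launched from the command line):

        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
        import org.apache.hadoop.util.ToolRunner;

        public class ExportSnapshotExample {
            public static void main(String[] args) throws Exception {
                // Runs a DistCp-like MapReduce copy; only HDFS needs to be up on the target.
                int rc = ToolRunner.run(HBaseConfiguration.create(), new ExportSnapshot(),
                    new String[] {
                        "-snapshot", "daily_metrics_snap",
                        "-copy-to", "hdfs://prod-namenode:8020/hbase",
                        "-mappers", "16"
                    });
                System.exit(rc);
            }
        }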
  16. So how do snapshots help us?
     ► As you remember, SimilarWeb has several HBase clusters: one is used as the company data warehouse and two are used to serve production.
     ► So we prepare the data on the cluster where we have long timeouts, and then move it to the production cluster using snapshots.
  17. So we get to the following solution (diagram: App Servers -> Hadoop MapReduce -> HBase Stage -> snapshot export -> two HBase Production clusters)
  18. Is it ideal?
     ► We effectively minimized the impact on the HBase region servers.
     ► But we are left with the HBase high-availability problem.
     ► Currently we have two HBase production clusters to overcome it.
     ► It works, but it is far from ideal HW utilization.
  19. Conceptual problem
     ► In production we do not need strong consistency, yet in CAP-theorem terms we pay for it with partition tolerance; in practice it is an availability problem.
     ► We do not need random writes, and much of HBase is built for them.
     ► We actually have a more complex system than we need.
  20. BigTable vs Dynamo
     ► There are two kinds of NoSQL systems – those modeled after BigTable (HBase, Hypertable) and those modeled after Dynamo (Cassandra, Voldemort, ...).
     ► BigTable-style – good for a data warehouse, where the capability to scan data ranges is important.
     ► Dynamo-style – good for online serving, since these systems are more highly available.
  21. Evaluation process
     ► We decided to research which system better suits the need.
     ► The need was formulated as "to be able to prepare data files offline and copy them into the system at the file level."
     ► In addition – high availability is a must, so systems built around the consistent-hashing idea were preferred (the idea is sketched below).
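     An illustrative-only sketch of the consistent-hashing idea itself, not code from any of the systems discussed: nodes are placed on a hash ring (with virtual nodes), a key is served by the first node clockwise from the key's hash, and adding or losing a node only remaps a small slice of the key space.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Map;
        import java.util.TreeMap;

        public class ConsistentHashRing {
            private final TreeMap<Long, String> ring = new TreeMap<>();
            private final int virtualNodes;

            public ConsistentHashRing(int virtualNodes) {
                this.virtualNodes = virtualNodes;
            }

            public void addNode(String node) throws Exception {
                for (int i = 0; i < virtualNodes; i++) {
                    ring.put(hash(node + "#" + i), node); // place replicas around the ring
                }
            }

            public String nodeFor(String key) throws Exception {
                Map.Entry<Long, String> e = ring.ceilingEntry(hash(key));
                return (e != null ? e : ring.firstEntry()).getValue(); // wrap around the ring
            }

            private static long hash(String s) throws Exception {
                byte[] d = MessageDigest.getInstance("MD5")
                                        .digest(s.getBytes(StandardCharsets.UTF_8));
                long h = 0;
                for (int i = 0; i < 8; i++) {
                    h = (h << 8) | (d[i] & 0xFFL);
                }
                return h;
            }

            public static void main(String[] args) throws Exception {
                ConsistentHashRing ring = new ConsistentHashRing(100);
                ring.addNode("node-a");
                ring.addNode("node-b");
                ring.addNode("node-c");
                System.out.println(ring.nodeFor("cnn.com_01012013_USA"));
            }
        }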
  22. ElephantDB
     ► https://github.com/nathanmarz/elephantdb
     ► This is a system created exactly for this case.
     ► It is capable of serving data from indexes prepared offline.
     ► It is very simple – about 5K lines of code.
     ► Main drawback – it is little known, with very few known usages.
  23. ElephantDB
     ► Berkeley DB Java Edition is used to serve the local indexes. This is shared with Voldemort, which also has such an option.
     ► An MR job (Cascading) is used to prepare the indexes.
     ► The indexes are cached locally by the servers in the ring.
     ► There is an MR job for incremental changes of the data.
  24. ElephantDB – batch read
     ► Having the data sitting in the DFS in an MR-friendly format enables us to do scans right there.
     ► The opposite example – we usually scan an HBase table to process it with MR; when there is no filtering / predicate push-down, this is a serious waste of resources.
  25. ElephantDB – drawbacks
     ► The first one – rare use; we already mentioned it.
     ► It is read-only. In case we also need random writes, we will need to deploy another NoSQL system.
  26. Voldemort...
  27. Project Voldemort
     ► NoSQL
     ► Pluggable storage engines
     ► Pluggable serialization (TBD)
     ► Consistent hashing
     ► Eventual consistency
     ► Support for batch-computed read-only stores
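     For flavour, a minimal sketch of reading through the Voldemort client API; the bootstrap URL and store name are made up, and the store is assumed to be defined with string serializers:

        import voldemort.client.ClientConfig;
        import voldemort.client.SocketStoreClientFactory;
        import voldemort.client.StoreClient;
        import voldemort.versioning.Versioned;

        public class VoldemortReadExample {
            public static void main(String[] args) {
                ClientConfig config = new ClientConfig()
                    .setBootstrapUrls("tcp://voldemort-host:6666");
                SocketStoreClientFactory factory = new SocketStoreClientFactory(config);
                StoreClient<String, String> client = factory.getStoreClient("daily-metrics");

                // Values come back wrapped with a vector clock; null means "not found".
                Versioned<String> value = client.get("cnn.com_01012013_USA");
                System.out.println(value == null ? "absent" : value.getValue());

                factory.close();
            }
        }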
  28. Voldemort logical architecture
  29. How building the data works
     ► The job gets the whole cluster configuration as a parameter.
     ► From that, it can build the data specific to each node.
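     An illustrative-only sketch of that idea (this is not Voldemort's actual build job), reusing the ConsistentHashRing class from the earlier example: because the build job is handed the cluster topology, it can group the batch-computed key-value pairs by the node that will ultimately serve them and write one store chunk per node.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class BuildPartitioner {
            /** Group batch results by the node that will serve each key. */
            static Map<String, List<String>> partitionByServingNode(
                    Map<String, String> batchResults, List<String> nodes) throws Exception {
                ConsistentHashRing ring = new ConsistentHashRing(100);
                for (String node : nodes) {
                    ring.addNode(node);
                }
                Map<String, List<String>> perNode = new HashMap<>();
                for (Map.Entry<String, String> kv : batchResults.entrySet()) {
                    String owner = ring.nodeFor(kv.getKey());
                    perNode.computeIfAbsent(owner, n -> new ArrayList<>())
                           .add(kv.getKey() + "\t" + kv.getValue());
                }
                return perNode; // each list would be written out as that node's store file
            }
        }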
  30. Pull vs push
     ► It was an interesting decision of the LinkedIn engineers to implement pull.
     ► The explanation is that Voldemort as a system should be able to throttle the data load in order to prevent performance degradation.
  31. Performance
     We tested on a 3-node dedicated cluster with SSDs.
     ► Throughput – 5-6K reads per second barely changes the CPU level. The documentation talks about 20K requests per node.
     ► Latency – 10-15 milliseconds on non-cached data. We are investigating this number; it sounds too high for SSDs.
     ► 1-1.5 milliseconds for cached data.
  32. Caching remarks
     ► Voldemort (as well as MongoDB) does not develop its own caching mechanism but offloads it to the OS.
     ► This is done by mmap-ing the data files.
     ► In my opinion this is an inferior approach, since the OS does not have application-specific statistics and it adds unneeded context switches.
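     A minimal plain-JDK sketch of the mmap approach described above (the file name is made up): the data file is mapped read-only, so all caching is left to the OS page cache rather than to an application-level cache.

        import java.io.IOException;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class MmapRead {
            public static void main(String[] args) throws IOException {
                try (FileChannel ch = FileChannel.open(Paths.get("store-chunk.data"),
                                                       StandardOpenOption.READ)) {
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                    // A read either hits the page cache or triggers a page fault that the OS
                    // services from disk; the application never manages a cache of its own.
                    byte first = buf.get(0);
                    System.out.println("first byte: " + first);
                }
            }
        }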
  33. Voldemort summary
     For:
     ► Easy to install – it took 2 hours to build the cluster, even without an installer.
     ► Pluggable storage engines.
     ► Support for efficient import of batch-computed data.
     ► Open source.
     Against: ...
  34. Method limitation
     There is a limit to the pre-computing approach when the number of dimensions grows.
     What we are doing – we have a proprietary layer built on LINQ and C# which computes the missing aggregations.
     We are also evaluating JethroData, which can do it in a SQL way: it is an RDBMS engine running on top of HDFS that provides full indexing with join and group-by capability.
  35. ElephantDB information used
     ► http://www.slideshare.net/nathanmarz/elephantdb
     ► http://computerhelpkansascity.blogspot.co.il/2012/06 html
