Interactive Hadoop via Flash and Memory

Enterprises are using Hadoop for interactive, real-time data processing via projects such as the Stinger Initiative. We describe two new HDFS features, Centralized Cache Management and Heterogeneous Storage, that allow applications to effectively use low-latency storage media such as solid state disks and RAM. In the first part of this talk, we discuss Centralized Cache Management, which coordinates the caching of important datasets and places tasks for memory locality. HDFS deployments today rely on the OS buffer cache to keep data in RAM for faster access, but the user has no direct control over what data is held in RAM or how long it will stay there. Centralized Cache Management allows users to specify which data to lock into RAM. Next, we describe Heterogeneous Storage support, which lets applications choose storage media based on their performance and durability requirements. Perhaps the most interesting of the newer storage media are solid state drives, which provide improved random I/O performance over spinning disks. We also discuss memory as a storage tier, which can be useful for temporary files and intermediate data in latency-sensitive real-time applications. In the last part of the talk, we describe how administrators can use quota mechanism extensions to manage fair distribution of scarce storage resources across users and applications.

  1. Interactive Hadoop via Flash and Memory
     © Hortonworks Inc. 2011
     Arpit Agarwal (aagarwal@hortonworks.com, @aagarw)
     Chris Nauroth (cnauroth@hortonworks.com, @cnauroth)
  2. HDFS Reads
  3. HDFS Short-Circuit Reads
  4. HDFS Short-Circuit Reads (continued)
  5. Shortcomings of Existing RAM Utilization
     • Lack of Control – Kernel decides what to retain in cache and what to evict based on observations of access patterns.
     • Sub-optimal RAM Utilization – Tasks for multiple jobs are interleaved on the same node, and one task's activity could trigger eviction of data that would have been valuable to retain in cache for the other task.
  6. Centralized Cache Management
     • Provides users with explicit control of which HDFS file paths to keep resident in memory.
     • Allows clients to query the location of cached block replicas, opening the possibility for job scheduling improvements.
     • Utilizes off-heap memory, not subject to GC overhead or JVM tuning.
  7. Using Centralized Cache Management
     • Prerequisites
       – Native Hadoop library required; currently supported on Linux only.
       – Set the process ulimit for maximum locked memory.
       – Configure dfs.datanode.max.locked.memory in hdfs-site.xml, set to the amount of memory to dedicate towards caching (a sample entry follows this slide).
     • New Concepts
       – Cache Pool
         – Contains and manages a group of cache directives.
         – Has Unix-style permissions.
         – Can constrain resource utilization by defining a maximum number of cached bytes or a maximum time to live.
       – Cache Directive
         – Specifies a file system path to cache.
         – Specifying a directory caches all files in that directory (not recursive).
         – Can specify the number of replicas to cache and a time to live.
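     A minimal hdfs-site.xml entry for the caching budget; the 2 GB value here is illustrative, and it must fit within the locked-memory ulimit (ulimit -l) of the DataNode process:

       <property>
         <name>dfs.datanode.max.locked.memory</name>
         <!-- bytes of DataNode memory to dedicate to caching -->
         <value>2147483648</value>
       </property>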
  8. Using Centralized Cache Management
  9. Using Centralized Cache Management
     • CLI: Adding a Cache Pool
       > hdfs cacheadmin -addPool common-pool
       Successfully added cache pool common-pool.
       > hdfs cacheadmin -listPools
       Found 1 result.
       NAME         OWNER     GROUP     MODE       LIMIT      MAXTTL
       common-pool  cnauroth  cnauroth  rwxr-xr-x  unlimited  never
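     The pool-level constraints described on slide 7 can be applied when the pool is created. An illustrative example (the byte limit and TTL values are arbitrary) using cacheadmin's -limit and -maxTtl options:

       > hdfs cacheadmin -addPool limited-pool -limit 10737418240 -maxTtl 7d
       Successfully added cache pool limited-pool.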
 10. Using Centralized Cache Management
     • CLI: Adding a Cache Directive
       > hdfs cacheadmin -addDirective -path /hello-amsterdam -pool common-pool
       Added cache directive 1
       > hdfs cacheadmin -listDirectives
       Found 1 entry
       ID  POOL         REPL  EXPIRY  PATH
       1   common-pool  1     never   /hello-amsterdam
 11. Using Centralized Cache Management
     • CLI: Removing a Cache Directive
       > hdfs cacheadmin -removeDirective 1
       Removed cached directive 1
       > hdfs cacheadmin -removeDirectives -path /hello-amsterdam
       Removed cached directive 1
       Removed every cache directive with path /hello-amsterdam
 12. Using Centralized Cache Management
     • API: DistributedFileSystem Methods
       public void addCachePool(CachePoolInfo info)
       public RemoteIterator<CachePoolEntry> listCachePools()
       public long addCacheDirective(CacheDirectiveInfo info)
       public RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter)
       public void removeCacheDirective(long id)
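     As a rough illustration, the CLI session from slides 9-11 can be expressed through these methods. A minimal sketch, reusing the same pool and path names (error handling omitted):

       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.fs.RemoteIterator;
       import org.apache.hadoop.hdfs.DistributedFileSystem;
       import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
       import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
       import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

       public class CacheAdminExample {
         public static void main(String[] args) throws Exception {
           DistributedFileSystem dfs =
               (DistributedFileSystem) FileSystem.get(new Configuration());

           // Create the pool, then cache one replica of every file under the path.
           dfs.addCachePool(new CachePoolInfo("common-pool"));
           long id = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
               .setPool("common-pool")
               .setPath(new Path("/hello-amsterdam"))
               .setReplication((short) 1)
               .build());

           // List the directives in the pool, then remove ours by id.
           RemoteIterator<CacheDirectiveEntry> it = dfs.listCacheDirectives(
               new CacheDirectiveInfo.Builder().setPool("common-pool").build());
           while (it.hasNext()) {
             System.out.println(it.next().getInfo());
           }
           dfs.removeCacheDirective(id);
         }
       }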
 13. Centralized Cache Management Behind the Scenes
 14. Centralized Cache Management Behind the Scenes
     • Block files are memory-mapped into the DataNode process.
       > pmap `jps | grep DataNode | awk '{ print $1 }'` | grep blk
       00007f92e4b1f000 124928K r--s- /data/dfs/data/current/BP-1740238118-127.0.1.1-1395252171596/current/finalized/blk_1073741827
       00007f92ecd21000 131072K r--s- /data/dfs/data/current/BP-1740238118-127.0.1.1-1395252171596/current/finalized/blk_1073741826
 15. Centralized Cache Management Behind the Scenes
     • Pages of each block file are 100% resident in memory.
       > vmtouch /data/dfs/data/current/BP-1740238118-127.0.1.1-1395252171596/current/finalized/blk_1073741826
       Files: 1
       Directories: 0
       Resident Pages: 32768/32768 128M/128M 100%
       Elapsed: 0.001198 seconds
       > vmtouch /data/dfs/data/current/BP-1740238118-127.0.1.1-1395252171596/current/finalized/blk_1073741827
       Files: 1
       Directories: 0
       Resident Pages: 31232/31232 122M/122M 100%
       Elapsed: 0.00172 seconds
 16. HDFS Zero-Copy Reads
     • Applications read straight from direct byte buffers, backed by the memory-mapped block file.
     • Eliminates the overhead of an intermediate copy of bytes to a buffer in user space.
     • Applications must change code to use a new read API on DFSInputStream:
       public ByteBuffer read(ByteBufferPool factory, int maxLength, EnumSet<ReadOption> opts)
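     A minimal sketch of the zero-copy API in use. The file path is hypothetical, and ElasticByteBufferPool is the stock ByteBufferPool implementation shipped with Hadoop:

       import java.nio.ByteBuffer;
       import java.util.EnumSet;
       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FSDataInputStream;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.fs.ReadOption;
       import org.apache.hadoop.io.ElasticByteBufferPool;

       public class ZeroCopyReadExample {
         public static void main(String[] args) throws Exception {
           FileSystem fs = FileSystem.get(new Configuration());
           ElasticByteBufferPool pool = new ElasticByteBufferPool();
           try (FSDataInputStream in = fs.open(new Path("/hello-amsterdam/part-0"))) {
             // Request up to 4 MB backed directly by the memory-mapped block
             // file. SKIP_CHECKSUMS permits the fast path; cached replicas are
             // checksum-verified once when loaded into memory.
             ByteBuffer buf = in.read(pool, 4 * 1024 * 1024,
                 EnumSet.of(ReadOption.SKIP_CHECKSUMS));
             if (buf != null) {
               System.out.println("Read " + buf.remaining() + " bytes, zero-copy");
               in.releaseBuffer(buf);  // always return buffers to the pool
             }
           }
         }
       }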
 17. Heterogeneous Storages for HDFS
 18. Goals
     • Extend HDFS to support a variety of storage media
     • Applications can choose their target storage
     • Use existing APIs wherever possible
 19. Interesting Storage Media
     Medium                   Cost         Example Use Case
     Spinning Disk (HDD)      Low          High-volume batch data
     Solid State Disk (SSD)   10x of HDD   HBase tables
     RAM                      100x of HDD  Hive materialized views
     Your custom media        ?            ?
 20. HDFS Storage Architecture - Before
 21. HDFS Storage Architecture - Now
 22. Storage Preferences
     • Introduce a Storage Type per storage medium
     • Storage Hint from application to HDFS
       – Specifies the application's preferred Storage Type
     • Advisory – subject to available space and quotas
     • Fallback storage is HDD
       – May be configurable in the future
 23. Storage Preferences (continued)
     • Specify a preference when creating a file
       – Write replicas directly to the storage medium of choice
     • Change the preference for an existing file
       – E.g. to migrate existing file replicas from HDD to SSD (see the sketch below)
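     The storage-type APIs were still in development at the time of this talk (HDFS-5682, phase 2 in the references). In the form that later shipped, the preference is expressed as a named storage policy on a file or directory. A hedged sketch using that later API, with a hypothetical path:

       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.hdfs.DistributedFileSystem;

       public class StoragePolicyExample {
         public static void main(String[] args) throws Exception {
           DistributedFileSystem dfs =
               (DistributedFileSystem) FileSystem.get(new Configuration());
           // Ask for all replicas of files under this directory on SSD.
           // Existing replicas are migrated separately by the mover tool.
           dfs.setStoragePolicy(new Path("/hbase-tables"), "ALL_SSD");
         }
       }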
 24. Quota Management
     • Extend existing quota mechanisms
     • Administrators ensure fair distribution of limited resources (a sketch of per-storage-type quotas follows this slide)
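     As the quota extension later shipped (HDFS-7584), limits can be set per storage type. A sketch with a hypothetical directory and an arbitrary 10 GB SSD cap:

       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.fs.StorageType;
       import org.apache.hadoop.hdfs.DistributedFileSystem;

       public class StorageQuotaExample {
         public static void main(String[] args) throws Exception {
           DistributedFileSystem dfs =
               (DistributedFileSystem) FileSystem.get(new Configuration());
           // Cap SSD consumption under /user/alice at 10 GB; writes that
           // would exceed the cap fail with a quota exception.
           dfs.setQuotaByStorageType(new Path("/user/alice"), StorageType.SSD,
               10L * 1024 * 1024 * 1024);
         }
       }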
 25. File Creation with Storage Types
 26. Move Existing Replicas to Target Storage Type
 27. Transient Files (Planned Feature)
     • Target storage type is memory
       – Writes will go to RAM
       – Allow short-circuit writes, equivalent to short-circuit reads, to local in-memory block replicas
     • Checkpoint files to disk by changing the storage type, or discard them
     • High-performance writes for low-volume transient data
       – E.g. Hive materialized views (a sketch of the API that later shipped follows this slide)
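     This planned feature later landed as lazy-persist writes (HDFS-6581): a LAZY_PERSIST flag at create time sends the single replica to RAM and persists it to disk asynchronously. A minimal sketch with a hypothetical scratch path:

       import java.util.EnumSet;
       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.CreateFlag;
       import org.apache.hadoop.fs.FSDataOutputStream;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.fs.permission.FsPermission;

       public class LazyPersistExample {
         public static void main(String[] args) throws Exception {
           FileSystem fs = FileSystem.get(new Configuration());
           try (FSDataOutputStream out = fs.create(
               new Path("/tmp/scratch/intermediate-data"),  // hypothetical path
               FsPermission.getFileDefault(),
               EnumSet.of(CreateFlag.CREATE, CreateFlag.LAZY_PERSIST),
               4096,          // I/O buffer size
               (short) 1,     // lazy-persist writes use a single replica
               128L << 20,    // block size
               null)) {       // no progress callback
             out.write("transient intermediate data".getBytes("UTF-8"));
           }
         }
       }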
 28. References
     • http://hortonworks.com/blog/heterogeneous-storages-hdfs/
     • HDFS-2832 – Heterogeneous Storages phase 1 – DataNode as a collection of storages
     • HDFS-5682 – Heterogeneous Storages phase 2 – APIs to expose Storage Types
     • HDFS-4949 – Centralized cache management in HDFS