4. 1st Generation Hadoop: Batch Focus
HADOOP 1.0
Built for Web-Scale Batch Apps
[Diagram: separate single-application clusters - BATCH, INTERACTIVE, ONLINE - each running on its own HDFS instance]
All other usage patterns MUST leverage the same infrastructure
Forces creation of silos to manage mixed workloads
6. Hadoop 1 Limitations
Scalability
– Max cluster size ~5,000 nodes
– Max concurrent tasks ~40,000
– Coarse synchronization in the JobTracker
Availability
– A failure kills all queued and running jobs
Resource Utilization
– Hard partition of resources into map and reduce slots leads to non-optimal utilization
Lacks support for alternate paradigms and services
– Iterative applications in MapReduce are 10x slower
7. YARN (Yet Another Resource Negotiator)
• Apache Hadoop YARN is a cluster management technology;
• One of the key features in second-generation Hadoop;
• The next-generation compute and resource management framework in Apache Hadoop;
11. Data Processing Engines Run Natively IN Hadoop
[Stack diagram: purpose-built engines running side by side on Apache YARN]
– BATCH: MapReduce
– INTERACTIVE: Tez
– STREAMING: Storm, S4, …
– GRAPH: Giraph
– MICROSOFT: REEF
– SAS: LASR, HPA
– ONLINE: HBase
– OTHERS
YARN: Cluster Resource Management
HDFS2: Redundant, Reliable Storage
Flexible
– Enables other purpose-built data processing models beyond MapReduce (batch), such as interactive and streaming
Efficient
– Doubles the processing IN Hadoop on the same hardware while providing predictable performance & quality of service
Shared
– Provides a stable, reliable, secure foundation and shared operational services across multiple workloads
The Data Operating System for Hadoop 2.0
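To make the "data operating system" idea concrete, here is a minimal sketch of submitting an application to YARN with the Hadoop 2 YarnClient API; the application name, queue, AM command, and container sizes are illustrative assumptions.

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SubmitToYarn {
  public static void main(String[] args) throws Exception {
    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(new YarnConfiguration());
    yarn.start();

    // Ask the ResourceManager for a new application id.
    YarnClientApplication app = yarn.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("demo-app");   // illustrative name
    ctx.setQueue("default");              // assumed queue

    // The container that will run our ApplicationMaster; the command and
    // class name are placeholders for a real AM implementation.
    ContainerLaunchContext am = ContainerLaunchContext.newInstance(
        Collections.<String, LocalResource>emptyMap(),
        Collections.<String, String>emptyMap(),
        Collections.singletonList("java -Xmx256m my.example.AppMaster"),
        null, null, null);
    ctx.setAMContainerSpec(am);
    ctx.setResource(Resource.newInstance(512, 1)); // 512 MB, 1 vcore for the AM

    yarn.submitApplication(ctx);
  }
}
```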
13. Key Improvements in YARN
Framework supporting multiple applications
– Separates generic resource brokering from application logic
– Defines protocols/libraries and provides a framework for custom application development
– Shares the same Hadoop cluster across applications
Application Agility and Innovation
– Using Protocol Buffers for RPC gives wire compatibility
– MapReduce becomes an application in user space, unlocking safe innovation
– Multiple versions of an app can co-exist, enabling experimentation
– Easier upgrades of framework and applications
14. Key Improvements in YARN
Scalability
– Complex application logic removed from the ResourceManager, allowing it to scale further
– Loosely coupled design based on state machines and message passing
Cluster Utilization
– Generic resource container model replaces fixed map/reduce slots; containers are allocated based on locality and memory (CPU coming soon)
– The cluster is shared among multiple applications
Reliability and Availability
– Simpler ResourceManager state makes it easier to save and restart (work in progress)
– Application checkpointing allows an app to be restarted; the MapReduce application master saves its state in HDFS (sketched below)
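A minimal sketch of that checkpointing idea, assuming an illustrative checkpoint path and payload (the real MapReduce AM uses its own staging layout): state is written to a temp file and atomically renamed, so a crash mid-write never corrupts the last good checkpoint.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;

// Sketch: an ApplicationMaster periodically persists its state to HDFS so a
// restarted AM can resume. Paths and payload format are assumptions.
public class AmCheckpoint {
  private final FileSystem fs;
  private final Path finalPath = new Path("/app/checkpoint/state");   // assumed location
  private final Path tempPath  = new Path("/app/checkpoint/state.tmp");

  public AmCheckpoint(Configuration conf) throws Exception {
    this.fs = FileSystem.get(conf);
  }

  public void save(byte[] state) throws Exception {
    // Write to a temp file first, then rename: rename is atomic in HDFS,
    // so a half-written checkpoint is never visible under finalPath.
    try (FSDataOutputStream out = fs.create(tempPath, true)) {
      out.write(state);
    }
    fs.delete(finalPath, false);
    fs.rename(tempPath, finalPath);
  }

  public byte[] restore() throws Exception {
    if (!fs.exists(finalPath)) return null; // first run, nothing to recover
    try (FSDataInputStream in = fs.open(finalPath)) {
      byte[] buf = new byte[(int) fs.getFileStatus(finalPath).getLen()];
      in.readFully(buf);
      return buf;
    }
  }
}
```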
18. YARN Best Practices
Use the provided client libraries
Resource Negotiation
– You may ask, but you may not get what you want - immediately.
– Locality requests may not always be met.
– Resources like memory/CPU are guaranteed.
Failure Handling
– Remember, anything can fail (or YARN can pre-empt your containers).
– AM failures are handled by YARN, but container failures must be handled by the application (see the negotiation-loop sketch after this list).
Checkpointing
– Checkpoint AM state for AM recovery.
– If tasks are long-running, checkpoint task state.
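The negotiation and failure-handling advice above might look roughly like the following AMRMClient loop; the rack name, container size, retry policy, and exit-status handling are illustrative assumptions, not a prescribed pattern.

```java
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NegotiationLoop {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
    rm.init(new YarnConfiguration());
    rm.start();
    rm.registerApplicationMaster("", 0, ""); // host/port/tracking URL omitted in this sketch

    // Ask for one 1 GB container, preferring an assumed rack; locality is a
    // hint, not a guarantee -- the RM may hand back a container elsewhere.
    Priority pri = Priority.newInstance(0);
    Resource cap = Resource.newInstance(1024, 1);
    rm.addContainerRequest(new ContainerRequest(cap, null, new String[] {"/rack1"}, pri));

    boolean done = false;
    while (!done) {
      // allocate() doubles as the heartbeat; you may ask but not get
      // containers immediately.
      AllocateResponse rsp = rm.allocate(0.1f);
      for (Container c : rsp.getAllocatedContainers()) {
        System.out.println("Got container " + c.getId() + " on " + c.getNodeId());
        // launch work via NMClient here ...
      }
      for (ContainerStatus s : rsp.getCompletedContainersStatuses()) {
        // Anything can fail, and YARN may preempt containers: the AM, not
        // YARN, must react to lost containers, e.g. by re-requesting.
        if (s.getExitStatus() != 0) {
          System.out.println("Container " + s.getContainerId()
              + " lost (exit " + s.getExitStatus() + "), re-requesting");
          rm.addContainerRequest(new ContainerRequest(cap, null, null, pri));
        } else {
          done = true;
        }
      }
      Thread.sleep(1000);
    }
    rm.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
  }
}
```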
19. YARN Best Practices
Cluster Dependencies
– Try to make zero assumptions about the cluster.
– Your application bundle should deploy everything required using YARN's local resources (see the sketch below).
Client-only installs if possible
– Simplifies cluster deployment and multi-version support
Securing your Application
– YARN does not secure communications between the AM and its containers.
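A sketch of the "deploy everything via local resources" advice, assuming hypothetical jar and staging paths: the client ships the application jar to HDFS and registers it as a LocalResource, so YARN localizes it into each container's working directory instead of relying on anything pre-installed on the nodes.

```java
import java.util.Collections;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class BundleResources {
  // Ship the application's own jar through YARN's distributed cache so the
  // container depends on nothing pre-installed on cluster nodes.
  static Map<String, LocalResource> appJarResource(Configuration conf) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    Path src = new Path("file:///tmp/myapp.jar");   // assumed local build artifact
    Path dst = new Path("/apps/myapp/myapp.jar");   // assumed HDFS staging path
    fs.copyFromLocalFile(false, true, src, dst);

    FileStatus st = fs.getFileStatus(dst);
    LocalResource jar = LocalResource.newInstance(
        ConverterUtils.getYarnUrlFromPath(dst),
        LocalResourceType.FILE,
        LocalResourceVisibility.APPLICATION,
        st.getLen(), st.getModificationTime());
    // The map key is the file name the container will see in its working dir;
    // the returned map plugs into ContainerLaunchContext.newInstance(...).
    return Collections.singletonMap("myapp.jar", jar);
  }
}
```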
20. YARN Future Work
ResourceManager high availability and work-preserving restart
– Work in progress
Scheduler enhancements
– SLA-driven scheduling, low-latency allocations
– Multiple resource types - disk/network/GPUs/affinity
Rolling upgrades
Long-running services
– Better support for running services like HBase
– Discovery of services, upgrades without downtime
More utilities/libraries for application developers
– Failover/checkpointing
22. Challenges Using Big Data & Cloud for SDN
• The tools are not available yet
• Do we need standards?
• Once you've mined big data, then what?
23. Different Types of Traffic in Hadoop Clusters
• Background Traffic
–Bulk transfers
–Control messages
• Active Traffic (used by jobs)
–HDFS read/writes
–Partition-Aggregate traffic
24. Typical Traffic Patterns
– Patterns used by big data analytics
– You can optimize specifically for these (a demand-estimation sketch follows)
[Diagram: map and reduce stages around HDFS, illustrating the three recurring patterns - shuffle, broadcast, and incast]
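As a rough illustration of why these patterns are easy to optimize for, the sketch below estimates a shuffle's traffic matrix under a uniform-partitioning assumption; the sizes are invented.

```java
// Illustrative sketch (not from the slides): under the shuffle pattern every
// map task sends one partition to every reduce task. Assuming roughly uniform
// partitioning, the pairwise demand is mapOutputBytes / numReducers, which is
// what a traffic-demand estimator would hand to the network scheduler.
public class ShuffleDemand {
  public static long[][] demandMatrix(long[] mapOutputBytes, int numReducers) {
    long[][] demand = new long[mapOutputBytes.length][numReducers];
    for (int m = 0; m < mapOutputBytes.length; m++) {
      for (int r = 0; r < numReducers; r++) {
        demand[m][r] = mapOutputBytes[m] / numReducers; // uniform-split assumption
      }
    }
    return demand;
  }

  public static void main(String[] args) {
    // Three mappers producing 6, 9, and 3 GB of intermediate data, two reducers.
    long GB = 1L << 30;
    long[][] d = demandMatrix(new long[] {6 * GB, 9 * GB, 3 * GB}, 2);
    System.out.println("map0 -> reduce1: " + d[0][1] + " bytes");
  }
}
```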
25. Approach: Optimizing the Network to Improve Performance
• Helios, Hedera, MicroTE, c-Through
– Congestion leads to bad performance
– Eliminate congestion
• Control loop: gather network demand → determine paths with minimal congestion → install the new paths (sketched below)
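A minimal sketch of the middle step of that loop, in the spirit of Hedera-style scheduling rather than any one system's actual algorithm: among a flow's candidate paths, choose the one whose bottleneck link stays least utilized after placing the flow's demand.

```java
import java.util.List;

// Sketch of "determine paths with minimal congestion": link capacities and
// loads are illustrative; a real controller would learn them from the network.
public class PathChooser {
  static class Link {
    double capacityBps, loadBps;
    double utilizationWith(double demandBps) {
      return (loadBps + demandBps) / capacityBps;
    }
  }

  // Returns the index of the candidate path with the lowest bottleneck
  // utilization once the new demand is placed on it.
  static int leastCongested(List<List<Link>> candidatePaths, double demandBps) {
    int best = -1;
    double bestBottleneck = Double.MAX_VALUE;
    for (int i = 0; i < candidatePaths.size(); i++) {
      double bottleneck = 0;
      for (Link l : candidatePaths.get(i)) {
        bottleneck = Math.max(bottleneck, l.utilizationWith(demandBps));
      }
      if (bottleneck < bestBottleneck) {
        bestBottleneck = bottleneck;
        best = i;
      }
    }
    return best; // the controller would then install rules along this path
  }
}
```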
26. Disadvantages of the Existing Approach
• Demand gathering in the network is ineffective
– Assumes that past demand predicts future demand
– Many small jobs in a cluster make prediction ineffective
• May require expensive instrumentation to gather
– Switch modifications
– Or end-host modifications to gather information
27. Application-Aware Run-Time Network Configuration Practice
• Topology construction and routing for aggregation, shuffling, and overlapping aggregation traffic patterns;
• Building blocks: traffic demand estimation, network-aware job scheduling, and topology and routing
30. How Can That Be Done?
• Reactively
o The JobTracker places the tasks, so it knows the locations
• Check the Hadoop logs for the locations
• Modify the JobTracker to directly inform the application
• Proactively
o Have the SDN controller tell the JobTracker where to place the end-points
• Rack-aware placement: reduce inter-rack transfers
• Congestion-aware placement: reduce loss
31. Reactive Approach
• A reactive attempt to integrate big data + SDN
– No changes to the application
– Learn information by looking at logs to determine file sizes and end-points
– Learn information by running agents on the end hosts to determine start times
32. Reactive Architecture
[Figure 1: FlowComb consists of three modules - flow prediction, flow scheduling, and flow control; agents on the Hadoop cluster report to the predictor and scheduler, which drive the controller]
diction,flow scheduling,and flow control.
• Agents on servers
– Detect start/end of map tasks
– Detect start/end of transfers
• Predictor (sketched below)
– Determines the size of intermediate data
• Queries maps via API
– Aggregates information from agents and sends it to the scheduler
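A rough sketch of the predictor idea, assuming map output volume is proportional to input consumed via an observed selectivity ratio; the class and method names are illustrative, not FlowComb's actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Agents report how much input each map task has consumed; multiplying by an
// observed output/input ratio (selectivity) estimates the intermediate bytes
// the coming shuffle will move. The fixed ratio is an illustrative assumption.
public class ShufflePredictor {
  private final Map<String, Long> inputBytesByMap = new HashMap<>();
  private final double selectivity; // observed output/input ratio for this job

  public ShufflePredictor(double selectivity) {
    this.selectivity = selectivity;
  }

  // Called when an agent reports progress for a map task.
  public void onAgentReport(String mapTaskId, long inputBytesRead) {
    inputBytesByMap.put(mapTaskId, inputBytesRead);
  }

  // Estimated shuffle volume to hand to the flow scheduler.
  public long estimatedShuffleBytes() {
    long total = 0;
    for (long in : inputBytesByMap.values()) total += in;
    return (long) (total * selectivity);
  }
}
```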
33. Reactive Architecture
• Scheduler
– Examines each flow that has started
– Computes the ideal rate for each flow (see the sketch below)
– Checks whether the flow is currently bottlenecked
• If so, moves it to the next shortest path with available capacity
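A sketch of that bottleneck test, simplifying max-min fairness to an equal share of each link on the flow's path; the 0.8 threshold is an invented illustration.

```java
import java.util.List;

// A flow's "ideal" rate is taken here as its equal share of the most
// contended link on its path; a flow sending well below that is considered
// bottlenecked and becomes a candidate for re-routing.
public class BottleneckCheck {
  static class Link {
    double capacityBps;
    int activeFlows;
    double fairShare() { return capacityBps / Math.max(1, activeFlows); }
  }

  static double idealRate(List<Link> path) {
    double rate = Double.MAX_VALUE;
    for (Link l : path) rate = Math.min(rate, l.fairShare());
    return rate;
  }

  static boolean isBottlenecked(double currentRateBps, List<Link> path) {
    // 0.8 threshold is an illustrative assumption, not from the paper.
    return currentRateBps < 0.8 * idealRate(path);
  }
}
```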
34. Proactive Approach
• Modify the applications
– Have them directly inform the network of their intent
• Applications inform the network of co-flows (a hypothetical sketch follows)
– Groups of flows bound by app-level semantics
– The network controls paths, transfer times, and transfer rates
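A purely hypothetical sketch of what declaring a co-flow could look like from the application side; none of these types exist in Hadoop or any standard SDN controller, they only illustrate grouping flows bound by application semantics and handing them to the network as a unit.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical co-flow declaration: all names are illustrative assumptions.
public class CoflowExample {
  static class Flow {
    final String srcHost, dstHost;
    final long bytes;
    Flow(String s, String d, long b) { srcHost = s; dstHost = d; bytes = b; }
  }

  static class Coflow {
    final String jobId;
    final List<Flow> flows = new ArrayList<>();
    Coflow(String jobId) { this.jobId = jobId; }
  }

  public static void main(String[] args) {
    // A shuffle's flows registered as one coflow: the network can now
    // schedule paths, rates, and transfer times for the group as a whole.
    Coflow shuffle = new Coflow("job_42_shuffle");
    shuffle.flows.add(new Flow("node1", "node4", 6L << 30));
    shuffle.flows.add(new Flow("node2", "node4", 9L << 30));
    System.out.println("Registering coflow " + shuffle.jobId
        + " with " + shuffle.flows.size() + " flows");
    // controller.register(shuffle);  // hypothetical controller call
  }
}
```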
36. What We Intend to Do
• Traditional network models construct elements such as switches, subnets, and ACLs without application awareness, which leads to over-provisioning;
• A service-level network profile model provides higher-level connectivity and policy abstractions;
• The SDN controller platform supports the service-profile model as an integral part of the network planning and provisioning process;
37. Network Profile Abstraction Model
• Declaratively define a logical network topology model that specifies logical connectivity and policies or services;
39. Application Integration Layer
• Presents applications with a network model and associated APIs that expose the information needed to interact with the network;
• Provides network services to applications via a query API, which allows an application to request abstract topology views or gather performance metrics and status for specific parts of the network (see the sketch below);
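A hypothetical Java sketch of such a query API; the interface and type names are assumptions made only to show the shape of the layer, not a real controller API.

```java
import java.util.List;
import java.util.Map;

// Hypothetical contract for the application integration layer's query API.
public interface NetworkQueryApi {
  // Abstract topology view, scoped to what the calling application may see.
  TopologyView getTopology(String applicationId);

  // Performance metrics/status for a named part of the network,
  // e.g. getMetrics("rack1-tor", List.of("utilization", "loss")).
  Map<String, Double> getMetrics(String networkElementId, List<String> metricNames);

  interface TopologyView {
    List<String> nodes();
    List<String> links();
  }
}
```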
40. Network Abstraction Layer
• Performs a logical-to-physical translation of commands issued through the abstraction layer and converts these API calls into the appropriate series of commands;
• Provides a set of network-wide services to applications, such as views of the topology, notifications of changes in link availability or utilization, and path computation according to different routing algorithms;
• Coordinates between network requests issued by applications and the mapping of those requests onto the network, such as selecting between multiple mechanisms available to achieve a given operation or setting up a virtual network using an overlay;
41. Network Driver Layer
• Enables the SDN controller to interface with various network technologies and tools (a hypothetical interface sketch follows);
• The orchestration layer uses these drivers to issue commands on specific devices, e.g. an OpenFlow-capable network driver could allow insertion of flow rules in physical or virtual switches;
• Supports other drivers that enable virtual network creation using overlays and topology data gathered by 3rd-party network management tools such as IBM Tivoli Network Manager and HP OpenView;
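A hypothetical sketch of the driver-layer contract; the names are illustrative assumptions, not a real controller API. An OpenFlow-backed implementation would translate installRule into flow-mod messages, while an overlay or NMS-backed driver would map the same calls onto its own mechanisms.

```java
import java.util.List;

// Hypothetical driver contract: the orchestration layer programs against
// this interface, and concrete drivers supply the device specifics.
public interface NetworkDriver {
  // Push a forwarding rule to one device, e.g. as an OpenFlow flow-mod.
  void installRule(String deviceId, String match, String action);

  // Remove a previously installed rule.
  void removeRule(String deviceId, String match);

  // Report the devices this driver can manage, e.g. discovered via the
  // driver's own protocol or imported from a network management tool.
  List<String> managedDevices();
}
```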
42. Network Services Provided
• Network Planning & Design: implements network services as one or more plans and provides a workflow mechanism for scheduling tasks;
• Network Topology Deployment: manages the execution and state of proposed plans via multiple states, such as validate, install, undo, and resume;
• Maintenance Service: monitors a view of the dynamic set of underlying network resources;