Since MapReduce runs within a cluster of computing nodes, the architecture is horizontally scalable.
• In other words, if the data size increases by a factor of x, the processing time should stay roughly constant as long as the cluster grows by a predictable, fixed factor of y nodes (a rough numeric sketch follows below).
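A minimal back-of-the-envelope sketch of this claim in Python; the per-node throughput, data sizes, and node counts below are hypothetical illustration values, not figures from the slide:

    # Hypothetical linear-scaling model: total time = data / (per-node rate * nodes).
    def processing_time(data_gb, num_nodes, gb_per_node_per_min=10):
        return data_gb / (gb_per_node_per_min * num_nodes)

    # Grow the data by a factor x and the cluster by the same factor:
    # the per-node share of work, and hence the total time, stays constant.
    for x in (1, 2, 4, 8):
        print(x, processing_time(data_gb=100 * x, num_nodes=5 * x))  # always 2.0 minutes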
The graph on the right illustrates the relationship between the size of the data (x-axis) and the processing time (y-axis).
• The blue curve shows processing time with traditional programming, while the black curve shows processing time with Hadoop.
When the data size is small, traditional programming performs better because Hadoop's bootstrap is expensive (copying the data within the cluster, inter-node communication, and so on). Once the data size is big enough, the penalty of the Hadoop bootstrap becomes negligible, as the sketch after this paragraph illustrates.
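Extending the same hypothetical model with a fixed bootstrap cost shows where the two curves cross; again, all rates, sizes, and the 30-minute bootstrap figure are made-up illustration values:

    # Single machine: no startup overhead, but no parallelism either.
    def traditional_time(data_gb, gb_per_min=10):
        return data_gb / gb_per_min

    # Hadoop: a fixed bootstrap cost (data copying, inter-node communication),
    # then the work is split across the cluster.
    def hadoop_time(data_gb, num_nodes=20, gb_per_node_per_min=10, bootstrap_min=30):
        return bootstrap_min + data_gb / (gb_per_node_per_min * num_nodes)

    for data_gb in (10, 100, 1000, 10000):
        print(data_gb, traditional_time(data_gb), hadoop_time(data_gb))
    # 10 GB and 100 GB: the bootstrap dominates and the single machine finishes first.
    # 1,000 GB and beyond: the bootstrap is amortized and Hadoop's parallelism wins.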
• Hence Hadoop is best suited for crunching Big Data, ideally on the order of petabytes, and is not suited for common, small-scale data processing tasks.