“Fact: There are more Hadoop configuration options than there are stars in our galaxy.”
EVEN IN THE BEST-CASE SCENARIO, IT TAKES A LOT OF TUNING TO GET A HADOOP CLUSTER RUNNING WELL.
There are large companies that make money solely by configuring and supporting Hadoop clusters for enterprise customers.
PIG AND ELASTICMAPREDUCE
Slow development cycle; writing Java sucks.
CASCALOG AND ELASTICMAPREDUCE
Learning Emacs, Clojure, and Cascalog was hard, but it was worth it.
The way our jobs were designed sucked and didn't work well with ElasticMapReduce.
CASCALOG AND SELF-MANAGED HADOOP CLUSTER
We used a hacked-up version of a Cloudera Python script to launch and bootstrap a cluster.
We ran on spot instances.
Cluster boot-up time SUCKED and often failed. We paid for instances during bootstrap and configuration.
Our jobs weren't designed to tolerate things like spot instances going away in the middle of a job.
Drinking heavily dulled the pain a little.
CASCALOG AND ELASTICMAPREDUCE AGAIN
Rebuilt the data processing pipeline from scratch (only took nine months!)
Data pipelines were broken out into a handful of fault-tolerant jobflow steps; each step writes its output to S3.
EMR supported spot instances at this point.
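The fault-tolerant, checkpoint-to-S3 design can be sketched roughly like this (a minimal Python sketch, not the actual pipeline: the stage names are made up and a plain dict stands in for the S3 bucket):

```python
# Sketch of a staged pipeline where each step checkpoints its output.
# `store` is a dict standing in for an S3 bucket (hypothetical).

def run_pipeline(stages, store):
    """Run each stage in order, skipping stages whose output already
    exists, so a failed jobflow can resume from the last checkpoint."""
    ran = []
    data = None
    for name, fn in stages:
        if name in store:        # output already "on S3": skip the stage
            data = store[name]
            continue
        data = fn(data)          # run the stage
        store[name] = data       # checkpoint its output
        ran.append(name)
    return ran, data

stages = [
    ("extract", lambda _: [1, 2, 3]),
    ("transform", lambda xs: [x * 10 for x in xs]),
    ("load", lambda xs: sum(xs)),
]

store = {}
ran, result = run_pipeline(stages, store)      # first run: every stage executes
del store["load"]                              # simulate a failure after "transform"
reran, result2 = run_pipeline(stages, store)   # resume: only "load" re-runs
```

The point of checkpointing after every step is that a spot instance vanishing (or a cluster dying) only costs you the stage in flight, not the whole jobflow.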
WEIRD BUGS THAT WE'VE HIT
Bootstrap script errors.
Random cluster fuckedupedness.
AMI version changes.
Vendor issues.
My personal favourite: invisible S3 write failures.
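One way to catch invisible write failures is to never trust a put: read the object back and compare checksums, retrying until it verifies. A minimal sketch, assuming a dict stands in for the bucket, with a deliberately flaky store to simulate a silently lost write (all names here are hypothetical):

```python
import hashlib

def put_with_verification(store, key, data, retries=3):
    """Write an object, then read it back and compare MD5 checksums;
    retry if the write was silently lost."""
    expected = hashlib.md5(data).hexdigest()
    for _ in range(retries):
        store[key] = data
        written = store.get(key)
        if written is not None and hashlib.md5(written).hexdigest() == expected:
            return True
    raise IOError("write for %r could not be verified" % key)

class FlakyStore(dict):
    """Simulates an invisible S3 write failure: the first put is dropped."""
    def __init__(self):
        super().__init__()
        self.failed_once = False

    def __setitem__(self, key, value):
        if not self.failed_once:
            self.failed_once = True
            return  # write silently lost, no error raised
        super().__setitem__(key, value)

bucket = {}
ok = put_with_verification(bucket, "output/part-00000", b"results")

flaky = FlakyStore()
ok2 = put_with_verification(flaky, "output/part-00001", b"more results")
```

Against real S3 the same idea applies: after the upload, HEAD or GET the object and compare its checksum to what you wrote before declaring the step done.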
IF YOU MUST RUN ON AWS
Break your processing pipelines into stages; write out to S3 after each stage.
Bake (a lot of) variability into your expected jobflow run times.
Compress the data you are reading from and writing to S3 as much as possible.
Drinking helps.
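The compression advice needs nothing exotic: Hadoop's input formats read gzipped files natively, and Python's stdlib gzip module is enough on the producing side. A small round-trip sketch on repetitive, log-style data (the record format is made up for illustration):

```python
import gzip

def compress_for_s3(data):
    """Gzip a payload before uploading; smaller objects mean cheaper,
    faster S3 reads and writes."""
    return gzip.compress(data)

def decompress_from_s3(blob):
    return gzip.decompress(blob)

record = b"user_id\tevent\n" * 10000   # repetitive log-style data
blob = compress_for_s3(record)
roundtrip = decompress_from_s3(blob)
```

On text-heavy, repetitive data like logs, gzip routinely shrinks the payload by an order of magnitude or more, which directly cuts S3 transfer time for every jobflow step.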