Having more, smaller nodes is better than having fewer, bigger, faster nodes.
Lots of RAM is good, but only to a point – the main thing is to avoid swap.
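One common way to do that on Linux (our own suggestion – the specific sysctl is not in the original notes, and swappiness=0 semantics vary slightly by kernel version):
    echo "vm.swappiness = 0" >> /etc/sysctl.conf   # strongly discourage the kernel from swapping out JVM heap pages
    sysctl -p                                      # reload kernel settings without a reboot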
We use sub-$1k, desktop-grade servers – they work great!
Check your network hardware for packet drops (we saw ifOutDiscards interrupting ZooKeeper messages, and RegionServers would commit suicide during packet loss) – just use ping -f to test for packet loss between core nodes.
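A minimal check (flood ping needs root; the host name below is just a placeholder):
    ping -f -c 10000 other-core-node    # the summary line reports the packet-loss percentage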
JVM GC takes a lot of CPU when misconfigured – e.g. a NewSize that is too small.
Single NameNode? No problem – just build two clusters and have your app tier do query-log replication and replays when needed.
Inexpensive 2TB Hitachi disks (~$100) work great – you get more drives for your money.
App-tier bugs would abuse HBase and generate millions of queries – logging all RPC calls to HBase on the app tier is critical. It took us a long time to figure out that HBase was not at fault, because we did not know what load to expect.
Mixed RAM brands – boxes crashed for no apparent reason.
glibc in FC13 had a race-condition bug ("invalid binfree") that would lock up nodes and crash JVM processes under high load. Solution: yum -y update glibc.
When running in a mixed-hardware environment, some boxes were slow enough to affect HDFS for the whole cluster – looking at "runnable threads" and "fsReadLatency" in Ganglia always pointed to which boxes were slow.
Running Cloudera HDFS under the user 'hadoop', which was restricted to 1024 processes/threads by default, would crash DataNodes, but only during compactions. Setting soft (and hard) nproc to 32,000 for the hadoop user in limits.conf resolved it (snippet below).
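Roughly what the entries look like (exact file layout differs per distro – e.g. Fedora/RHEL also ship an /etc/security/limits.d/*-nproc.conf that sets the 1024 default):
    # /etc/security/limits.conf – raise the per-user process/thread cap for the hadoop user
    hadoop  soft  nproc  32000
    hadoop  hard  nproc  32000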
GC sometimes auto-tuned NewSize down to 20MB, which caused 20–30 GC runs per second, pinning the CPU at 100% and killing the RegionServer. Manually setting NewSize to 128MB resolved this issue (flags below).
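A sketch of the fix, assuming the flags are passed through hbase-env.sh (the exact variable and values depend on your install and heap size):
    # hbase-env.sh – pin the young generation instead of letting the JVM auto-size it
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:NewSize=128m -XX:MaxNewSize=128m"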