Industrial big-data analysis tools like Hadoop are often unwieldy, cumbersome to use, and expensive to maintain, especially if you're new to the field. That's all well and good if you have a cluster set up and a handy team of engineers to maintain it. But what if you're a researcher, alone with a big dataset, a Linux machine (or, better, several), and no idea how to write Java? In this talk I share a few handy tricks for quickly processing and running preliminary analysis on large datasets with nothing more than LuaJIT and a bit of shell magic.