Data science isn't an easy task to pull off. You start by exploring data and experimenting with models, and finally you find some amazing insight! What now? How do you turn a small experiment into a production-ready workflow? Better yet, how do you scale it from a small sample in R/Python to TBs of production data? Building a BIG ML Workflow - from zero to hero is about the work process you need to follow to get a production-ready workflow up and running. Covering:

* Small-to-medium experimentation (R)
* Big data implementation (Spark MLlib / ML Pipelines) - see the sketch below
* Setting metrics and checks in place
* Ad hoc querying and exploring your results (Zeppelin)
* Pain points & lessons learned the hard way (is there any other way?)
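To give a flavor of the Spark side of the workflow, here is a minimal sketch of a Spark ML Pipeline in PySpark. The input path, column names (`feature_a`, `feature_b`, `label`), and model choice are hypothetical placeholders, not part of the talk's actual codebase; the point is that the whole assemble-scale-train sequence becomes one reusable, saveable object.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("big-ml-workflow").getOrCreate()

# Hypothetical input - swap in your production data source.
df = spark.read.parquet("/data/events.parquet")

# Assemble raw columns into a feature vector, scale it, then fit a model.
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"],
                            outputCol="raw_features")
scaler = StandardScaler(inputCol="raw_features", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[assembler, scaler, lr])
model = pipeline.fit(df)  # one object captures the whole workflow

# Persist the fitted pipeline so production jobs can reload and apply it as-is.
model.write().overwrite().save("/models/lr_pipeline")
```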