The modern web-scale network is a complicated place. Modern systems-management techniques have made it trivial to create, destroy, and repurpose any number of instance types, from bare-metal machines sitting in a datacenter, to third-party virtual machines on demand, to the containers and microservices that now seem to be all the rage. Instances are cattle, not pets. All of this perpetual churn and flexibility is exactly what you want in a constantly changing, highly available, and efficient infrastructure. The ability to create or destroy nodes on demand, or to continuously and automatically scale up, scale down, and redeploy applications as part of a continuous integration pipeline, has become an integral part of daily operations. However, these systems can generate terabytes of network logs a day. And if your job is detecting, correlating, and alerting on the right anomaly in all that data, the needle-in-a-haystack analogy doesn't do it justice; it's closer to finding a needle in a windstorm. How do you begin to collect, store, analyze, and alert on this much data without costing the company a small fortune? What practical steps can you take to reduce your overall risk and gain more insight, visibility, and confidence in what is actually taking place on your network? This talk aims to give attendees a solid understanding of the problem space, along with recommendations and practical advice from someone who built their own 'big data' network and security monitor. It really is easier than it sounds.