Building a Hyper-Secure VPC on AWS with Puppet - PuppetConf 2013


"Building a Hyper-Secure VPC on AWS with Puppet" by Tim Nolet, Technical Architect, Xebia.

Presentation Overview: This session describes the techniques and patterns used in a real-life project whose goals were to: build a VPC on AWS, make it extremely secure on all fronts, and do it all through automation.

I will describe how you can combine Puppet and AWS to introduce all kinds of real-life security measures, all managed by Puppet. These measures include: log collection and analysis (in combination with Graylog2), transparent proxy hosts for DMZ separation, host-based firewalls to augment the non-logging AWS firewalls/security groups, CIS (Center for Internet Security) Benchmark enforcement on standard AWS Linux AMIs, and change tracking with SVN.

Speaker Bio: Tim Nolet is an infrastructure architect and continuous delivery consultant working for Xebia. Brought up on a steady diet of Java enterprise applications, he has helped his customers design, build and manage internet infrastructures in diverse areas of travel, retail, banking, energy and public services. Currently, he is on a mission to reap all the benefits of automated deployment and cloud engineering to deliver fast, safe and stable applications. Together with Amazon Web Services, Puppet plays a major role in this mission. Tim also smiles when you let him dive deep into performance, security and stability issues, or let him play guitar for a day.

  • 1. PuppetConf 2013: Building a Hyper-Secure VPC on AWS with Puppet. Tim Nolet
  • 2. Architect at Xebia (the Netherlands) Linux/Java/Cloud/Automation/Operations
  • 3. Holland = The Netherlands Image:
  • 4. I tend to ramble...
  • 5. The Assignment
  • 6. The Assignment (1) 1. Build a general purpose VPC on AWS 2. Standardize application deployment 3. Apply company security policies
  • 7. The Assignment (2) 1. Do it with Open Source 2. Use AWS standards 3. Stay close to reference implementations
  • 8. AWS and security IAM, MFA, HSM SSL, SSH, VPN ISO 27001 PCI-DSS PGP ..and probably some more acronyms
  • 9. Design Principles A Grid based on: 3 x Availability Zone 3 x Tier: web, app, data 1 x Management subnet
  • 10. Design Principles Reference stacks, implemented in CloudFormation. Provision: EC2 instances, Security Groups, RDS instances, ELB load balancers, etc.
  • 11. public_three_tier_stack_redundant_rds.template
  • 12. AMI Hardening 1. Apply the CIS Benchmark for RedHat Linux 2. Log + alert on any discrepancies 3. Monitor YUM security updates Benchmark:
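A minimal sketch of how step 3 could be wired up with Puppet. The schedule, log path, and package name are assumptions, not from the talk; `yum --security check-update` requires the security plugin on RHEL-family systems:

```puppet
# Hypothetical example: check daily for pending security updates
# and append the result to a log that central logging can pick up.
package { 'yum-plugin-security':
  ensure => installed,
}

cron { 'yum_security_check':
  command => '/usr/bin/yum --security check-update >> /var/log/yum-security.log 2>&1',
  user    => 'root',
  hour    => 6,
  minute  => 0,
  require => Package['yum-plugin-security'],
}
```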
  • 13. CIS Benchmark Module manifests/ 1_software.pp 2_osservices.pp 3_specialservices.pp 4_network.pp 5_logaudit.pp 6_accessauth.pp 7_user.pp 8_banners.pp 9_maintenance.pp init.pp =>
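The `init.pp =>` at the end of the slide suggests an entry point that pulls in the numbered manifests. A sketch of what such an init.pp could look like; the class names are assumptions inferred from the file names (Puppet class names cannot start with a digit, so the numeric prefixes live only in the file names):

```puppet
# cis_baseline/manifests/init.pp (sketch; class names are assumed)
class cis_baseline {
  include cis_baseline::software         # 1_software.pp
  include cis_baseline::osservices       # 2_osservices.pp
  include cis_baseline::specialservices  # 3_specialservices.pp
  include cis_baseline::network          # 4_network.pp
  include cis_baseline::logaudit         # 5_logaudit.pp
  include cis_baseline::accessauth       # 6_accessauth.pp
  include cis_baseline::user             # 7_user.pp
  include cis_baseline::banners          # 8_banners.pp
  include cis_baseline::maintenance      # 9_maintenance.pp
}
```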
  • 14. Coooode!

        # 1.6 Additional Process Hardening
        # 1.6.1 Restrict Core Dumps
        file { "/etc/security/limits.conf":
          ensure => "present",
          source => "puppet:///modules/cis_baseline/limits.conf",
          owner  => "0",
          group  => "0",
          mode   => "644",
        }

        # 1.6.2 Configure ExecShield
        file_line { "Execshield":
          path => "/etc/sysctl.conf",
          line => "kernel.exec-shield = 1",
        }
  • 15. Hacking /etc/pam.d/su: allows only users in the `wheel` group to use `su`

        # 6.5 Restrict Access to the su Command
        augeas { "pam.d/su":
          context => "/files/etc/pam.d/su/",
          changes => [
            "ins 01 after *[module = ''][control = 'sufficient'][type = 'auth'][last()]",
            "set 01/type auth",
            "set 01/control required",
            "set 01/module",
            "set 01/argument use_uid",
          ],
          onlyif  => "match *[type = 'auth'][control = 'required'][module = ''][argument = 'use_uid'] size == 0",
        }
  • 16. Tagging dependent modules. IPtables is managed by its own module; we check whether it is included using the `tagged` function.

        # 4.7 Enable IPtables
        # CIS Rule 4.7 should be enforced through the iptables/firewall module.
        # We only notify if it is not running
        if tagged("firewall_base") {
          notice("CIS rule 4.7 Enable IPtables is installed and enabled")
        } else {
          alert("CIS rule 4.7 Enable IPtables is not installed")
        }
  • 17. Tags: order is important
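Slide 17's point can be illustrated: `tagged()` is evaluated during catalog compilation, so the class that sets the tag must be declared before the class that tests for it. A sketch, with node and class names assumed for illustration:

```puppet
# Order matters: a tag is only visible to tagged() after the tagging
# class has been evaluated. (Node and class names are illustrative.)
node 'web01' {
  include firewall_base   # tags the catalog with "firewall_base"
  include cis_baseline    # its tagged("firewall_base") check now succeeds
  # Reversing the two includes makes the check fail: the tag is not
  # yet set when cis_baseline is compiled.
}
```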
  • 18. Central Logging: Rsyslog => Graylog2. The actual IP of the Graylog2 host is in Hiera.

        # /etc/rsyslog.conf
        # Forward all logs to central logging server
        *.* @<%= central_log_app_server %> #udp forwarding
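A minimal sketch of how the Hiera value could reach that template; the key name, file paths, and class name are assumptions, since the slide only shows the ERB variable:

```puppet
# Hypothetical wiring: look up the Graylog2 host in Hiera and render
# the rsyslog config from the ERB template shown on the slide.
class rsyslog::client {
  $central_log_app_server = hiera('central_log_app_server')

  file { '/etc/rsyslog.conf':
    ensure  => present,
    content => template('rsyslog/rsyslog.conf.erb'),
    notify  => Service['rsyslog'],
  }

  service { 'rsyslog':
    ensure => running,
    enable => true,
  }
}
```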
  • 19. Sorting Searching Alerting Graphing ...basically a SIEM on the cheap
  • 20. Network traffic logging. Why? AWS Security Groups and Network ACLs don't log anything.
  • 21. Network traffic logging How? Puppet + IPtables + Rsyslog + Graylog2 Extending the puppetlabs_firewall module from the forge
  • 22. Allow/Drop/Log 1. Allow or Drop connections 2. Tag initial connections, on both dropped and allowed 3. Don't tag established and related connections 4. Log to Graylog2 via rsyslog
  • 23. Allow/Drop/Log. Let Related and Established traffic pass through unharmed:

        firewall { "000 INPUT allow related and established":
          chain  => "INPUT",
          proto  => "all",
          state  => ["RELATED", "ESTABLISHED"],
          action => "accept",
        }
  • 24. Allow/Drop/Log. Create a "LOGNEW" chain for all NEW connections, tag them with a prefix and jump them to the LOG target, then accept the connections:

        firewallchain { 'LOGNEW:filter:IPv4':
          ensure => present,
        }

        firewall { "100 Log all NEW connections":
          chain      => "LOGNEW",
          jump       => "LOG",
          log_level  => "info",
          log_prefix => "FIREWALL TCP INBOUND ",
        }

        firewall { "101 Accept the connection":
          chain  => "LOGNEW",
          action => "accept",
        }
  • 25. Allow/Drop/Log. Jump your allowed traffic to the LOGNEW chain:

        firewall { "100 allow ssh":
          state => ["NEW"],
          dport => "22",
          proto => "tcp",
          jump  => "LOGNEW",
        }
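The slides show the accept path; slide 22 says dropped initial connections are tagged too. A hedged sketch of what the drop side could look like, mirroring the LOGNEW pattern (the chain name, rule numbers, and log prefix are assumptions):

```puppet
# Hypothetical LOGDROP chain: log NEW connections that matched no
# accept rule, then drop them.
firewallchain { 'LOGDROP:filter:IPv4':
  ensure => present,
}

firewall { '900 Log all dropped connections':
  chain      => 'LOGDROP',
  jump       => 'LOG',
  log_level  => 'info',
  log_prefix => 'FIREWALL TCP DROPPED ',
}

firewall { '901 Drop the connection':
  chain  => 'LOGDROP',
  action => 'drop',
}

# Catch-all: anything still NEW at the end of INPUT gets logged and dropped.
firewall { '999 drop and log everything else':
  chain => 'INPUT',
  state => ['NEW'],
  proto => 'all',
  jump  => 'LOGDROP',
}
```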
  • 26. Exceptions... Proxies DNS Database running nodes Other bridging type nodes
  • 27. Custom Facter to the rescue! IP ranges match the GRID Availability zone Tier
  • 28. Av.Zone custom Fact

        def get_avzone
          ipaddress = Facter.value(:ipaddress)
          if Facter.value(:tier) == "management"
            avzone = "zone_1b"
          elsif ipaddress =~ /^\d+\.\d+\.\d+\.([0-5][0-9]|6[0-2])$/
            avzone = "zone_1a"
          elsif ipaddress =~ /^\d+\.\d+\.\d+\.(6[5-9]|[789][0-9]|1[0-1][0-9]|12[0-6])$/
            avzone = "zone_1b"
          elsif ipaddress =~ /^\d+\.\d+\.\d+\.(129|1[3-8][0-9]|190)$/
            avzone = "zone_1c"
          else
            avzone = "default"
          end
          avzone
        end
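With the zone and tier exposed as facts, manifests can select per-node behavior, e.g. the firewall exceptions from slide 26. A sketch, with all fact and class names assumed for illustration:

```puppet
# Hypothetical use of the custom facts: pick firewall classes per
# tier/zone instead of hardcoding node lists.
case $::tier {
  'management': { include firewall_base::management }
  'web':        { include firewall_base::web }
  'data':       {
    # e.g. database nodes get the bridging-node exceptions
    include firewall_base::data_exceptions
  }
  default:      { include firewall_base::default }
}

notify { "node is in ${::avzone}, tier ${::tier}": }
```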
  • 29. Done!
  • 30. Good/Bad/Plain Ugly
  • 31. Good Community!
  • 32. Good Graylog2 is great and extremely flexible
  • 33. Good VPC is the way to go on AWS CloudFormation's power is incredible
  • 34. Bad. Performance of large catalogs with Puppet 2.7:

        file { "/etc/somedirectory":
          recurse  => true,
          ignore   => ["work", "temp", "log"],
          checksum => none,
        }

    Hiera-GPG is cumbersome, to say the least.
  • 35. Bad JSON notation of CloudFormation templates ...meh Tip: CFNDSL = Ruby DSL for CloudFormation templates
  • 36. Ugly Unified state and life cycle management
  • 37. Ugly. Everything is automated, but each tool uses its own: 1. DSL 2. Authentication/Authorization 3. Paradigms 4. Versioning 5. You name it...
  • 38. Ugly One single source of truth for: 1. Audit trail / logging 2. Instance status 3. Application status 4. CRUD actions on the whole infrastructure
  • 39. Hope?! RightScale, Scalr, Cloudify and similar? AWS OpsWorks?
  • 40. Hope?! Not a third-party tool or a plugin, but part of the core. Not SaaS only. Enterprise cloud provisioning, configuration management and application deployment.
  • 41. Rant over...
  • 42. Questions?