2017 - LISA - LinkedIn's Distributed Firewall (DFW)
1. Distributed Firewall (DFW)
Mike Svoboda
Sr. Staff Engineer, Production Infrastructure Engineering
LinkedIn: https://www.linkedin.com/in/mikesvoboda/
2. Agenda for today’s discussion
Slides 5-8: Problem 1: Moving machines around in the datacenter to create a DMZ
Slides 11-29: Problem 2: Horizontal vs. Vertical Network design
Slides 30-40: What is Distributed Firewall?
Slide 42: References, Q/A Session
5. Script Kiddie Hacking: Easy network attack vectors
• Port Scanning – What is the remote device responding to?
• Enumeration – Gather information about services running on the target machine
• Data Extraction – Pull as much valuable information from the remote service as possible
6. Wake up call!
PHYSICALLY MOVING MACHINES IN THE DATACENTER DOESN’T SCALE!
• Providing additional layers of network security
to an application requires either physically
moving machines around in the datacenter or
rewiring network cables to create DMZs.
• DFW complements existing network firewall
ACL systems; it does not replace them.
• It is an additional layer of security in our
infrastructure to complement existing systems.
How can we respond?
Move the machines into a DMZ behind a network
firewall, limiting network connectivity?
7. Production Network Security
TREAT THE PRODUCTION NETWORK AS IF IT’S THE PUBLIC INTERNET.
• Milton in the finance department clicked on a
bad email attachment and now has malware
on his workstation. Thanks Milton, appreciate
that.
• Milton’s workstation resides inside the internal
office network, which can connect to application
resources on Staging, Q/A, or Production servers.
• Milton is one employee out of thousands.
8. Production Network Security
TREAT THE PRODUCTION NETWORK AS IF IT’S THE PUBLIC INTERNET.
• The hacker who has control of Milton’s machine was
able to exploit one application out of thousands, and
now has full production network access.
• The hacker can take their time analyzing various
production services, probing what responds to API
calls.
• What are the details behind the Equifax leak(s)?
10. The Vertical Network Architecture
• Big iron switches deployed at
the entry point of the
datacenter with uplink access
to LinkedIn’s internal
networks.
• More big iron switches at the
second and third tiers of the
network.
• This image is a logical
representation; each cluster holds
at minimum 1k servers, upwards of 5k.
DATACENTER CLUSTERS PER ENVIRONMENT
11. The Vertical Network Architecture
• Each packet between environments has to flow through
thousands of rules before hitting a match.
• Firewall admin has to fit the entire security model into
their brain. This is error prone and difficult to update.
• TCAM tables are stored in hardware silicon. We’re
limited in the complexity that can be enforced.
• Hardware ASICs are fast, but expensive! Deploying
big iron costs millions of dollars!
DATACENTER CLUSTERS PER ENVIRONMENT
12. The Vertical Network Architecture
• Traffic shifts become problematic, as not all ACLs
exist in every CRT.
• TCAM tables can only support the complexity of the
environment they host, not all “PROD” ACLs. They could
support the “PROD1” logical implementation of
linkedin.com, but not the “PROD2” and “PROD3”
application fabrics.
• The human cost of hand-maintaining per-application CRT
ACLs rises exponentially.
MULTIPLE CLUSTERS PER DATACENTER
13. The Horizontal Network Architecture
• Instead of scaling vertically, scale horizontally using
interconnected pods. Offer multiple paths for machines
to communicate with each other.
• Allow datacenter engineering to maximize resources.
• The “cluster” is too large of a deployment unit. Sometimes
we need to add capacity to an environment down to the
cabinet level.
BUILD PODS INSTEAD OF CLUSTERS
14. Present: Altair Design
[Diagram: 64 pods (Pod 1 – Pod 64); each pod contains ToR1–ToR32 and Leaf1–Leaf4, with uplinks to Spine1–Spine32 in each spine plane]
True 5-stage Clos architecture (maximum path length: 5 chipsets to minimize latency)
Moved complexity from big boxes to our advantage, where we can manage and control!
Single SKU - same chipset - uniform I/O design (bandwidth, latency and buffering)
Dedicated control plane, OAM and CPU for each ASIC
15. Non-Blocking Parallel Fabrics
[Diagram: four parallel fabrics (Fabric 1–Fabric 4); each ToR and its attached servers connect into all four fabric planes]
23. Tier 1
ToR - Top of the Rack
Broadcom Tomahawk 32x 100G
10/25/50/100G attachment
Regular server attachment: 10G
Each Cabinet: 96 Dense Compute units
Half Cabinet (Leaf-Zone): 48x 10G ports for servers + 4 uplinks of 50G
Full Cabinet: 2x single-ToR zones: 48 + 48 = 96 servers
[Diagram: Project Falco fabric – servers attach to ToRs, ToRs to leaves, leaves to spines]
24. Tier 2
Leaf
Broadcom Tomahawk 32x 100G
Non-Blocking Topology:
32x downlinks of 50G to serve 32 ToR
32x uplinks of 50G to provide 1:1 Over-subscription
25. Tier 3
Spine
Broadcom Tomahawk 32x 100G
Non-Blocking Topology:
64 downlinks to provide 1:1 Over-subscription
To serve 64 pods (each pod 32 ToR)
100,000 servers total; each pod holds approximately 1,550 compute nodes
29. Where do we put the Firewall in this architecture?
• Since we’ve scaled the network horizontally, there’s no “choke point” like we had with the vertical
network architecture
• We want to be able to mix / match security zones in the same rack to maximize space / power
• We want a customized security profile, down to the per-server or per-container (network
namespace) level, that is unique to the deployed applications.
• By default, reject any requests from less trusted zones to anything in PROD unless defined
ACLs permit them.
31. What is DFW?
• Software Defined Networking (SDN)
• The applications deployed to the machine / container create a unique security profile.
• Deny incoming by default. Allow all loopback. Allow all outbound. (A minimal rule sketch follows this slide.)
• Whitelist incoming application ports to accept connections from the same security zone.
• Cross-security-zone communication requires human-created ACLs based on our Topology application
deployment system.
• As deployment actions happen across the datacenter, host-based firewalls detect these conditions and
update their rulesets accordingly.
• The underlying firewall implementation is irrelevant.
• Currently using netfilter (iptables) and nftables on Linux, but this could expand to ipf, pf, Windows, etc.
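A minimal sketch of that default policy expressed as raw iptables rules (the ipset name prod_netblocks is a hypothetical placeholder for the security-zone membership set, and 11016 is the example application port used later in the deck; this is illustrative, not the actual DFW ruleset):

    iptables -A INPUT -i lo -j ACCEPT                                        # allow all loopback
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # let replies to outbound connections back in
    iptables -A INPUT -p tcp --dport 11016 -m set --match-set prod_netblocks src -j ACCEPT   # whitelist an app port within the zone
    iptables -A INPUT -j REJECT --reject-with icmp-port-unreachable          # reject (not drop) everything else
    iptables -P OUTPUT ACCEPT                                                # allow all outbound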
32. Advantages of DFW
• Fully distributed. More network I/O throughput, CPU horsepower, scales linearly.
• Datacenter space is fully utilized and the physical network is flattened. The logical network is quite
different.
• The VLANs the top-of-rack switch exposes determine the security zones the attached
machines belong to, not the massive vertical network cluster. Multiple security zones are
co-located in the same rack. New security zones are trivial to create.
• Only expose the network ports defined in our CMDB application deployment system
• Further limit which upstream consumers can reach those network ports by consuming the
application call graph.
• Able to canary / ramp ACL changes down to the per-host or per-container level; no big-bang
modifications required.
33. Advantages of DFW
• Each node contains a small subset of rules vs. the CRT
network firewall containing tens of thousands.
• Authorized users can modify the firewall on-demand
without disabling it.
• Communicate keep-alive executions and notify if a
machine stops executing DFW. (hardware failure,
etc.)
• ACL complexity is localized to the service that
requires it.
34. New Business Capabilities
• Pre-security zone: functionality that only host-based firewalls could provide (sketched below):
• Blackhole: stop an application listening on port 11016 from taking any traffic, or block
specific upstream consumers.
• QoS: sshd and Zookeeper network traffic should get priority over Apache Kafka network
I/O.
• Pinhole: based on the callgraph, only allow upstream consumers to access my
application on port 11016.
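Hedged sketches of how each capability could be expressed with iptables / ipset (port 11016 comes from the slide; the set name callgraph_upstreams and the DSCP class are illustrative assumptions, not LinkedIn's actual rules):

    # Blackhole: keep the app on port 11016 running, but reject all of its inbound traffic
    iptables -I INPUT -p tcp --dport 11016 -j REJECT --reject-with icmp-port-unreachable

    # QoS: mark sshd traffic so it can be prioritized over bulk traffic (illustrative DSCP class)
    iptables -t mangle -A OUTPUT -p tcp --sport 22 -j DSCP --set-dscp-class CS6

    # Pinhole: only accept port 11016 from hosts in the upstream-consumer callgraph set
    ipset create callgraph_upstreams hash:ip
    iptables -I INPUT -p tcp --dport 11016 -m set ! --match-set callgraph_upstreams src -j REJECT --reject-with icmp-port-unreachable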
35. New Business Capabilities
• Decommission datacenters in a controlled manner
• Allow authorized users to keep applications online, with DFW rejecting all inbound /
outbound application traffic. Allow SSH / sudo / infrastructure services to stay online.
• Conntrackd data exposed
• IPv6 support comes for free! (See the sketch below.)
• Using ipset list:sets, every rule in DFW is written referencing the IPv4 and IPv6
addresses / netblocks in parallel. As the company shifts from IPv4 to IPv6 and new
AAAA records come online, DFW automatically inserts these addresses and the
firewalls permit the IPv6 traffic.
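A rough sketch of the list:set arrangement (set names are placeholders):

    ipset create app_consumers_v4 hash:net family inet     # IPv4 netblocks
    ipset create app_consumers_v6 hash:net family inet6    # IPv6 netblocks / AAAA-derived addresses
    ipset create app_consumers list:set                    # parent set referenced by the rules
    ipset add app_consumers app_consumers_v4
    ipset add app_consumers app_consumers_v6
    # The same rule text can then appear verbatim in both the iptables and ip6tables files:
    #   -A INPUT -p tcp --dport 11016 -m set --match-set app_consumers src -j ACCEPT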
36. ACLDB
• Centralized database, fed from sources
of truth by scraping CMDB, that delivers
JSON data containers to each machine.
• JSON containers land on machines via
automated file transfers. (A hypothetical
example follows this slide.)
• Intra-security zone communication
(What can communicate inside PROD?)
• Inter-security zone communication (What
is allowed to reach into PROD?)
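Purely illustrative: the deck only says that ACLDB delivers JSON data containers to each machine, so the shape below is a guess at what one might contain (field names invented), not the real schema:

    {
      "intra_zone": { "tcp_ports": [11016] },
      "inter_zone": [ { "src_zone": "ZONE1", "proto": "tcp", "dst_port": 9000 } ]
    }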
37. High Level Architecture
• Only inbound traffic is filtered. All loopback / outbound traffic will always be immediately
passed.
• Network security will be enforced by filtering inbound traffic at the known destination.
• DFW rejects traffic; we do not drop traffic. The source host knows it has been rejected via
an ICMP port-unreachable event.
• Build safeguards. Don’t firewall off 30k machines and become unrecoverable without pulling
power to the whole datacenter.
38. High Level Architecture
• Pre-security zone: functionality referenced on the “New Business Capabilities” slide.
• Security zone: mimic the existing network firewalls, allowing PROD → PROD
communication. Rules are written as “accept from any” because we jump into a new
iptables chain once the source machine resides in PROD netblocks.
• Post-security zone: inter-security-zone rules maintained in ACLDB. “Allow 5x
machines in ZONE1 to hit 10x machines in PROD…” (A chain-layout sketch follows this slide.)
• The rules placed in /etc/sysconfig/iptables and /etc/sysconfig/ip6tables are identical, since
they reference list:set ipsets, which in turn reference the necessary IPv4 and IPv6 sub-ipsets.
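A hedged sketch of that three-stage chain layout (chain and ipset names are illustrative, not the actual rule files):

    iptables -N PRE_ZONE     # blackhole / QoS / pinhole functionality
    iptables -N PROD_ZONE    # intra-security-zone rules ("accept from any" once inside the zone chain)
    iptables -N POST_ZONE    # inter-security-zone ACLs maintained in ACLDB
    iptables -A INPUT -j PRE_ZONE
    iptables -A INPUT -m set --match-set prod_netblocks src -j PROD_ZONE    # jump when the source resides in PROD netblocks
    iptables -A INPUT -j POST_ZONE
    iptables -A PROD_ZONE -p tcp --dport 11016 -j ACCEPT                    # app port whitelisted inside PROD
    iptables -A POST_ZONE -p tcp --dport 9000 -m set --match-set zone1_hosts src -j ACCEPT   # ZONE1 -> PROD ACL
    iptables -A INPUT -j REJECT --reject-with icmp-port-unreachable         # reject anything still unmatched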
39. DFW is stateless. Precompute the ruleset, every execution
• Every execution of DFW builds the iptables / ipset configuration from scratch and compares
it to the live state in the kernel.
• The current state of iptables / ipsets does not matter.
• Users could flush the ruleset, reboot, add or delete entries, destroy or create ipsets. We
use auditd to monitor setsockopt() system calls for unexpected rule insertions.
• On the next execution of DFW, we converge from whatever the current state is to the intended state,
either on schedule or on discovery of setsockopt() calls. (A converge sketch follows this slide.)
• Debugging is simple. Firewall issues after a DFW execution are not caused by a “previous state
issue”; the current state needs a behavior change for things to work.
• Whitelisted network ports: is the source machine connecting to me from within my security zone, or do
I need to add a rule in ACLDB to permit the traffic?
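A minimal converge sketch based on the ipset-swap / iptables-restore approach described in the editor's notes (the set name and example netblock are placeholders; both sets must already exist and share a type for the swap):

    ipset create app_consumers_v4_tmp hash:net family inet     # rebuild the desired membership from scratch
    ipset add app_consumers_v4_tmp 10.0.0.0/8                  # example entry computed from CMDB / ACLDB data
    ipset swap app_consumers_v4_tmp app_consumers_v4           # atomic, all-or-nothing promotion
    ipset destroy app_consumers_v4_tmp
    iptables-restore < /etc/sysconfig/iptables                 # enforce that in-kernel rules match the expanded template
    auditctl -a always,exit -F arch=b64 -S setsockopt -k dfw   # (illustrative) watch for out-of-band rule changes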
40. Work with the humans, not against them
• Since automation is constantly enforcing its known good state, we need to plan for emergency
situations where authorized users have to modify the firewall on demand.
• Example 1: Authorized users need to whitelist a network port ASAP to stop an outage.
• An authorized user adds a destination network port to a specific ipset, which immediately starts whitelisting that
traffic within the same security zone (PROD → PROD, port 9000). This allows time to register the network port
with the application in our CMDB application deployment system. DFW cleans this ipset automatically. (A
sketch of such an ipset follows this slide.)
• Example 2: Authorized users want to blackhole an application without stopping / shutting it down.
• Shutting down an application loses in-memory state that could be useful for developers to debug. Adding
destination port 9000 into this ipset allows the application to remain online, but reject all incoming requests.
• Example 3: Deployment actions.
• Chicken and egg – DFW depends on the application deployment system to determine application-to-server
mapping. At deployment time, an ipset gets modified to immediately whitelist the traffic. DFW cleans this ipset.
IPTABLES RULES REFERENCE TYPICALLY EMPTY IPSETS, EXPECTING HUMAN
INPUT.
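A hedged sketch of one such normally-empty emergency ipset (names are illustrative):

    ipset create emergency_ports bitmap:port range 1-65535
    iptables -I INPUT -p tcp -m set --match-set prod_netblocks src -m set --match-set emergency_ports dst -j ACCEPT
    # During an outage, an authorized user can immediately whitelist PROD -> PROD traffic on a port:
    #   ipset add emergency_ports 9000
    # DFW empties this ipset again once the port is registered in the CMDB deployment system.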
41. References:
• Altair Network Design: https://www.slideshare.net/shawnzandi/linkedin-openfabric-project-interop-2017
• Eng blog post on Altair: https://engineering.linkedin.com/blog/2016/03/project-altair--the-evolution-of-linkedins-data-center-network
• Programmable Data Center: https://engineering.linkedin.com/blog/2017/03/linkedin_s-approach-to-a-self-defined-programmable-data-center
• Facebook’s Spine and Leaf: https://code.facebook.com/posts/360346274145943/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/
• Facebook’s Spine and Leaf (video): https://www.youtube.com/watch?v=mLEawo6OzFM
• Milton from Office Space: http://www.imdb.com/title/tt0151804/
42. Q/A session
Production ready implementation / Demo the technology:
Zener: https://www.zener.io/lisa17
BOF: Distributed, Software Defined Security in the Modern Data Center
Thursday, November 2, 9:00 pm–10:00 pm, Marina Room
LinkedIn: https://www.linkedin.com/in/mikesvoboda
Editor's Notes
This doesn’t scale, and it’s just one application out of thousands.
Was a front-end application hacked, granting direct network access to backend databases or other application-level APIs that pulled from the backend databases?
Could isolation of those middle-tier API applications or backend databases have prevented identity theft?
How many thousands / millions of times have there been data leaks at organizations?
Code is often written just well enough to become operational, or to address scaling issues. Security considerations can be a second or third (or lower) priority.
If script kiddies can access your internal application resources, what can state-sponsored hackers do? What can motivated organizations with $$$ do? The highly capable?
Network firewall, on the CRT, is a bottleneck for traffic coming in / out of the datacenter core.
All firewall ACLs have to be processed at the CRT
Central single point of failure. Failover to the secondary CRT is highly impacting to production traffic.
Error in ACL promotion can affect thousands of machines behind the CRT!
Each new datacenter facility or “cluster” could trigger thousands of ACL updates on other CRTs.
Power, rack space, cooling, and network are a lot more expensive than the actual machines using them!!! We are wasting millions of dollars in underutilized resources!
We are abandoning space in the datacenter because we only have room for one or two additional cabinets, or have maxed out power / cooling.
Four parallel fabrics: each cabinet/rack connects to all four planes through its 4 leaf switches. They are color-coded in the diagram to make the connections easier to follow.
The Clos architecture was designed by Charles Clos in 1952 to represent multi-stage telephone switching systems.
We chose to build our network on top of merchant silicon, a very common strategy for mega-scale data centers.
Machines from ZONE1, ZONE2, and PROD could all co-exist in the same physical rack, connected to the same TOR
Only allow frontend webservers to hit midtier application servers on port 9000, reject all other requests from PROD.
Only expose network paths to the APIs your applications provide to the upstream consumers!
Users / hackers can’t spin up netcat or sshd and start listening on some random port. Applications are not exposed to the network until their defined network ports are registered into CMDB.
Keeps DEV honest – new traffic flows can’t be introduced, unbeknownst to Operations, without registration.
We won’t block legitimate operational tasks.
Easily auditable
Packets destined for other applications do not have to traverse irrelevant ACLs in network TCAM tables.
Restricts access to an application inside the security zone (No open PROD communication)
Each application can create a unique security profile. We aren’t restricted to large concepts like “PROD” security zones or “DMZs”.
Permits immediate rollback in case of unexpected service shutdowns.
When machines remain on the network, we retain automation control and auditability.
Enhance the callgraph, and monitor incoming connections via conntrackd. No longer limited to expensive / unreliable netstat -an snapshots.
Some data is shared across the entire security zone (Allow bastion hosts to ssh into PROD), others are the unique attributes per machine.
Allow machines in ZONE1 access to hit Voldemort in PROD, allow desktops access to SSH to hosts in ZONE2.
Debugging DFW rejections will always happen on the destination node. If we filtered outbound traffic, it becomes too complex to debug rejection events.
Application X → Application Y isn’t working. Application Y doesn’t see inbound traffic. I connect to machine Y, not knowing which machines are supposed to send data to it… Where is my traffic being dropped? Somewhere out in PROD? Application X could be hundreds or thousands of machines.
Debugging Application X → Application Y becomes simple. There are only two rejection reasons.
The source host in Application X doesn’t reside in my security zone.
The network port Application Y uses hasn’t been registered with Topology to whitelist the incoming traffic.
Ipsets contain the “why” we accepted or rejected traffic. The rules in the iptables file are the “high-level objective” for what we are trying to achieve.
99.999% of changes are made in the ipsets, not iptables rules. As machines move in / out of applications or netblocks update, IPv6 comes online, ipset membership changes automatically. The IPv6 support simplifies IPv6 migration so we don’t have to burn network firewall TCAM memory space in silicon, duplicating the existing IPv4 ruleset.
Creates “temporary ipsets” and uses “ipset swap <foo> <tmp_foo>” to promote membership changes if they are not identical. (Adding or removing specific entries is not calculated.)
Ipset swap is atomic. Adding 100x new ip addresses, ports, or netblocks to the firewall is an all-or-nothing operation at one instant.
Most change in DFW happens with ipset membership changing, not iptables template file expansion changing.
The CFEngine template expands iptables / ip6tables and executes iptables-restore < /etc/sysconfig/iptables to enforce that the in-memory state remains what we expanded.