In this hands-on workshop, we cover the basics of locking down in-cluster network traffic using the new traffic policies introduced in Linkerd 2.11. Using Linkerd’s ability to authorize traffic based on workload identity, we cover a variety of practical use cases, including restricting access to a critical service, preventing traffic across namespaces, and locking down traffic while still allowing metrics scrapes, health checks, and other meta-traffic.
2. Hi, we're Buoyant!
We created Linkerd! And we help you run Linkerd by providing
management tools (Buoyant Cloud), support, training, and much
more.
At your service today:
★ William Morgan, CEO (@wm)
★ Jason Morgan (not related!), MC (@RJasonMorgan)
★ Lots of other friendly Buoyant folks in the Linkerd Slack.
Have questions or need help? Join the #workshops channel on
slack.linkerd.io and help each other!
3. Let's dive right in!
★ Linkerd 2.11 introduced a big new
feature: authorization policy.
★ This feature gives you control over the
types of communication that are allowed on
your cluster.
★ It's built on top of mTLS identity and
enforced at the pod level (zero-trust
compatible).
4. But what do we mean by "authorization policy"?
★ By default, Kubernetes allows all communication to and from any pod.
★ By default, Linkerd also allows all communication to and from any
(meshed) pod.
★ Authorization policy refers to restricting some types of communication.
★ Called "authorization policy" because it works by denying requests unless
they're properly authorized.
So authorization policy gives Linkerd the power to say "no".
5. What kinds of communication can be restricted?
Today, Linkerd's policies are purely server-side policies (enforced by the
inbound proxies) and authorize individual connections. This means they:
★ Can only restrict traffic to meshed pods.
★ Can only restrict connections, not individual requests.
This is just a first step! In the future (e.g. 2.12) we'll add:
★ Client-side policies (restrict traffic from meshed pods)
★ Fine-grained policy (verbs, paths, gRPC methods)
★ More!
6. Linkerd's authorization policies vs NetworkPolicies
Authorization Policies
★ Use workload identity (i.e.
ServiceAccount)
★ "Include" encryption
★ Enforced at the pod level
(zero trust)
★ Can capture L7 semantics
★ Are ergonomic
Network Policies
★ Use network identity (i.e. IP
address)
★ No encryption
★ Enforced at the CNI layer
★ No L7 semantics
★ Hard to use
7. How is authorization policy expressed?
Two mechanisms that work together:
★ A default policy, typically set through a
config.linkerd.io/default-inbound-policy annotation.
★ Two CRDs, Server and ServerAuthorization, that specify exceptions
to the default policy.
This brings the total number of Linkerd CRDs to 4. Sorry!
8. Default policies
★ Every cluster has a cluster-wide default policy, set at install time with
policyController.defaultAllowPolicy
○ By default: all-unauthenticated
★ The default policy can be overridden at the namespace or workload
level
○ Set the config.linkerd.io/default-inbound-policy annotation
★ Every proxy's default policy is fixed at startup time.
○ If you want to change its default policy, you need to restart the pod!
○ Can be viewed in the environment variables for the proxy container.
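As a sketch, a namespace-level override might look like this (the namespace name and chosen policy are assumptions):

```yaml
# Sketch: override the default inbound policy for one namespace.
# Proxies read this at startup, so restart existing workloads after changing it.
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  annotations:
    config.linkerd.io/default-inbound-policy: cluster-authenticated
```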
9. Available default policies
★ all-unauthenticated: allow all
★ cluster-unauthenticated: allow from clients with source IPs in the
cluster.
★ all-authenticated: allow from clients with Linkerd's mTLS
★ cluster-authenticated: allow from in-cluster clients with Linkerd's
mTLS
★ deny: deny all
10. A note about cluster networks
★ Kubernetes doesn't give us a great way of knowing what the actual
network IP range is
★ Linkerd just uses all private IP space by default
★ But in practice, you should probably restrict this to the cluster's actual
network space by setting the clusterNetworks variable at
install/upgrade time.
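For example, via Helm values for the linkerd2 chart (the CIDRs here are placeholders; substitute your cluster's real pod and service ranges):

```yaml
# values.yaml sketch -- CIDRs are placeholders, not recommendations
clusterNetworks: "10.0.0.0/16,10.96.0.0/12"
```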
11. The Server CRD
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: voting-grpc
spec:
  podSelector:
    matchLabels:
      app: voting-svc
  port: voting-grpc
  proxyProtocol: gRPC
★ Selects over a port, and a set of
pods, in a namespace
★ Give it a protocol hint and you
can avoid protocol detection!
Example: the gRPC port on the
emojivoto voting service
12. Servers can match multiple workloads!
Example: the admin port on every
pod in this namespace
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: admin
spec:
  port: linkerd-admin
  podSelector:
    matchLabels: {} # every pod
13. By themselves, Servers deny all traffic!
★ If you create a Server for a port, all traffic to that port will be denied.
○ This overrides the default policy.
★ If you want to allow traffic, you need to create a
ServerAuthorization that references that Server
14. The ServerAuthorization CRD
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: admin-unauthed
spec:
  server:
    name: admin
  client:
    unauthenticated: true
★ Selects over one or more
Servers
★ Describes the types of traffic
that are allowed to those
Servers
Example: unauthenticated traffic
to the "admin" Server is allowed
15. ServerAuthz's can match multiple Servers!
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: internal-grpc
spec:
  server:
    selector:
      matchLabels:
        emojivoto/api: internal-grpc
  client:
    meshTLS:
      serviceAccounts:
      - name: web
Example: traffic to any Server with
the "emojivoto/api" label is
allowed if it's mTLS traffic from the
"web" ServiceAccount
16. Putting it all together
So, when a connection comes to a port on a meshed pod, how does
Linkerd decide what to do? It uses this basic logic:
Is the (pod, port) selected by a Server?
★ Yes => Is that Server selected by a ServerAuthorization?
○ Yes => Follow the ServerAuthorization's rules for that connection
○ No => Deny the connection
★ No => Use the default policy for the pod
17. How does it feel to be rejected?
★ If Linkerd knows this is a gRPC connection
○ Denial is a grpc-status: PermissionDenied response
★ If Linkerd knows this is an HTTP/1 or HTTP/2 connection
○ Denial is a 403 response
★ Otherwise
○ Denial is a refused TCP connection
If you update your policies, Linkerd will happily terminate established
connections if they are no longer allowed!
18. Gotcha #1: Kubelet probes need to be authorized!
★ If you are building a "deny by default" setup, you need to make sure
Kubelet probes (liveness checks, readiness checks, health checks, etc) are
authorized!
○ Otherwise your pod won't start.
★ This also applies if you're building an "authenticated by default" setup.
Kubelet probes are plaintext / unauthenticated.
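One way to do this in 2.11 is to allow unauthenticated traffic to the port the probes hit. A sketch (the pod label and port name are assumptions and must match your actual pod spec; since 2.11 policy is per-connection, this opens the whole port, not just the probe paths):

```yaml
# Sketch: authorize kubelet probes in a "deny by default" namespace
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: web-http
spec:
  podSelector:
    matchLabels:
      app: web-svc      # hypothetical pod label
  port: http            # the port your liveness/readiness probes target
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: web-probes
spec:
  server:
    name: web-http
  client:
    unauthenticated: true   # kubelet probes are plaintext
```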
19. Gotcha #2: Default policies are not read dynamically!
★ The default policy for a pod is fixed at startup time, based on the
annotations then present in the namespace and workload.
★ ... with one edge case, which is that you can dynamically change the
cluster-wide default with linkerd upgrade. Only works if no annotations
are overriding it.
The Server and ServerAuthorization CRs are, of course, read dynamically.
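One practical consequence: if you set the override on the workload's pod template rather than the namespace, changing the annotation triggers a rollout, so the new default takes effect without a manual restart. A sketch (the deployment name, labels, and image are hypothetical):

```yaml
# Sketch: workload-level default policy, set in the pod template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # hypothetical workload
  namespace: emojivoto
spec:
  selector:
    matchLabels:
      app: web-svc
  template:
    metadata:
      labels:
        app: web-svc
      annotations:
        config.linkerd.io/default-inbound-policy: deny
    spec:
      containers:
      - name: web
        image: example.com/web:v1   # placeholder image
```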
20. Gotcha #3: Ports must be in the pod spec!
If a Server references a port that is not in the pod spec, it will be ignored.
21. Hands-on time!
Let's take a look at how to get our Emojivoto app into a high security,
"deny by default" namespace.
(Based loosely on Go Directly to Namespace Jail by Linkerd maintainer
Alejandro Pedraza)
22. Next Workshop
A guide to end-to-end encryption with Emissary-ingress and Linkerd
Thu, Feb 17, 2022
9 am PST | 12 pm EST | 6 pm CET
Register today!
buoyant.io/register/end-to-end-encryption-with-emissary-and-linkerd
…and coming up in March: Certificate management for Linkerd
The best way to run Linkerd
in mission-critical
environments
Request a demo
buoyant.io/demo
★ Automatically track data plane and control plane health
★ Manage mesh certificates and versions
★ Build the ultimate service mesh platform
★ Get full Linkerd support