Apache Spark on K8s + HDFS Security
Ilan Filonenko (ifilonenko@bloomberg.net)
Agenda
1. Kubernetes intro
2. Big Data on Kubernetes
3. Demo: Spark on K8s accessing secure HDFS
4. Secure HDFS deep dive
5. HDFS running on K8s
6. Data locality deep dive
Kubernetes
“New” open-source cluster manager.
- github.com/kubernetes/kubernetes
[Diagram: several apps, each with its own libs, all sharing a single kernel]
Runs programs in Linux containers.
1600+ contributors and 60,000+ commits.
“My app was running fine
until someone installed
their software”
- Jane Doe, Sr. Dev
DON’T TOUCH MY STUFF
More isolation is good
Kubernetes provides each program with:
● a lightweight virtual file system -- Docker image
○ an independent set of S/W packages
● a virtual network interface
○ a unique virtual IP address
○ an entire range of ports
Other isolation layers
● Separate process ID space
● Max memory limit
● CPU share throttling
● Mountable volumes
○ Config files -- ConfigMaps
○ Credentials -- Secrets
○ Local storage -- EmptyDir, HostPath
○ Network storage -- PersistentVolumes
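A minimal sketch of how these layers attach to a single pod, using the fabric8 Kubernetes client from Scala (all names, images, and sizes here are illustrative, not from the talk):

```scala
import io.fabric8.kubernetes.api.model.{PodBuilder, Quantity}

// Sketch: one pod carrying the isolation layers listed above.
val pod = new PodBuilder()
  .withNewMetadata().withName("demo-app").endMetadata()
  .withNewSpec()
    .addNewContainer()
      .withName("app")
      .withImage("example.com/demo-app:1.0")          // lightweight virtual file system
      .withNewResources()
        .addToLimits("memory", new Quantity("512Mi")) // max memory limit
        .addToLimits("cpu", new Quantity("500m"))     // CPU share throttling
      .endResources()
      .addNewVolumeMount().withName("config").withMountPath("/etc/app").endVolumeMount()
    .endContainer()
    .addNewVolume()
      .withName("config")
      .withNewConfigMap().withName("demo-app-config").endConfigMap() // config files
    .endVolume()
  .endSpec()
  .build()
```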
Kubernetes architecture
[Diagram: node A (196.0.0.5) runs Pod 1 (10.0.0.2) and Pod 2 (10.0.0.3); node B (196.0.0.6) runs Pod 3 (10.0.1.2)]
Pod: the unit of scheduling and isolation.
● runs a user program in a primary container
● holds isolation layers like a virtual IP in an infra container
Big Data on Kubernetes
Since Spark 2.3, the community has added features:
● non-JVM binding support and memory customization
● client-mode support for running interactive apps
● large framework refactors: init-container removal; scheduler
Talk: https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/63855
Kerberos work: https://github.com/apache/spark/pull/21669
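As a rough sketch of client mode, an interactive app can point a SparkSession at a cluster like this (the API-server endpoint and image are hypothetical; config keys follow Spark's Kubernetes docs):

```scala
import org.apache.spark.sql.SparkSession

// Client-mode session against a Kubernetes cluster (values illustrative).
val spark = SparkSession.builder()
  .master("k8s://https://kube-apiserver.example.com:6443")
  .appName("k8s-client-mode-demo")
  .config("spark.kubernetes.container.image", "example.com/spark:3.5.0")
  .config("spark.executor.instances", "2")
  .getOrCreate()

spark.range(1000).selectExpr("sum(id)").show()
```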
Spark on Kubernetes
[Diagram: Spark Core drives the Kubernetes Scheduler Backend, which asks the Kubernetes Cluster for new executors and removes executors]
Configuration covers:
• Resource requests
• Authn/authz
• Communication with K8s
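A hedged sketch of that configuration surface (keys per Spark's Kubernetes docs; the values are examples, not recommendations):

```scala
import org.apache.spark.SparkConf

// What the scheduler backend negotiates with the cluster.
val conf = new SparkConf()
  .set("spark.executor.instances", "4")
  .set("spark.executor.memory", "2g")                  // resource requests
  .set("spark.kubernetes.executor.request.cores", "1")
  .set("spark.kubernetes.executor.limit.cores", "2")
  .set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark") // authn/z
```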
Spark on Kubernetes
[Diagram: two clients share the cluster. Job 1: Driver Pod (10.0.0.2) with Executor Pod 1 (10.0.0.3) and Executor Pod 2 (10.0.1.2). Job 2: Driver Pod (10.0.0.4) with Executor Pods (10.0.0.5, 10.0.1.3). Pods span nodes A (196.0.0.5) and B (196.0.0.6)]
What about storage?
Spark on Kubernetes supports cloud storage like S3.
Your data is often stored on HDFS:
[Diagram: Driver and Executor Pods (10.0.0.2, 10.0.0.3, 10.0.1.2) on nodes A (196.0.0.5) and B (196.0.0.6) reading from a Namenode and Datanodes 1-2]
● Access remote HDFS running outside Kubernetes
● Run HDFS itself on Kubernetes -- github.com/apache-spark-on-k8s/kubernetes-HDFS
○ HDFS Operator
[Diagram: the same Spark pods and HDFS daemons, with HDFS access now guarded by Kerberos]
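For the first option, a remote read is just an hdfs:// path from the session above (the namenode hostname and path are hypothetical; secure HDFS additionally needs the Kerberos setup covered below):

```scala
// Read from a remote HDFS cluster running outside Kubernetes.
val events = spark.read.text("hdfs://namenode.example.com:8020/data/events")
println(events.count())
```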
Agenda
1. Kubernetes intro
2. Big Data on Kubernetes
3. Demo: Spark on K8s accessing secure HDFS
4. Secure HDFS deep dive
5. HDFS running on K8s
6. Data locality deep dive
Demo: Spark on K8s Accessing Secure HDFS
Running a Spark Job on Kubernetes accessing Secure HDFS
Single-node, pseudo-distributed, Kerberized Hadoop cluster
https://github.com/ifilonenko/hadoop-kerberos-helm
Spark Submit with Kerberos Configs
https://github.com/ifilonenko/secure-hdfs-test
Keytab and kinit
https://asciinema.org/a/2vIJdw1N53Lo7LoSR09OMKdRH
Security deep dive
● Kerberos tickets
● HDFS tokens
● Long-running jobs
● Access Control of Secrets
Kerberos, simplified
[Diagram] User A to the Kerberos Server: “I’m user A. May I talk to HDFS?”
The server knows A’s password and HDFS’ password. It mints a session key SK1 and replies with:
● a copy of SK1 for User A, encrypted with A’s password
● a “Ticket to HDFS” holding a copy of SK1 for HDFS, encrypted with HDFS’ password
Session 1 requests/responses are then encrypted with the session key SK1.
The server’s promise: “You guys should talk only if the other side knows SK1. I’ll get SK1 to each of you secretly. I guarantee that the other side is genuine if they know SK1.”
Analogy: a duplicate order slip -- “Order # SK1, Customer copy” and “Order # SK1, Merchant copy”.
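The same exchange as a toy Scala model -- not real cryptography, just the trust argument (Encrypted, grant, and the passwords are all stand-ins):

```scala
// Toy model of the simplified Kerberos exchange; "encrypt" is a
// placeholder for symmetric encryption under a shared key.
case class Encrypted[A](payload: A, key: String)

object ToyKerberos {
  // The Kerberos server knows every principal's long-term secret.
  val passwords = Map("userA" -> "pwA", "hdfs" -> "pwHDFS")

  def encrypt[A](payload: A, key: String): Encrypted[A] = Encrypted(payload, key)
  def decrypt[A](c: Encrypted[A], key: String): Option[A] =
    if (c.key == key) Some(c.payload) else None

  // "I'm user A. May I talk to HDFS?"
  def grant(user: String, service: String): (Encrypted[String], Encrypted[String]) = {
    val sk1 = "SK1"                                     // fresh session key
    (encrypt(sk1, passwords(user)),                     // only A's password opens this
     encrypt(sk1, passwords(service)))                  // the "Ticket to HDFS"
  }
}

// User A decrypts its copy; the ticket is forwarded to HDFS untouched.
// Both sides end up holding SK1, so each can trust the other is genuine.
val (copyForA, ticketToHdfs) = ToyKerberos.grant("userA", "hdfs")
val sk1AtA    = ToyKerberos.decrypt(copyForA, "pwA")
val sk1AtHdfs = ToyKerberos.decrypt(ticketToHdfs, "pwHDFS")
assert(sk1AtA.isDefined && sk1AtA == sk1AtHdfs)
```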
HDFS Delegation Token
A Kerberos ticket is no good for executors on cluster nodes:
● it is stamped with the client IP.
Give tokens to the driver and executors instead:
● issued by the namenode only if the client has a valid Kerberos ticket
● no client IP stamped
● a permit for the driver and executors to use HDFS on your behalf across all cluster nodes
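Obtaining tokens with the stock Hadoop client API might look like this (the renewer principal name is hypothetical; assumes a Kerberos login is already in place, e.g. via kinit or a keytab):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

val conf  = new Configuration()
val creds = new Credentials()

// Ask the namenode for delegation tokens; the "renewer" is the
// principal allowed to renew them later (name illustrative).
val fs = FileSystem.get(conf)
fs.addDelegationTokens("spark-token-renewer", creds)

// Attach the tokens to the current user so subsequent HDFS calls use
// them instead of the Kerberos ticket -- what driver/executors need.
UserGroupInformation.getCurrentUser.addCredentials(creds)
```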
Problem: Driver & executors need the token
Solved: Share tokens via a K8s Secret
[Diagram: the client stores the delegation token in Secret 1; the Driver Pod (10.0.0.2) and Executor Pods (10.0.0.3, 10.0.1.2) on nodes A and B mount Secret 1 and present the token (“ADMIT USER”) to the Kerberized Namenode and Datanodes]
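A sketch of how the client side could package the tokens for sharing, assuming the fabric8 client (the secret and key names are illustrative):

```scala
import java.io.{ByteArrayOutputStream, DataOutputStream}
import java.util.Base64
import org.apache.hadoop.security.Credentials
import io.fabric8.kubernetes.api.model.{Secret, SecretBuilder}

// Serialize the delegation tokens and wrap them in a K8s Secret that
// Spark mounts into the driver and executor pods.
def tokensToSecret(creds: Credentials, secretName: String): Secret = {
  val bytes = new ByteArrayOutputStream()
  creds.writeTokenStorageToStream(new DataOutputStream(bytes))
  new SecretBuilder()
    .withNewMetadata().withName(secretName).endMetadata()
    .addToData("hadoop-token-file",
               Base64.getEncoder.encodeToString(bytes.toByteArray))
    .build()
}
```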
Problem: Tokens expire
Solved: Refresh tokens with a K8s microservice
[Diagram: a Refresh Pod (10.0.0.8, “ADMIT SERVER”) renews the token stored in Secret 1 against the Kerberized HDFS while the driver and executor pods keep using it]
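What the refresh microservice boils down to, as a sketch (the interval and the Secret write-back are illustrative; Token.renew is the stock Hadoop call to the namenode):

```scala
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.token.Token

val scheduler = Executors.newSingleThreadScheduledExecutor()

// Periodically renew every token the service holds and write the
// renewed tokens back to the Secret so running pods pick them up.
def scheduleRenewal(tokens: Seq[Token[_]], conf: Configuration): Unit =
  scheduler.scheduleAtFixedRate(new Runnable {
    def run(): Unit = tokens.foreach { token =>
      val newExpiry = token.renew(conf)  // talks to the namenode
      // ...update the K8s Secret with the renewed token here
      println(s"renewed until $newExpiry")
    }
  }, 0, 8, TimeUnit.HOURS)               // interval illustrative
```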
Problem: Secrets can be exposed to others
Solved: Keep the Secret to yourself with K8s RBAC
[Diagram: Job 1 (Driver Pod 10.0.0.2, Executor Pods 10.0.0.3 and 10.0.1.2) and Job 2 (Driver Pod 10.0.0.4, Executor Pods 10.0.0.5 and 10.0.1.3) each mount their own Secret 1; RBAC keeps one client’s pods from reading the other job’s Secret]
Access Control of Secrets
HDFS DTs and the renewal service keytab live in Secrets:

                       | Job owner    | Job owner’s | Other human | Other users’ | Renew service
                       | (human user) | pods        | users       | pods         | pods
DT secret              | create       | get         | none        | none         | get, update
Renewal keytab secret  | none         | none        | none        | none         | get

Admin can restrict access by (see the RBAC sketch below):
1. Per-user AC, manual
2. Per-group AC, manual
3. Per-user AC (automated, upcoming)
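A sketch of one cell of that table as a K8s RBAC Role, again with fabric8 builders and illustrative names (this grants pods read-only access to a single job's DT secret):

```scala
import io.fabric8.kubernetes.api.model.rbac.{Role, RoleBuilder}

// "Job owner's pods: get" -- scoped to exactly one named Secret.
val dtSecretRole: Role = new RoleBuilder()
  .withNewMetadata().withName("job1-dt-secret-reader").endMetadata()
  .addNewRule()
    .withApiGroups("")                  // core API group
    .withResources("secrets")
    .withResourceNames("job1-hdfs-dt")  // only this job's DT secret
    .withVerbs("get")                   // pods may read, not update
  .endRule()
  .build()
```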
Demo: Spark on K8s Accessing Secure HDFS
Running a Spark Job on Kubernetes accessing Secure HDFS
Single-node, pseudo-distributed, Kerberized Hadoop cluster
https://github.com/ifilonenko/hadoop-kerberos-helm
Spark Submit with Kerberos Configs
https://github.com/ifilonenko/secure-hdfs-test
Pre-defined Secrets
https://asciinema.org/a/6YzzS6cP392iO3PnVo07yhHYk
Agenda
1. Kubernetes intro
2. Big Data on Kubernetes
3. Demo: Spark on K8s accessing secure HDFS
4. Secure HDFS deep dive
5. HDFS running on K8s
6. Data locality deep dive
[Diagram: HDFS running directly on the hosts -- Namenode on node A (196.0.0.5), Datanode 1 on node B (196.0.0.6)]
[Diagram: the same nodes also hosting Spark -- Driver Pod (10.0.0.2) and Executor Pods (10.0.0.3, 10.0.1.2) next to the Namenode and Datanodes 1-2]
Run HDFS itself on Kubernetes
github.com/apache-spark-on-k8s/kubernetes-HDFS
[Diagram: Spark (Client, Driver Pod 10.0.0.2, Executor Pods 10.0.0.3 and 10.0.1.2) and HDFS pods co-located on nodes A (196.0.0.5), B (196.0.0.6), and C (196.0.0.7). Datanode Pods 1-3 run as a DaemonSet with HostPath volumes. Namenode Pods 1 (active) and 2 (standby) run as a StatefulSet with Persistent volumes 1-2 and pod anti-affinity. ZK Pods 1-3 (Zookeeper) and JN Pods 1-3 (Journal node) back namenode HA. Everything is secured by Kerberos]
Locality deep dive
Send compute to data
● Node locality
● Rack locality
● Where to launch executors
Spark on K8s had to be fixed:
[Diagram: an executor reading from a datanode on its own node is FAST; reading from a datanode on another node is SLOW]
Problem: Node locality broken with virtual pod IPs
[Diagram: the Driver (10.0.0.2) sends Read /fileA to Executor Pod 1 (10.0.0.3, node A, 196.0.0.5) and Read /fileB to Executor Pod 2 (10.0.1.2, node B, 196.0.0.6); the Namenode Pod places /fileA on Datanode Pod 1 and /fileB on Datanode Pod 2]
Node locality wants: location of /fileA == location of Executor 1.
Broken: (/fileA → Datanode 1 → 196.0.0.5) != (Executor 1 → 10.0.0.3)
Solved: Node locality
Fixed: (/fileA → Datanode 1 → 196.0.0.5) == (Executor 1 → 10.0.0.3 → 196.0.0.5)
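The fix, reduced to a sketch: translate pod IPs to node IPs before comparing against HDFS block locations (the pod-to-node map would be fed from the K8s API; values mirror the diagram):

```scala
// Pod IP -> IP of the node hosting that pod (illustrative values).
val podToNode = Map("10.0.0.3" -> "196.0.0.5", "10.0.1.2" -> "196.0.0.6")

// A block is node-local if its datanode's IP matches the executor's node IP.
def isNodeLocal(blockLocationIp: String, executorPodIp: String): Boolean =
  podToNode.get(executorPodIp).contains(blockLocationIp)

assert(isNodeLocal("196.0.0.5", "10.0.0.3"))  // /fileA is local to Executor 1
```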
Problem: Rack locality broken with virtual pod IPs
[Diagram: the Driver (10.0.0.2) sends Read /fileA to Executor Pod 1 (10.0.1.2, node B, 196.0.0.6, Rack 1) and Read /fileB to Executor Pod 2 (10.0.2.2, node C, 196.0.1.5, Rack 2); /fileA lives on Datanode Pod 1 (node A, 196.0.0.5, Rack 1), /fileB on Datanode Pod 3 (node C); the cross-rack read is SLOW]
Rack locality wants: rack of /fileA == rack of Executor 1.
Broken: (/fileA → Datanode 1 → 196.0.0.5 → Rack 1) != (Executor 1 → 10.0.1.2)
Solved: Rack locality
Fixed: (/fileA → Datanode 1 → 196.0.0.5 → Rack 1) == (Executor 1 → 10.0.1.2 → 196.0.0.6 → Rack 1)
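The rack fix composes the same pod-to-node translation with the cluster's topology mapping (mappings mirror the diagram):

```scala
// Resolve pod IP -> node IP first, then node IP -> rack.
val podToNode  = Map("10.0.1.2" -> "196.0.0.6", "10.0.2.2" -> "196.0.1.5")
val nodeToRack = Map("196.0.0.5" -> "Rack 1", "196.0.0.6" -> "Rack 1",
                     "196.0.1.5" -> "Rack 2")

def rackOfExecutor(podIp: String): Option[String] =
  podToNode.get(podIp).flatMap(nodeToRack.get)

assert(rackOfExecutor("10.0.1.2").contains("Rack 1"))  // same rack as /fileA
```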
Solved: Node preference
“Hey K8s, I’d like node A much more for my executors” -- node affinity
[Diagram: the Driver (10.0.0.2) asks for executors; K8s honors the node-affinity preference and places Executor Pod 1 (10.0.0.3) and Executor Pod 2 (10.0.0.4) on node A (196.0.0.5), home of Datanode Pod 1 and /fileA, instead of node B (196.0.0.6, Datanode Pod 2, /fileB)]
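One way to derive that preference, as a sketch: weight each node by its share of the job's HDFS blocks, then emit the weights as preferred (not required) node-affinity terms on the executor pods (block locations are illustrative):

```scala
// One entry per block replica, keyed by the hosting node's IP.
val blockLocations = Seq("196.0.0.5", "196.0.0.5", "196.0.0.6")

// Scale each node's share of the blocks into K8s' 1-100 affinity weight range.
val weights: Map[String, Int] = blockLocations
  .groupBy(identity)
  .map { case (node, hits) => node -> (hits.size * 100 / blockLocations.size) }

// weights == Map("196.0.0.5" -> 66, "196.0.0.6" -> 33); these become
// preferredDuringSchedulingIgnoredDuringExecution terms on executor pods.
```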
Rescued data locality!
● with the data locality fix: duration 10 minutes
● without the data locality fix: duration 25 minutes
Thank you!
Ilan Filonenko (ifilonenko@bloomberg.net)
