So this is interactive analysis 1.0, which is spark-shell. I think most of you are familiar with spark-shell if you have some experience with Spark. Spark-shell provides a nice interactive environment for writing Spark programs, but it lacks many features that interactive analysis needs, such as visualization and code management.
This is where Zeppelin comes in. Zeppelin brings lots of nice features to interactive analysis, such as visualization, collaboration, and so on. We can call it interactive analysis 2.0. By default Zeppelin uses the native Spark interpreter, which has some limitations, such as not being able to run Spark in yarn-cluster mode. That means your driver runs on your client machine, which may put heavy pressure on it. Besides that, you can't share a SparkContext across multiple Zeppelin instances.
So now we leverage Livy as the Spark interpreter of Zeppelin. With Livy we can run the Spark interpreter in yarn-cluster mode, and we can also share a SparkContext across multiple Zeppelin instances. We call Zeppelin + Livy interactive analysis 3.0.
So what is Livy? Livy is an open source REST interface for interacting with Spark from anywhere.
Here's a diagram of the overall architecture of Livy. There are three layers: on the far left is the Livy client, and in the middle is the Livy server. The Livy client communicates with the Livy server through the REST API, which means they communicate over HTTP. The Livy client can ask the Livy server to do lots of things, such as launching a Spark application, submitting a Spark job, pulling job status, and even submitting a single piece of Spark code. There are two kinds of Spark sessions that Livy supports now: the Spark interactive session and the Spark batch session. Since today's talk is about interactive analysis, we will focus on the Spark interactive session. In Livy 0.1 the communication between the Livy server and the Spark session was HTTP, while in the latest code it has been changed to RPC.
So overall, Livy is a central place for launching Spark jobs. It brings several benefits.
First, it reduces the pressure on the client machine: nothing runs on the client machine except REST API calls. Second, it makes job submission and monitoring easy. Without Livy you have to install Spark on your client machine and use spark-submit to submit Spark jobs, while with Livy you just call the REST API. Third, you can customize job scheduling: since all job submission goes through the Livy server, the Livy server can do the scheduling. (This feature is not implemented yet, but it is possible.)
Now let's talk about how Livy works for interactive sessions.
First we will talk about how Livy creates a session. Before you submit any piece of code, you need to create a session.
Here we use the curl command to invoke the REST API. This is a POST request; we specify the kind as spark (it can also be pyspark or sparkr), and we also need to specify the URL of the REST API. And this is the response we get. The response contains the state of the session, which here is starting, and the proxyUser, which is null.
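A session-creation request of this shape looks roughly as follows (the host and port are assumptions; 8998 is Livy's default port, and the response is abridged):

```shell
# Create a Spark interactive session (kind can also be pyspark or sparkr).
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"kind": "spark"}' \
  http://localhost:8998/sessions
# Response (abridged):
# {"id":0,"state":"starting","kind":"spark","proxyUser":null,"log":[]}
```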
Now let's see how that request is routed. First the Livy client sends the request to the Livy server. Then the Livy server launches the session. After the Spark session is created, it sends its address back to the Livy server, so that a connection can be established between the Livy server and the Spark session. Finally the Livy server sends the session status back to the Livy client.
Now let's see how Livy executes code.
Here's the request we send. It contains the code that we want to execute, and we also need to specify the REST API URL. And here's the response, which contains the statement id, state, and output. Notice that the output is null, because this piece of code won't finish in a short time, but we can get the output later by sending another request to pull the statement status.
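Submitting a statement and then polling it looks roughly like this (the host, port, session id, and the Scala snippet are illustrative, and the responses are abridged):

```shell
# Submit a statement to session 0.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"code": "sc.parallelize(1 to 10).count()"}' \
  http://localhost:8998/sessions/0/statements
# {"id":0,"state":"running","output":null}

# Poll the statement until the output becomes available.
curl http://localhost:8998/sessions/0/statements/0
```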
Now let’s see how this request is routed
First the Livy client sends the request to the Livy server. The Livy server forwards the request to its Spark session. The Spark session executes the code and sends the output back to the Livy server. Finally the Livy server sends the output back to the Livy client.
Now let's talk about SparkContext sharing.
Clients don't own the Spark sessions; all Spark sessions are launched by the Livy server. That is what makes SparkContext sharing possible.
Here we can see that client-1 and client-2 use the same Spark session (session-1), while client-3 uses its own session (session-2). When a client interacts with the Livy server, it needs to specify the session id, so as long as two clients specify the same session id, they are using the same SparkContext. Of course this is for non-secure mode; it is more complicated in secure mode.
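In terms of the REST API, sharing comes down to targeting the same session id in the URL (host, port, session ids, and the Scala snippets below are illustrative):

```shell
# client-1 defines an RDD in session 1.
curl -X POST -H "Content-Type: application/json" \
  -d '{"code": "val shared = sc.parallelize(1 to 100)"}' \
  http://localhost:8998/sessions/1/statements

# client-2 targets the same session id, so it sees client-1's RDD.
curl -X POST -H "Content-Type: application/json" \
  -d '{"code": "shared.count()"}' \
  http://localhost:8998/sessions/1/statements

# client-3 targets a different session id and gets a separate SparkContext.
curl -X POST -H "Content-Type: application/json" \
  -d '{"code": "sc.applicationId"}' \
  http://localhost:8998/sessions/2/statements
```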
Now let’s talk about the security.
There are mainly three security problems we need to solve.
First, we need to make sure that only authorized users can launch Spark sessions; we don't want everyone to be able to launch a Spark session through the Livy server. Second, each user should only be able to access their own sessions. Third, only the Livy server should be able to submit jobs to a Spark session.
To solve these three problems we use several techniques: SPNEGO, impersonation, and a shared secret. I will talk about them one by one.
SPNEGO is used between the Livy client and the Livy server; it makes sure that only authorized users can launch Spark sessions or submit code. Impersonation is used to make sure each user can only access their own sessions. Without impersonation, all Spark sessions are launched as the user who launched the Livy server process; with impersonation, each Spark session is launched as the client's user. The shared secret is used to protect the communication between the Livy server and the Spark session; only the Livy server and the Spark session know the shared secret.
First let's talk about SPNEGO.
SPNEGO makes sure that only authorized users can launch Spark sessions or submit code to the Livy server.
The full name of SPNEGO is Simple and Protected GSSAPI Negotiation Mechanism. It is a GSSAPI "pseudo mechanism" used by client-server software to negotiate the choice of security technology. So it is pluggable with respect to the underlying security technology, but most often it is used with Kerberos.
Now let's see how that works. First the client sends a request to the server. The server responds with status code 401, which means unauthorized. Then the client sends the request to the server again, but this time it includes the Kerberos service ticket information in the request. Finally the server authorizes the user with the ticket info and responds with the content of the page.
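With SPNEGO enabled on the server, curl can drive this 401/Negotiate handshake itself using its `--negotiate` option, provided the client already holds a Kerberos ticket (the principal and hostname below are illustrative):

```shell
# Obtain a Kerberos ticket first.
kinit alice@EXAMPLE.COM

# curl handles the 401 challenge and resends the request with the
# Negotiate (Kerberos) token; "-u :" tells curl to take the identity
# from the ticket cache rather than a password.
curl --negotiate -u : \
  http://livy-server.example.com:8998/sessions
# Without a valid ticket, the server keeps replying 401 Unauthorized.
```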
The next thing is impersonation
We want to protect each user’s session.
We don't want user Alice to access user Bob's sessions, for security reasons. The Livy server process is launched by the superuser livy. Without impersonation, all Spark sessions are launched as user livy; with impersonation, each Spark session can be launched as the client's user.
This is very similar to impersonation in HiveServer2. To enable impersonation, we need to make the following configuration changes in core-site.xml.
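These are the standard Hadoop proxy-user settings; a typical sketch, assuming the Livy server runs as user livy (the wildcards are illustrative and should be narrowed in production):

```xml
<!-- core-site.xml: allow the user "livy" to impersonate other users -->
<property>
  <name>hadoop.proxyuser.livy.groups</name>
  <value>*</value> <!-- groups whose members livy may impersonate -->
</property>
<property>
  <name>hadoop.proxyuser.livy.hosts</name>
  <value>*</value> <!-- hosts from which livy may impersonate -->
</property>
```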
The next thing we will talk about is the shared secret.
Once the Spark session is started, it can accept requests from outside, but we don't want anyone to connect to the Spark session except the Livy server.
So here we use a shared secret to protect the communication between the Livy server and the Spark session. Only the Livy server and the Spark session know the shared secret.
Now let’s see how that works.
The Livy server generates a secret key. The Livy server passes the secret key to the Spark session when launching it. Then they use the secret key to communicate with each other.
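This is not Livy's actual wire protocol, but the idea can be sketched with an HMAC: the server generates a random secret, hands it to the session at launch time, and both sides use it to authenticate every message.

```shell
# Illustrative sketch only (not Livy's real protocol): authenticate a
# message with an HMAC keyed by a shared secret.
secret=$(openssl rand -hex 32)        # generated by the Livy server

msg='run: sc.parallelize(1 to 10).count()'

# Sender (Livy server) attaches a tag computed from the shared secret:
tag=$(printf '%s' "$msg" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

# Receiver (Spark session) recomputes the tag with its copy of the secret
# and only accepts the message if the tags match:
check=$(printf '%s' "$msg" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
if [ "$tag" = "$check" ]; then echo "authentic"; else echo "rejected"; fi
```

A party that does not know the secret cannot produce a matching tag, which is exactly why only the Livy server can talk to the session.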
Apache Zeppelin + Livy: Bringing Multi-Tenancy to Interactive Data Analysis