You probably came to hear me talk about security, but security is boring. So instead, I’m going to talk about hungry hungry hippos. If you haven’t played hungry hungry hippos before, it’s a fairly simple game where four players compete to collect the most marbles. Now hungry hungry hippos is usually played with all white marbles. This makes sense because all players can collect all marbles. But this is boring, so today I’m going to shake things up and talk about a variant using multiple colors.
That’s more like it. So, the reason I want so much variety is that I want to restrict which hippos can consume which colors of marbles. Why do I want this additional restriction? Well, I’m the one giving the presentation, so I get to just declare it by fiat.
Although seriously, do you want any of these yahoos to be able to collect any color of marble? Take the guy in green; I never trust a man with a beard. Now that we’ve established that we want to limit access to certain colors of marbles, we can brainstorm a couple of different ways to implement this.
So let’s start by sorting the marbles into groups of the same color. We can then control access to these groups of marbles by setting up magical boundaries that only specific hippos can actually penetrate. This system works well, but it’s not very granular. We have to pre-group the marbles that each hippo can access into its own magic box.
I’d much rather have magic marbles and magic hippos. Instead of having to first put the same-colored marbles into the same magic box, now I can mix all the marbles together. Thanks to magic, each hippo will only grab the colors that they’re allowed to collect. Any other marbles will pass right through them. This saves me a lot of time, and it also allows me to invite more players. Now anyone can play, and they’ll only ever collect the right kind of marble.
This is convenient because I’m very lazy. I hate to have to pre-sort things. It’s much easier for me to grab a handful of marbles and just throw them on the board and let the magic sort everything out.
At this point you may be asking: what on earth does any of this have to do with Hadoop? Well, you’re only asking that if you didn’t just walk out while I was up here rambling about hippos, marbles, and magic. If you think about how the usage of Hadoop has evolved, it started very much the same way as our hungry hungry hippos game. All of the marbles were the same color and any player could collect any marble. This was great when we were deploying Hadoop for a small set of users and we trusted every user with all of the data. But when something is useful, it inevitably leads to more adoption.
As more and more people show up to use our cluster, we have to think more and more carefully about who has access to what data. Before Hadoop had strong security controls, it implemented advisory authorization at the file and directory level. I say advisory because while permissions existed, Hadoop initially didn’t require that you prove you are who you say you are. This helped prevent mistakes, but it didn’t stop a malicious user.
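To make that concrete, HDFS permissions look and act like classic POSIX permissions. Here’s an illustrative sketch; the paths, users, and groups are made up:

```bash
# HDFS mirrors the POSIX user/group/other model at the file and
# directory level. Paths and accounts here are hypothetical.
hadoop fs -mkdir -p /data/finance
hadoop fs -chown alice:finance /data/finance
hadoop fs -chmod 750 /data/finance   # alice: rwx, finance group: r-x, others: none
# Without authentication, the username is whatever the client claims
# to be -- which is why these permissions were only "advisory".
```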
Hadoop solved this problem by adding support for Kerberos-based authentication. Now each user was given strong credentials that they could use to gain access to the system. Kerberos has become so synonymous with Hadoop security that 90% of the time, if someone says they configured or enabled Hadoop security, they’re probably talking about turning on Kerberos authentication.
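For reference, the switch itself mostly comes down to a couple of core-site.xml properties; here’s an illustrative snippet (the pile of per-daemon principal and keytab plumbing is glossed over):

```xml
<!-- core-site.xml: switch from "simple" (trust whatever username the
     client claims) to Kerberos. Keytab/principal setup not shown. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

After that, users grab a ticket with kinit before they touch the cluster.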
This is great. You could now check that each user was who they said they were. You still have a bit of a problem in that permissions only go down to the file level. That means that if I want to control access to particular types of data, I have to pull all of the protected data into its own files and set permissions accordingly. This is especially annoying if I’m using a query language like Impala, Hive, or Pig. I’m now trying to access tables of records, but I have to manage my security controls with very blunt tools.
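To see why it’s blunt, here’s a hypothetical sketch of what that pre-sorting looks like in practice:

```bash
# Hypothetical layout: with only file-level controls, every distinct
# access pattern needs its own physical copy or partition of the data.
hadoop fs -chmod 770 /warehouse/customers_full     # trusted analysts only
hadoop fs -chmod 755 /warehouse/customers_no_pii   # everyone else
# Want a new slice, say "all columns except SSN, US rows only"?
# That's another ETL job and another directory to keep in sync.
```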
Before I talk about how we’ve solved the granularity problem, I want to take a minute to talk about how Hadoop, and in particular MapReduce, implements process-based isolation. Before Hadoop had security, every job was executed as the hadoop or mapred user. This meant that even though you accessed files by default using the identity of the user that submitted the job, there was nothing that prevented you from peeking over your shoulder and looking at some of the output from another job, since all of the intermediate data was protected only by OS permissions and all jobs ran as the same OS user. Hadoop solved this problem by adding the ability to su to the user that submitted the job before executing the job process. This is very useful from a security perspective, but I bring it up because it’s something that often trips up new administrators deploying Hadoop for the first time. The TLDR version is that you must provision user accounts on every node in your cluster for any user that can run a MapReduce job. This is often done using LDAP or Active Directory so you don’t have to manage all the accounts by hand.
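A quick illustrative sanity check (the hostnames and user are made up) is to confirm the account resolves on every node before a user’s first job:

```bash
# Each worker node needs an OS account for every job-submitting user,
# typically provided by LDAP or Active Directory rather than by hand.
for host in node1 node2 node3; do
  ssh "$host" id alice >/dev/null || echo "missing account for alice on $host"
done
```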
This is essentially why Hadoop security is tougher than security in most other systems. Because the system starts from the ability to run arbitrary code, you need to set up multiple layers of protection to have a fully secured system.
Ok, so let’s circle back to why we care about controlling data access at a finer granularity. It all comes down to multitenancy. It’s cool to have a 100-node Hadoop cluster that serves all of the users in your department, but it’s even cooler to have a 1000-node Hadoop cluster that serves all of the users in your company. Because we want to share these large clusters to get good economies of scale, we need to come up with more creative ways to control access to data. Again, we could keep sorting data into files and directories and implementing all of our controls at the file level, but that gets old fast.
One of the first efforts towards adding fine-grained access control was a project called Apache Accumulo. Accumulo is similar to HBase in that it’s also based on Google’s BigTable design. One of the places Accumulo departed from the source material was to add an additional element to the key that provides a security label at the cell level. This is very useful because it provides very fine-grained access. Unfortunately, the scanning speed of Accumulo and HBase isn’t as fast as scanning HDFS directly, so they’re less ideal for large batch workloads (hint: that’s what MapReduce was designed for).
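For flavor, here’s roughly what cell-level labels look like from the Accumulo shell; treat this as a sketch, since the table, users, and labels are all invented:

```
# Accumulo shell sketch: the visibility label travels in the key itself.
root@instance> createtable patients
root@instance patients> insert row1 vitals heart_rate 72 -l "doctor|nurse"
root@instance patients> insert row1 billing ssn 123-45-6789 -l "billing&admin"
root@instance patients> setauths -u nancy -s nurse
# A scan as nancy now returns the vitals cell but not the billing cell.
```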
HBase initially added security at the table and column level, but after seeing how much fun Accumulo was having, it added a cell-level option in its latest release.
Not wanting to be left out of the party, the community has done some work to provide fine-grained access control for data stored in HDFS. The two most popular ways of doing that today are with Apache Sentry (Incubating) and Apache Hive’s new next-generation authorization features. Sentry works by plugging into existing projects and adding RBAC from the outside. This is nice because it gives you a common way of controlling access to data across different file formats and processing engines. Today, Sentry supports Hive, Impala, and Apache Solr. Technically, Sentry only provides access control down to the view level, but you can simulate column- and even row-level access by creating views that have a subset of columns or that filter rows.
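Here’s roughly what that simulation looks like in HiveQL; the table and columns are invented for the example:

```sql
-- Expose a column subset and a row filter as a view, then point your
-- Sentry rules at the view instead of the underlying table.
CREATE VIEW customers_support AS
SELECT name, email          -- column-level: omit ssn and credit_card
FROM customers
WHERE region = 'US';        -- row-level: support only sees US records
```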
The downside to these methods is that today, you only get the access control if you’re accessing your data through one of the supported engines. You can’t have granular access through both Hive and regular MapReduce, for example.
Because setting up Hadoop security is still somewhat complex, a recent trend has been to try to get all of your security at the perimeter. This isn’t necessarily a new idea. Folks have been using Hue for a number of years to provide limited access at the boundary. What’s different is that more and more users are looking for perimeter-controlled API access, not just a user-facing GUI. The most popular dedicated project in this area is Apache Knox. Knox is nice in that it lets you grant access to select users through a proxy service. The downside is that Knox implements its own REST APIs, so you can’t always just plug in a standard Hadoop client and point it at Knox. The other limitation of perimeter security is that if you’re allowed to upload jar files and submit jobs, then you still need all of the other security features enabled to prevent jobs from running amok.
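To give you a flavor of perimeter API access, a WebHDFS call through Knox is just an HTTPS request to the gateway; the host and topology name here are made up:

```bash
# Knox authenticates the caller at the perimeter, then proxies the
# request into the cluster on their behalf.
curl -iku alice:password \
  'https://knox.example.com:8443/gateway/default/webhdfs/v1/data?op=LISTSTATUS'
```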
Let me take a minute to talk about who you trust in a traditional Hadoop deployment. By default, all of the data in Hadoop is stored in the clear. That means that you have to trust the system administrators that have root access to the cluster not to go poking around in data they shouldn’t. You also trust that your network is secure and that malicious users can’t capture or sniff traffic that wasn’t meant for them. These assumptions are fine for a large number of users, but they don’t satisfy the most paranoid among us.
For these users, Hadoop supports encryption in a number of different places. Today, it is largely available for data and metadata that goes over the wire. You can encrypt Hadoop’s RPC protocol, the data block streaming protocol, and the MapReduce shuffle phase. This encryption is implemented with SASL for RPC and block streaming, which limits the encryption codec options to a certain degree. Shuffle encryption is implemented with SSL, so it supports whatever cipher suites are available in Java’s SSL implementation.
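Here’s an illustrative set of properties for the three wire-encryption knobs I just mentioned (the surrounding keystore and SASL setup is glossed over):

```xml
<!-- core-site.xml: "privacy" means SASL authenticates AND encrypts RPC -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
<!-- hdfs-site.xml: encrypt the block streaming protocol -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<!-- mapred-site.xml: SSL for the MapReduce shuffle phase -->
<property>
  <name>mapreduce.shuffle.ssl.enabled</name>
  <value>true</value>
</property>
```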
Over the wire is great and all, but what about disk-based encryption? Today, Hadoop doesn’t support native disk-level encryption. Historically, users that needed to encrypt their data on disk used third-party tools that would encrypt the data volumes. HDFS-6134 tracks the upstream work to add native encryption at rest to HDFS, which should come out in a future release.
So, what does the future hold? Well, if you’ve been following along, you’ve seen a number of themes that are pushing Hadoop’s security boundaries. Hadoop clusters are getting larger and serving more diverse user bases. Increasingly, deployments no longer trust the network or the administrators running the cluster. Folks are accessing ever-increasing volumes of data with more and more diverse processing engines.
This generally brings us back to granularity. HBase has already added cell-level security; will HDFS follow suit? Obviously it’s harder for a file system to protect objects that are contained within the files themselves, so more creative solutions will likely be necessary. HDFS is already improving its flexibility in protecting files and directories. Hadoop 2.4.0 added support for file system ACLs. This means you’re no longer limited to the POSIX-style permissions that have been around since Hadoop 0.16.
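As a sketch (paths and principals invented), ACLs let you grant access beyond the single owner/group/other triple:

```bash
# Requires dfs.namenode.acls.enabled=true in hdfs-site.xml.
hdfs dfs -setfacl -m user:bob:r-x /data/finance        # one extra user
hdfs dfs -setfacl -m group:audit:r-- /data/finance/ledger.csv
hdfs dfs -getfacl /data/finance                        # inspect the entries
```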
I’m personally excited for the future of Sentry. Sentry already supports more projects than any of the other fine-grained access control solutions out there. You can expect to see it integrate with more and more processing engines in the future. There is already extensive work being done to add a database-backed store to Sentry (SENTRY-37), which will be more sustainable than the configuration file used today. That work will also enable the ability to use SQL-based GRANT/REVOKE commands to update permissions and roles.
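Once that lands, managing permissions should feel like standard SQL. A speculative sketch of the workflow, with invented table and role names:

```sql
-- Hypothetical example of the GRANT/REVOKE flow SENTRY-37 enables.
CREATE ROLE analyst;
GRANT SELECT ON TABLE sales TO ROLE analyst;
GRANT ROLE analyst TO GROUP analysts;  -- Sentry maps OS/LDAP groups to roles
REVOKE SELECT ON TABLE sales FROM ROLE analyst;
```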
In keeping with the granularity theme, there is also a proposal to add true column-level access control to Sentry. This will eliminate the need to create views in order to simulate column-level access.
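Purely as speculation about what that might look like (the syntax and names are invented):

```sql
-- Column-level grant without an intermediate view.
GRANT SELECT(name, email) ON TABLE customers TO ROLE support;
```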
I already spoke a bit about what encryption features are available today and what’s in the pipeline. A big proponent of the additional encryption technologies is Intel, through its Project Rhino initiative. Through Rhino, a number of encryption and other security-related enhancements have been proposed and worked on in upstream Hadoop. One of the biggest benefits of the work being completed under Rhino is support for accelerated encryption codecs when running on certain Intel chips. This will enable large-scale deployments of encryption without huge performance penalties.
I want to leave you with this final thought. All of the work that has been done to add security to Hadoop, and all of the work that’s coming down the pipeline, is there to enhance your ability to share data. This is the key idea behind Hadoop. You can argue about which processing engine to use or even which file system, but Hadoop as an idea is bigger than all of them. Hadoop, for the first time, gives us a platform where you can store all of your data, access it in whatever way makes the most sense, and do so while sharing huge clusters among thousands of users. Security is really at the heart of this. All of the great capabilities of Hadoop are for naught if it won’t let you securely share and access your data.