The Future of Apache Hadoop Security


As the volume of data and the number of applications moving to Apache Hadoop have increased, so has the need to secure that data and those applications. In this presentation, we’ll take a brief look at where Hadoop security is today and then peer into the future to see where Hadoop security is headed. Along the way, we’ll visit new projects such as Apache Sentry (incubating) and Apache Knox (incubating) as well as initiatives such as Project Rhino. We’ll see how all of this activity is making good on the promise of Hadoop as the future of data management.

  • You probably came to hear me talk about security, but security is boring. So instead, I’m going to talk about Hungry Hungry Hippos. If you haven’t played Hungry Hungry Hippos before, it’s a fairly simple game where four players compete to collect the most marbles. Now Hungry Hungry Hippos is usually played with all white marbles. This makes sense because all players can collect all marbles. But I’m going to change the rules.

  • BOOM, multiple colors. That’s more like it. Now that I have all this great variety, I want to restrict which hippos can consume which colors of marbles. Why do I want this new rule? Well, it doesn’t matter because I’m the one giving the talk so I get to make the rules.

  • Although seriously, do you want any of these yahoos to be able to collect any color of marble? Take the guy in the green, I never trust a man with a beard. Now that we’ve established that we want to limit access to certain color marbles, let’s brainstorm a couple of different ways to implement this.

  • Let’s start by sorting the marbles into groups with the same color. We can then control access to these groups of marbles by creating magical boundaries that only specific hippos can penetrate. This system works well, but it’s not very granular. We have to pre-group the marbles that each hippo can access into its own magic box.

  • If some magic is good, more magic is even better! I’d much rather have magic marbles and magic hippos. Instead of having to first put the same colored marbles into the same magic box, now I can mix all the marbles together. Thanks to magic, each hippo will only grab the colors that they’re allowed. Any other marbles will pass right through them. This saves me a lot of time and it also allows me to invite more players. Now anyone can play and they’ll only ever collect the right kind of marble.

  • This is convenient because I’m very lazy. I’d hate to have to pre-sort things. It’s much easier for me to grab a handful of marbles and just throw them on the board and let the magic sort it all out.

  • At this point you may be asking what on earth does any of this have to do with Hadoop? Well, you might be asking that assuming you didn’t just walk out while I was up here rambling about hippos, marbles, and magic. If you think about how the usage of Hadoop has evolved, it started very much the same way as our hungry hungry hippos game. All of the marbles were the same color and any player could collect any marble. This was great when we were deploying Hadoop for a small set of users and we trusted every user with all of the data. But when something is useful, it inevitably leads to more adoption.

  • As more and more people show up to use our cluster, we have to think more and more carefully about who has access to what data. Before Hadoop had strong security controls, it implemented advisory authorization at the file and directory level. I say advisory because while permissions existed, Hadoop initially didn’t require that you prove you are who you say you are. This helped prevent mistakes, but didn’t stop malicious users.
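    Those file- and directory-level permissions use the same owner/group/mode model as POSIX. A sketch of what managing them looks like (the paths, users, and groups below are made up for illustration):

```
# Hypothetical paths and names; the familiar chown/chmod/ls verbs
# apply to HDFS files and directories just like on a local file system.
hadoop fs -chown alice:analysts /data/reports
hadoop fs -chmod 640 /data/reports/q3.csv   # owner rw, group r, others none
hadoop fs -ls /data/reports                 # shows owner, group, and mode bits
```

    Without authentication, though, nothing stopped a user from simply claiming to be alice, which is why these permissions were only advisory.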

  • Hadoop solved this problem by adding support for Kerberos-based authentication. Now each user was given strong credentials that they could use to gain access to the system. Kerberos has become so synonymous with Hadoop security that 90% of the time, if someone says they configured or enabled Hadoop security, they’re probably talking about turning on Kerberos authentication.
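    Concretely, “turning on Hadoop security” usually starts with a couple of core-site.xml properties. This is a minimal sketch; a real deployment also needs keytabs, principal mappings, and per-service configuration:

```xml
<!-- core-site.xml: switch from the default "simple" (trust-the-client)
     authentication to Kerberos, and enable service-level authorization. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```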

  • This is great. You could now check that each user was who they said they were. You still have a bit of a problem in that permissions only exist at the file level. That means that if I want to control access to particular types of data, I have to merge all of the protected data into its own files and set permissions accordingly. This is especially annoying if I’m using a query language like Impala, Hive, or Pig. I’m now trying to access tables of records, but I have to manage my security controls with very blunt tools.

  • Before I talk about how we’ve made progress on the granularity problem, I want to take a minute to talk about how Hadoop, and in particular MapReduce, implements process-based isolation. Before Hadoop had security, every job was executed as the hadoop user. This meant that even though you accessed files by default using the user identity that submitted the job, there was nothing that prevented you from peeking over your shoulder and looking at some of the output from another job, since the intermediate data was protected only by OS permissions and all jobs ran as the same OS user. Hadoop solved this problem by adding the ability to su to the user that submitted the job before executing the job process. This is very useful from a security perspective, but I bring it up because it’s something that often trips up new administrators deploying Hadoop for the first time.

    TL;DR: you must provision user accounts on every node in your cluster for any user that can run a MapReduce job. This is often done using LDAP or Active Directory so you don’t have to manage all the accounts by hand.

  • At its core, Hadoop is a system for executing arbitrary code over arbitrary data. Let me say that one more time: arbitrary code running over arbitrary data. This is why Hadoop security is tougher than in most other systems. The system starts with the ability to just run random code, so you need to set up multiple barriers of protection before you have a fully secured system.
  • Why do we care about controlling data access with finer granularity? It all comes down to multitenancy. It’s cool to have a 100-node Hadoop cluster that serves all of the users in your department, but it’s even cooler to have a 1000-node Hadoop cluster that serves all of the users in your company. Because we want to share these large clusters to get good economies of scale, we need to come up with more creative ways to control access to data. Again, we could keep sorting data into files and directories and implementing all of our controls at the file level, but that gets old fast.

  • One of the first efforts towards adding fine-grained access control was a project called Apache Accumulo. Accumulo is similar to HBase in that it’s also based on Google’s BigTable design. One of the places Accumulo departed from the source material was to add an additional element to the key that provides a security label at the cell level. This is very useful as it provides very fine-grained access. Unfortunately, the scanning speed of Accumulo and HBase isn’t as fast as scanning HDFS directly, so they’re less ideal for large batch workloads (hint: that’s what MapReduce was designed for).

    HBase initially added security at the table and column level, but after seeing how much fun Accumulo was having, they added a cell-level option in the latest release.
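    As a greatly simplified sketch of the idea behind cell-level labels: each cell carries its own label set, and a scan only returns cells the caller is authorized to see. Real Accumulo visibilities are boolean expressions like admin|(audit&pii); the table contents and label names below are invented:

```python
# Toy model of cell-level security labels. Here a cell is visible only if
# the scanner holds *every* label on that cell; real systems support richer
# boolean label expressions.

def visible(cell_labels, user_auths):
    """True if the user's authorizations cover all labels on the cell."""
    return cell_labels <= user_auths

# Each cell carries its own label set as part of the key.
table = [
    ({"public"}, "row1:name=alice"),
    ({"pii"}, "row1:ssn=123-45-6789"),
    ({"pii", "finance"}, "row1:salary=95000"),
]

def scan(user_auths):
    # Filtering happens at scan time, so one shared table serves many users.
    return [value for labels, value in table if visible(labels, user_auths)]

print(scan({"public"}))         # ['row1:name=alice']
print(scan({"public", "pii"}))  # adds the SSN cell
```

    The payoff is exactly the “magic marbles” rule from earlier: you can mix all the data together and let scan-time filtering sort it out per user.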

  • Not wanting to be left out of the party, the community has done some work to provide fine-grained access control to data stored in HDFS. The two most popular ways of doing that today are with Apache Sentry (Incubating) and Apache Hive’s new, next-generation authorization features. Sentry works by plugging into existing projects and adding role-based access control (RBAC) from the outside. This is nice because it gives you a common way of controlling access to data across different file formats and processing engines. Today, Sentry supports Hive, Impala, and Apache Solr. Sentry only provides access control down to the view level, but you can simulate column- and even row-level access by creating views that expose a subset of columns or that filter rows.

    The downside to these methods is that today, you only get the access control if you’re accessing your data through one of the supported engines. You can’t have granular access through both Hive and regular MapReduce.
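    For example, the view trick might look like this in HiveQL (the table, view, and column names are hypothetical): grant SELECT on the view rather than the underlying table, and users see only the allowed columns and rows.

```sql
-- Simulating column- and row-level access with a view: users granted
-- SELECT on customers_emea never touch the full customers table.
CREATE VIEW customers_emea AS
SELECT name, email            -- expose only non-sensitive columns
FROM customers
WHERE region = 'EMEA';        -- and only a subset of rows
```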

  • Because setting up Hadoop security is still somewhat complex, a recent trend has been to try to get all of your security at the perimeter. This isn’t necessarily a new idea. Folks have been using Hue for a number of years to provide limited access at the boundary. What’s different is that more and more users are looking for perimeter-controlled API access, not just a user-facing GUI. The most popular dedicated project in this area is Apache Knox. Knox is nice in that it lets you grant access to select users through a proxy service. The downside is that Knox implements its own REST APIs, so you can’t just take a standard Hadoop client and point it at Knox. The other limit of perimeter security is that if you’re allowed to upload jar files and submit jobs, then you still need all of the other security features enabled to prevent jobs from running amok.

  • This brings me to the topic of trust. By default, all of the data in Hadoop is stored in the clear. That means that you have to trust the system administrators that have root access to the cluster not to go poking around in data they shouldn’t. You also trust that your network is secure and that malicious users can’t capture or sniff traffic that wasn’t meant for them. These assumptions are fine for a large number of users, but they don’t satisfy the most paranoid among us.

  • For these users, Hadoop supports encryption in a number of different places. Today, it is largely available for data and metadata that goes over the wire. You can encrypt Hadoop’s RPC protocol, the data block streaming protocol, and the MapReduce shuffle. This encryption is implemented with SASL for RPC and block streaming, which limits the encryption codec options to a certain degree. Shuffle encryption is implemented with SSL, so it supports whatever cipher suites are available in Java’s SSL implementation.
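    The relevant knobs are ordinary Hadoop configuration properties. A sketch, with property names as of Hadoop 2.x; verify against your version’s documentation before relying on them:

```xml
<!-- core-site.xml: SASL protection level for Hadoop RPC
     (authentication, integrity, or privacy; privacy = encryption) -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
<!-- hdfs-site.xml: encrypt the data block streaming protocol -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<!-- mapred-site.xml: SSL for the MapReduce shuffle -->
<property>
  <name>mapreduce.shuffle.ssl.enabled</name>
  <value>true</value>
</property>
```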

  • Over the wire is great and all, but what about disk-based encryption? Today, Hadoop doesn’t support native disk-level encryption. Historically, users that needed to encrypt their data on disk used third-party tools that would encrypt the data volumes. Work is ongoing upstream under HDFS-6134 to add native encryption at rest to HDFS, and it should come out in a future release. You may have also seen the announcement that Cloudera has bought Gazzang and we’re supporting their Hadoop encryption solution on our stack.

    When HDFS-6134 is merged in, you’ll be able to do block-level encryption on HDFS for a pre-specified list of files and directories. The coolest part is that the key management is pluggable, so you’ll be able to use whichever system best meets your needs.
    Pluggable, scalable encryption for all your big data needs. Yes please.

  • So, what does the future hold? Well, if you’ve been following along, you’ve seen a number of themes pushing Hadoop’s security boundaries.
    Hadoop clusters are getting larger and serving more diverse user bases.
    Increasingly, deployments no longer trust the network or the administrators running the cluster.
    Folks are accessing ever-increasing volumes of data with more and more diverse processing engines.

  • This brings us back to granularity. HBase has already added cell-level security; will HDFS follow suit? Obviously it’s harder for a file system to protect objects that are contained within the files themselves, so more creative solutions will likely be necessary. HDFS is already increasing the flexibility for protecting files and directories. Hadoop 2.4.0 added support for file system access control lists (ACLs). This means you’re no longer limited to the POSIX-style permissions that have been around since Hadoop 0.16.
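    With ACLs you can layer extra named-user and named-group entries on top of the base owner/group/other permissions. A sketch (the paths and names are invented, and the NameNode needs dfs.namenode.acls.enabled set to true):

```
hdfs dfs -setfacl -m user:joey:r-x /data/sales    # add a per-user entry
hdfs dfs -setfacl -m group:audit:r-- /data/sales  # and a per-group entry
hdfs dfs -getfacl /data/sales                     # inspect the full ACL
```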

  • I’m personally excited for the future of Sentry. Sentry already supports more projects than any of the other fine-grained access control solutions out there. You can expect to see it integrate with more and more processing engines in the future. There is already extensive work being done to add a DB-based backend to Sentry (SENTRY-37), which will be more sustainable than the configuration file used today. That work will also enable the use of SQL-based GRANT/REVOKE commands to update permissions and roles.

    In keeping with the granularity theme, there is also a proposal to add true column-level access control to Sentry. This will eliminate the need to create views in order to simulate column-level access.

    But what I’m most excited about are plans to add a Sentry Record Service on top of HDFS. This would abstract away access to files on the file system behind a distributed service for accessing records. The service would be accessible from any processing engine, and any level of security, including cell-level, could be implemented. The design work is still early and subject to change, but the APIs are planned to be fully layered, so you could add new security operations like transparent encryption directly into the service.

    This is going to be huge.
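    The SQL-style GRANT/REVOKE syntax mentioned above would likely resemble standard SQL role statements. A sketch only; the exact grammar could change before release, and the role, table, and group names here are invented:

```sql
CREATE ROLE analyst;
GRANT SELECT ON TABLE sales TO ROLE analyst;
GRANT ROLE analyst TO GROUP analysts;
-- and later, when access should be withdrawn:
REVOKE SELECT ON TABLE sales FROM ROLE analyst;
```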

  • I already spoke a bit about what encryption features are available today and what’s in the pipeline. A big proponent of the additional encryption technologies is Intel, through its Project Rhino initiative. Through Rhino, a number of encryption and other security-related enhancements have been proposed and worked on in upstream Hadoop. One of the biggest benefits of the work being completed under Rhino is support for accelerated encryption codecs when running on certain Intel chips. This will enable large-scale deployments of encryption without huge performance penalties.

    For its part, Cloudera is doubling down on Rhino and investing heavily in the security of Apache Hadoop.

  • I want to leave you with this final thought. All of the work that has been done to add security to Hadoop and all of the work that’s coming down the pipeline is there to enhance your ability to share data. This is the key idea behind Hadoop. You can argue about which processing engine to use or even which file system, but Hadoop as an idea is bigger than all of them. Hadoop, for the first time, gives us a platform where you can store all of your data, access it in whatever way makes the most sense, and can do so while sharing huge clusters among thousands of users. Security is really at the heart of this. All of the great capabilities of Hadoop are for naught if it won’t let you securely share and access your data.

  • So, play a game and have some fun.

  • The Future of Apache Hadoop Security

    1. The Future of Apache Hadoop Security. Joey Echeverria, Chief Architect of Public Sector. ©2014 Cloudera, Inc. All rights reserved.
    2. Hadoop | had(y)ōōp | noun: a system for executing arbitrary binaries over arbitrary, often large datasets: we used Hadoop to count an exabyte of words.
    3. Joey Echeverria, @fwiffo