Putting Your Robots to Work V1.1


Updated version of the presentation given at AppSec USA 2012.

  • My name is Alex Smolen, this is Neil Matatall and this is Justin Collins. We're on Twitter's Product Security team and today we're going to talk to you about security automation at Twitter.
  • We want to talk about the future, and the direction that we're taking as a team to solve tomorrow's application security challenges. We're going to show some cool tech we've been working on, and talk about what we do, what we don't do, and why.
  • But before we do that, I want to walk through a little bit of Twitter's seven-year history.
  • Twitter isn't a particularly old company, but it is a company that's changed a lot. This is our first logo from 2006.
  • Twitter grew up very quickly and publicly, and had a lot of infrastructure challenges. [Fail Whale?] (Current public number is 400 million monthly uniques to twitter.com)
  • One of those challenges was security. Of the several high profile account compromises at Twitter, this is perhaps the most notorious, where an attacker was able to compromise the president's Twitter account through an exposed administrative interface. This one got some very serious recognition.
  • From our pals at the FTC, who formally ordered an effective information security program at Twitter for the next 20 years.
  • We all joined the team after the FTC order. Our first challenge was dealing with a large and rapidly changing code base that was under constant attack. A simple XSS attack could lead to an XSS worm, which was a big problem from the FTC's perspective, and from ours.
  • With the help of whitehats, we tracked down and fixed a lot of these bugs.
  • Whereas working security at Twitter used to involve a lot of emergencies, we've reached a point where deployed vulnerabilities are much rarer, and that's given us an opportunity to think about what we should be doing. As part of a growing engineering team and a proliferating code base, we've started to think more strategically about how to be more efficient. So during our last hack week, our team built the 1.0 of our security automation framework.
  • Before we started coding, we wanted to be able to describe a sort of worldview that we have, as a team, around security tools and automation, and use that to drive what we built. As this audience probably knows, there are a lot of tools, methodologies, and activities related to application security. Our philosophy, and the tools we've built or integrated to support it, can really be distilled to a few principles.
  • The first is that we believe writing secure code is not just a technical challenge, but also a social one, and tools should be built to support and enhance existing social processes. Unless it's one person writing, analyzing, and shipping code, communicating about vulnerabilities is just as important as finding them. And effective communication is really hard. We're not talking about emailing a huge report of maybe-bugs to a project manager. We're talking about delivering all of the necessary information to diagnose and fix a vulnerability in a simple and user-centered view.
  • The next principle is about finding and fixing things as quickly as possible. It's not a new idea, but as a guiding principle it leads you to be ruthless about bottlenecks, latencies, and root causes.
  • For a while, we were dealing with the same types of bug over and over and over. Once, while on call, I had a group of people decide to get themselves on the whitehat page by finding XSS in all of the sites of companies we had acquired. Let's just say I didn't get a lot of sleep that weekend. We've now introduced much more comprehensive security reviews for acquired companies. In our experience, the best predictor of the next bug is the last bug. So that's where we focus our effort.
  • There are a lot of ways to find security problems, and you get diminishing returns from each. We have tools that live on our servers, tools that live outside our servers, and tools that live in our users' browsers, all meant to catch different types of issues.
  • Security automation results aren't entirely accurate. We want the fantastic engineers we work with to trust us, and so we want to make sure that they have a voice in the process.
  • Most people want to do the right thing. We want to make it easy for them.
  • We shouldn't be doing anything that doesn't require creativity or judgment.
  • While we've had some success with third party analysis and management tools, we've found that it's typically better to build our own. We know what we need to look for, and we know how our organization works. By doing only the things that are applicable to our technology, culture, and workflow, we waste less time overall.
  • So we try to follow these philosophies when we approach using and implementing tools. Automating security does not just mean using automated tools for specific tasks.
  • We have these manual tasks we need to perform as part of our security program. We need to review code as it is developed. We need to do penetration testing by poking around on our websites. And then we rely on whitehats to find problems and hopefully report them to us, rather than letting the world know. Many of our security tasks can be partially replaced with automated tools.
  • For example, we can use static analysis to check for common coding problems, dynamic analysis for obvious problems on websites, and maybe CSP to get XSS reports to us sooner.
  • But the workflow is still manual! Someone from the security team runs the tools, waits for results, then needs to determine the validity of reports, and then work to get fixes in place. Like Alex said, we need to replace the dumb work with automation.
  • And we have to do it over and over for new code and new projects. Even using tools, we are still operating in a manual workflow.
  • We need to put our robots to work! Replace the manual workflow with one that runs the tools for you, then only requires your attention when a problem is found. For static analysis, we want tools to be run automatically when code is committed. For dynamic tools, we want them to always be crawling our sites and looking for problems. The reports from these tools should go to a central location, which only alerts us when potential problems are found.
  • Once we have an automated workflow, we are happier and more relaxed. Fewer repetitive tasks means we can focus more attention on jobs that require creativity and deeper investigation.
  • Our original approach to solving the automation problem centered around Jenkins CI, an open source continuous integration server. This worked okay at first for running static analysis tools, but we needed a solution that would work with dynamic tools and we found the notification system did not fit our workflow.
  • So we have been working on our automation solution called SADB, a central service to handle all of our automated tools and reports. It serves not only as a dashboard for the security team, but also handles notifying and informing developers. [Background notes:] SADB originally stood for Static Analysis DashBoard; it incorporated more and more results over time, but we loved calling it "SADB" so the name stuck. It's a Rails app. Most people use Brakeman with Jenkins, which has a few issues. One issue that was a blocker for us was the scenario where a single line change would trigger a matching new-and-fixed warning alert every time. Some people alert only on deltas to help reduce the noise, but that potentially hides vulnerabilities. SADB came out of the need to manage all of the various data points we have around code security (similar in spirit to ThreadFix), and it gave us a high-level overview that Jenkins couldn't give us. Early versions relied heavily on Jenkins: we scraped images, posted results to SADB, and also received data from scans during deploys. But it was completely informational, failed to have any meaning to developers, and wasn't meant to be user-facing; it was really just to help the team manage issues. We wanted to manage phantom-gang findings as well, so we started posting those results to SADB and used ActiveAdmin to give us a simple GUI to create Jira issues and a more digestible format for all of our findings. While developers would see the tickets that resulted from SADB management, we had made no progress in making it useful for someone outside our team. Then came #hackweek. We thought about it from the developer's standpoint, came up with a few user stories, and built a wicked awesome Mesos-based continuous-integration-like system.
  • SADB is our central database of reports, and can handle input from a variety of sources, which we will describe a little later. This includes static analysis reports from Brakeman, dynamic analysis reports from Phantom Gang, CSP reports directly from browsers, and our internal code review tracking. SADB can then send out notifications as needed to developers and the security team. Because this is a custom tool, we can more easily adapt it to take input from anywhere, and make sure the logic matches what we need for our workflow.
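The ingestion logic described above can be sketched roughly like this. This is hypothetical Ruby, not Twitter's actual SADB code: the idea is that reports from any source (Brakeman, Phantom Gang, CSP) reduce to a source, an app, a fingerprint, and some details, and only previously unseen fingerprints generate a notification.

```ruby
require 'set'

# Hypothetical sketch of SADB-style report ingestion.
class ReportStore
  def initialize
    @seen = Set.new          # (app, fingerprint) pairs we already know about
    @notifications = []      # pending notifications for developers/security
  end

  # Ingest a report from any source; only genuinely new findings notify.
  def ingest(source:, app:, fingerprint:, details: {})
    key = [app, fingerprint]
    return :duplicate if @seen.include?(key)
    @seen.add(key)
    @notifications << { source: source, app: app, details: details }
    :new
  end

  attr_reader :notifications
end

store = ReportStore.new
store.ingest(source: :brakeman, app: "web", fingerprint: "xss-users-show-42")
store.ingest(source: :brakeman, app: "web", fingerprint: "xss-users-show-42") # duplicate, no alert
```

The dedup-by-fingerprint step is what keeps a noisy scanner from re-alerting on every scan of the same unfixed issue.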
  • Brakeman is an open source static analysis security tool for Ruby on Rails applications. "Zero configuration." It detects the usual suspects: SQL injection, XSS, command injection, open redirects. It also catches Rails-specific issues: mass assignment, model validation, default routes, CVEs. And more.
  • Brakeman can be run anytime: after deploys (but why? too late), as part of QA or code reviews, integrated into CI (Jenkins or custom), as a commit hook, as part of tests (rake brakeman:run), or even as code is saved, with file system monitoring.
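As a small illustration of consuming Brakeman output in a pipeline like this: the JSON shape below (a top-level "warnings" array with warning type, file, and line) mirrors Brakeman's JSON report format, but treat the exact fields as illustrative rather than a complete schema.

```ruby
require 'json'

# Sketch: summarize a Brakeman JSON report for a CI step or notification.
def warning_summary(report_json)
  warnings = JSON.parse(report_json).fetch("warnings", [])
  warnings.map { |w| "#{w['warning_type']} in #{w['file']}:#{w['line']}" }
end

report = '{"warnings":[{"warning_type":"Cross Site Scripting",' \
         '"file":"app/views/users/show.html.erb","line":3}]}'
warning_summary(report)
# => ["Cross Site Scripting in app/views/users/show.html.erb:3"]
```

A CI job could fail the build whenever this summary is non-empty, which is the "can't ship code without fixing warnings" goal mentioned later.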
  • A developer pushes code to the central repo; a Mesos job pulls the latest code and scans each commit; each scan is reported to SADB; notifications are sent on new/fixed warnings.
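The "new/fixed warnings" step can be sketched as a set difference over warning fingerprints between two consecutive scans. The fingerprints here are hypothetical identifiers; any stable per-warning identity would work the same way.

```ruby
require 'set'

# Sketch of new/fixed notification logic: compare warning fingerprints
# from the previous commit's scan against the current commit's scan.
def diff_warnings(previous, current)
  prev = previous.to_set
  curr = current.to_set
  { new: (curr - prev).to_a, fixed: (prev - curr).to_a }
end

diff_warnings(%w[sqli-1 xss-2], %w[xss-2 redirect-3])
# => {:new=>["redirect-3"], :fixed=>["sqli-1"]}
```

Because every commit is scanned, the first commit whose scan contains a fingerprint also pinpoints when (and by whom) the warning was introduced.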
  • Because SADB collects reports per commit, we are able to track detailed history and trends for each application.
  • The large drops in warnings date from when Brakeman started getting used more heavily at Twitter.
  • SADB allows developers or security people to drill down into reports
  • The warning details are designed first for developers following an email link, then for the security team
  • Because we scan each commit, we are able to pinpoint just how long a warning has been around.
  • We provide a link directly to the file and line number.
  • The warning also includes a snippet of the code that raised the warning, as Brakeman interprets it. Brakeman resolves variables to their assigned values, and even has logic to condense expressions where it can (for example, 1 + 1 + 1 becomes 3).
  • We included inline documentation about the potential vulnerability as it relates to Rails.
  • Each warning has a button that allows developers to directly tell the security team that the warning is bogus.
  • [Video here to show off the SADB/Brakeman flow.]
  • We want to integrate this into our deployment more tightly so that people can't ship code without fixing warnings. We're also working on static analysis tools to cover JS and Scala, especially for our internal web frameworks. SADB could perhaps even trigger Decider changes to disable a given feature.
  • Phantom-gang was born as a tool that complements our static analysis and manual efforts by scanning live web pages. We'll talk about a few issues we were seeing over and over again.
  • These are often issues that might go undetected unless an attentive person reports them, but they are very easy to detect on a live web page. In order to hunt them down with some tenacity, we needed to create a tool to look for them. Mixed content can cause a variety of issues, the main one being that you lose the guarantee that the content is coming from who you expect it to come from. Attackers can inject content, sniff cookies, etc. CSP reports can help with mixed content, but only if a user with a CSP-enabled browser visits the page.
  • In addition to traditional dynamic scanning (XSS, SQLi), we wanted to employ a tool tailored to the problems we are seeing rather than what the industry is focusing on. Phantom-gang is a dynamic analysis tool for finding common issues that can be detected easily in a browser environment. It is an "always on" tool that is constantly crawling our properties. For common classes of mistakes, we create phantom-gang rules that might eventually turn into a regression framework.
  • Phantom-gang is a collection of node processes that spin up PhantomJS instances (hence the name). PhantomJS is a headless WebKit browser that is driven by JavaScript. This allows us to simulate what the user would experience, with full JavaScript support. Given a browser environment, it's really easy to test for the problems previously listed. Phantom-gang sends reports of what it finds to SADB. The management of these issues is not automatic like Brakeman warnings; I'll get into that more in a bit. From SADB, we can create a Jira (our issue tracking software) ticket for the owners to fix.
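One phantom-gang-style rule, mixed-content detection, can be sketched over a fetched HTML string. The real tool drives PhantomJS in a live browser environment; this is just the rule logic, in Ruby for consistency with the rest of these notes, and the regex is a deliberately simple approximation.

```ruby
# Sketch of a mixed-content rule: flag http:// sub-resources referenced
# by a page that is served over https. (A naive scan; href matches also
# catch plain links, which are not strictly mixed content.)
def mixed_content_urls(html)
  html.scan(/(?:src|href)\s*=\s*["'](http:\/\/[^"']+)["']/i).flatten.uniq
end

page = '<img src="http://cdn.example.com/a.png">' \
       '<script src="https://ok.example.com/a.js"></script>'
mixed_content_urls(page)
# => ["http://cdn.example.com/a.png"]
```

Each URL found would become a report posted to SADB, from which a Jira ticket can be created for the page's owners.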
  • Future plans: servicify it, so people can request a scan for a given site/page. It might also be a springboard for JS analysis, which is useful because we have resolved JS dependencies and the full JavaScript handled by asset packaging. We could incorporate Etsy-style testing. And open sourcing.
  • Content Security Policy defines what can "run" on a page, and any deviation creates a report. Twitter was an early adopter. We saw that this could not only potentially protect our users, but also give us a large number of data points about what the user is experiencing. We have used CSP to help detect XSS and mixed content by leveraging the reports sent to us by users' browsers. This complements the static and dynamic analysis provided by Brakeman and phantom-gang in a unique way, since we are receiving information from the user. We send the CSP reports to a central Scribe host (Scribe: a massively scalable endpoint to collect and aggregate large amounts of data), which writes to the Hadoop file system, which we can run "big data" reports against using Pig/Scalding. We send this information to SADB, where we can search and sort more easily.
  • What the reports catch: mixed content; static content gone dynamic; inline script where there isn't supposed to be any.
  • Take your CSP reports and turn them into something actionable, but tune down the noise. Initially we were getting all kinds of false positives from Chrome extensions, compromised systems, etc. We feed the reports into Hadoop and Splunk. See a lot of img-src violations with "http"? You likely have a mixed content problem. See a lot of script-src violations? You could be under attack. Users on other browsers don't get protection.
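The triage heuristics above can be sketched as a small classifier over the JSON body a browser POSTs to the report endpoint. The field names "violated-directive" and "blocked-uri" come from the CSP reporting format; the classification rules themselves are illustrative.

```ruby
require 'json'

# Sketch: triage one CSP violation report into noise / mixed content /
# possible XSS, mirroring the heuristics described in the notes.
def classify_csp_report(body)
  report    = JSON.parse(body).fetch("csp-report", {})
  directive = report["violated-directive"].to_s
  blocked   = report["blocked-uri"].to_s
  return :noise         if blocked.start_with?("chrome-extension")
  return :mixed_content if directive.start_with?("img-src") &&
                           blocked.start_with?("http://")
  return :possible_xss  if directive.start_with?("script-src")
  :other
end

body = '{"csp-report":{"violated-directive":"script-src \'self\'",' \
       '"blocked-uri":"http://evil.example"}}'
classify_csp_report(body)  # => :possible_xss
```

In practice you would aggregate over many reports (in Hadoop/Splunk, as above) rather than alert on a single one, since any individual report may be a compromised or misbehaving client.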
  • A report from one of our wonderful whitehat reporters gave us a drop of happiness when he said that a successful XSS attempt had been thwarted by CSP. TRANSITION: we took stock of which headers were implemented on our properties, and we were not satisfied. They were applied inconsistently and by a variety of one-off methods.
  • Inline script takes several forms: script tags, on* event attributes, javascript: hrefs. Even mention inline style.
  • Mention the GitHub blog post. There are a few, mostly well-known, ways to solve this: data attributes, blocks of code parsed as JSON. Mention the application of the header.
  • While this doesn't exactly fit the theme of a central place to see information, the application of a consistent CSP header led to the creation of a library to apply the rest of the headers. HSTS ensures that a given page will only be loaded over SSL, which is enforced by the browser. HSTS is unusual among headers in that performance and security are on the same side: you save a round trip/redirect. HSTS basically tells the browser to only load a site over SSL once the header has been set (usually for a long period of time). This helps mitigate SSLstrip and Firesheep-style attacks. It not only protects our users, but gives us justification to enforce SSL on previously non-SSL'd things, because we created a library that makes applying the headers easy.
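The "library to apply the rest of the headers" idea can be sketched as a Rack-style middleware that stamps every response with a consistent, centrally defined header set. The class name and header values here are illustrative defaults, not Twitter's production policy or the actual SecureHeaders gem API.

```ruby
# Sketch of a SecureHeaders-style middleware: one central place to
# configure security headers, applied to every response.
class SecurityHeaders
  HEADERS = {
    "Strict-Transport-Security" => "max-age=31536000",  # one year, illustrative
    "X-Frame-Options"           => "SAMEORIGIN",
    "X-XSS-Protection"          => "1; mode=block",
    "X-Content-Type-Options"    => "nosniff",
  }.freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    # Defaults first, so any header the app set explicitly wins.
    [status, HEADERS.merge(headers), body]
  end
end
```

Centralizing the values is the point: they can be audited in one place, and browser-implementation quirks can be handled once instead of per-application.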
  • Twitter has had clickjacking problems in the past. While X-Frame-Options (XFO) does not solve all clickjacking issues, it does solve a very common case and is generally a very quick win that is easy to integrate.
  • Yeah, there are some IE specific headers too. I assume they are useful.
  • Given that browsers give us some baked-in security, and these headers take a relatively small amount of effort to implement, why aren't they more common? It's a non-intrusive, easily configured way of ensuring that all requests get the necessary headers applied. We created a gem for Rails applications, and we intend to apply the same logic to our other frameworks as well.
  • Talk about the benefits: centrally configured/audited (pair with Decider); abstracts away differences in browser implementations; can mimic unreleased features (Firefox forwarding, default-src propagation, chrome-extension handling, etc.).
  • A couple more small things we built. First, there's Threatdeck.
  • One of our teammates had built out a set of TweetDeck columns with terms like "Twitter XSS", "Twitter SSL", and "script alert". In the past, people had tweeted about vulnerabilities using these terms, which is not exactly responsible disclosure, but using these columns, he would find out about it quickly. We liked the idea so much we built out "ThreatDeck", which anyone in the company can monitor, and which has a cool radar animated gif and ASCII art.
  • Finally, there's Roshambo, and this one's kind of funny.
  • In the past, people were constantly shipping code, and we simply didn't have the visibility we needed to review the important stuff. So we started using a mechanism to alert us if changes happen to critical code paths, which automatically adds us to a code review. The problem then became that we had a bunch of code reviews lined up, but sometimes they wouldn't get reviewed. Someone would have to manually collect them and review them... but who?
  • Our team held a roshambo tournament every week, and the loser would have to collect and review all of the leftover code changes. And while this was great for team morale, we realized we could use automation to automatically collect the unreviewed changes and report them to SADB. We still have a roshambo tournament to determine who reviews them.

    1. Putting Your Robots to Work: Security Automation at Twitter (@salesforce, April 23, 2013, @alsmola | @ndm | @presidentbeef)
    2. The future
    3-10. (images only)
    11. Philosophical Guidelines
    12. Get the right information to the right people
    13. Find bugs as quickly as possible
    14. Don't repeat your mistakes
    15. Analyze from many angles
    16. Let people prove you wrong
    17. Help people help themselves
    18. Automate dumb work
    19. Keep it tailored
    20. Automating Security
    21. Manual security tasks: code review, external reports, pen testing
    22. Automated security tasks: static analysis tools, dynamic analysis tools, CSP
    23. Manual security workflow: run tool → wait for it... → interpret reports → fix stuff
    24. Manual security workflow: ...repeat
    25. Put your robots to work! Code committed → run static analysis tools / run dynamic tools → gather reports → issue notifications (Automate dumb work)
    26. After automation
    27. Jenkins CI
    28. Security Automation Dashboard (SADB)
    29-30. SADB inputs and outputs: CSP, Brakeman, ThreatDeck, Phantom Gang, Roshambo → email developers, email security
    31. Brakeman: open source static analysis for Ruby on Rails (brakemanscanner.org)
    32. Brakeman can run anytime: write code → save code → run tests → commit code → push to CI → code review → QA → deploy code (Find bugs as quickly as possible)
    33. Developer → code repository → Mesos + Brakeman → SADB: push code, pull code, send report, send email (Get the right information to the right people)
    34. Historical trends (2007-2013)
    35. Historical trends: Twitter starts using Brakeman
    36. Reports
    37. Anatomy of a warning: warning message
    38. Anatomy of a warning: when warning first reported
    39. Anatomy of a warning: code location, link to repo
    40. Anatomy of a warning: code snippet
    41. Anatomy of a warning: Rails-specific information (Help people help themselves)
    42. Anatomy of a warning: false positive report button (Let people prove you wrong)
    43. (video)
    44. (image only)
    45. SADB inputs and outputs (repeated)
    46. What does it look for? Mixed content; sensitive forms posting over HTTP; old, vulnerable versions of jQuery; forms without authenticity tokens
    47. Don't repeat your mistakes
    48. (image only)
    49. Phantom-gang 2.0
    50. SADB inputs and outputs (repeated)
    51. (image only)
    52. Detecting XSS (Analyze from many angles)
    53. (image only)
    54. (video)
    55. (image only)
    56. Implementing CSP is not trivial
    57. HTTP Strict Transport Security
    58. X-Frame-Options
    59. X-Xss-Protection, X-Content-Type-Options
    60. (image only)
    61. SecureHeaders (Automate dumb work)
    62. Header status page
    63. SADB inputs and outputs (repeated)
    64. ThreatDeck
    65. SADB inputs and outputs (repeated)
    66. Review all the things
    67-68. Ro-Sham-Bo
    69. Ro-Sham-Bo: needs to be reviewed (Automate dumb work)
    70. Our journey thus far. Before: manual tasks, low visibility, late problem discovery. After: automated tasks, trends and reports, automatic notifications
    71. Tools in this presentation