Yet Another Dan Kaminsky Talk (Black Ops 2014)
2014 Defcon Talk

Presentation Transcript

  • Yet Another Dan Kaminsky Talk (About much more than RNG) Dan Kaminsky Chief Scientist White Ops Special Guest: Ryan Castellucci, Security Engineer, White Ops
  • Back To Defcon! • This is my thirteenth rodeo • Why? • It’s fun tech, but why this tech, why this place, why this way? • Fundamentally subversive, even for Defcon • Defcon loves to show what can be broken • I love to show what’s possible • We can scan the whole Internet, in seconds! • We can bust violators of Network Neutrality! • We can play video of Darth Vader doing Riverdance over DNS! • Among other tricks involving DNS • Like DNS rebinding and hacking home routers • These are not actually the same thing • So why are we here?
  • Subjects For Today • Why do we keep getting owned by bad Random Number Generation? • Users are expected to recognize and maybe even repeat complex patterns, and they’re failing. Can we do better? • What do we do about the infinite supply of Browser 0day? • What do we do about DDoS, which is actually growing substantially? • So, just how much did the NSA shit the bed? • But first…I want to talk about why all this stuff we do is useful. By discussing hard drives.
  • A Scenario • An attacker has compromised a system, and put something malicious on the hard drive. How bad could it be? • Added an autolaunching daemon? • Replaced core OS files? • Put a virus in the MBR? • Can you just wipe out the drive and be done with this?
  • What Is The Difference Between… A Hard Drive A Brick
  • Given Sufficient Research, None A Hard Drive == A Brick ==
  • Theory Vs. Reality: What Is A Hard Drive
  • Xzibit’s Iron Law Of Computer Architecture • Yo Dawg, I heard you liked computers, so I put a computer in your computer so you could compute while you compute • This is basically how computers work – hard work is offloaded to hardware, which is really just another computer that has a fast way to influence you • Fast == trusted == insecure • Everything is generic now – the iPhone has seven ARM chips • The biggest lie about your computer is that it’s just one computer • That hard drive is just another computer with direct access to your system memory via specially designed protocols • Doubt me?
  • From An Old Project with Travis Goodspeed: Let’s Get A Shell On This Here Seagate!
  • Oh Shell, You’re even self documenting!
  • Documents commands, their locations, their arguments…
  • Get some internal drive metadata…
  • Learn about the disc you know…
  • …and the disc you don’t…
  • Can Even Read It Out…
  • Nice Test Code!
  • Don’t think that you need that serial link…
  • …or that I was kidding about making bricks • Fate: Platters exposed, fingerprinted. Firmware dumped as 001.rom. • Fate: Bricked by Dan when he wrote to the buffer. Probably repairable. • Fate: Bricked by Dan with HDPARM, then sawed apart by Travis for post-mortem firmware analysis. • Fate: Semi-bricked by Dan by raising the serial port rate to 625000. Repairable.
  • The Paper That Eventually Followed • Implementation and Implications of a Stealth Hard-Drive Backdoor Jonas Zaddach, Anil Kurmus, Davide Balzarotti, Erik-Oliver Blass, Aurelien Francillon, Travis Goodspeed, Moitrayee Gupta, and Ioannis Koltsidas • I’m an unindicted co-conspirator • 10 months of solid reverse engineering to create the malicious firmware payload • Not at all the first time this has been done, but maybe the first time publicly/academically
  • The Significance of the Research: S^X (Storage XOR Execution) • Highly desirable property for secure clouds • “It’s not about ownage, it’s about continued ownage. It’s about coming back in a year and having the door still open for you” • Requires exploitation and persisted storage • If your storage doesn’t parse, it can’t be exploited • If your execution doesn’t store, there’s nowhere to persist • You can’t start thinking about S^X if you think you already have it • There are ways of building S^X out of existing gear – but nobody thinks you have to • Nobody thinks you have to have exclusive hosts on cloud providers either • Thus, the value of hacker engineering • No delusions • Operation from first principles • The best thing hackers break are assumptions • We are needed.
  • I Believe In The Internet
  • The Internet Is Not Just For Us • I believe in TexasGirly1979 having six and a half MILLION people loving her damn cat • Without Bob Saget, a laugh track, and a baseball to the testicles as followup • I believe in responding to being told your music sounds like “rats being strangled” by becoming a dubstep violinist and getting a half billion hits on YouTube • Remixing Skyrim and Zelda and Halo may have something to do with my opinion • I believe in email over Interoffice memos, Skype over (very) plain old telephone service, online banking over waiting for a bank teller • And I believe we could lose it all, and there are some who might even prefer that
  • The Thing To Remember • The Internet was not the first time we tried to create the Internet • It was like the 8th • Prodigy tried • Minitel tried • America Online really, really tried • Spent a billion dollars on modem lines! • (They actually still make $160M/yr off that.) • The Internet was free • Not just free as in murka • It was free as in no rent – all of the above wanted to charge transaction fees • The Internet just wanted a few pennies a year for a domain name • The free Internet has disrupted a heck of a lot • Don’t think there are those who wouldn’t like to disrupt back
  • The Challenge • We know what’s really possible • We know that breaking everything, is really possible • We don’t know that fixing everything, is really possible… • …but technically, we don’t actually know fixing everything is impossible • Technically correct is the best kind of correct • So why do I do this, and why do I do this here? • Because here, I see openness to possibility • Including the possibility the Internet might turn into AOL • I see the sheer joy of understanding the actual mechanisms at play • I see the skepticism that allows BS – especially security BS – to be discussed and recognized
  • Let’s Talk Random Number Generators • “The generation of random numbers is too important to be left to chance” • Many processes require a generator of unpredictable numbers • Last 1000 numbers don’t tell you the next number • Last 1000 numbers also don’t tell you the number 1001 times ago • That they require such a generator doesn’t mean they’re using such a generator
  • What’s happening to all these web frameworks • You actually don’t log in very frequently • Top 10 e-commerce platform: 7 logins a second, worldwide • Logging in gives you a token • If you’re lucky, it’s HMAC(who_you_are, key) • In the real world, it’s often Random() • Sure would suck if that wasn’t actually random…
  • Dominique Bongard’s (AKA @reversity) Beautiful WPS Attack From #Passwords14
  • Details • WPS Protocol assumes 128 bits of entropy in AES keys • Reality: 32 bit LFSR, 32 bit LFSR, 0 bit (aeskey is always 0) • 2**128: Big • 2**32: Not so big (~4B, remember your CPU does that many operations in seconds)
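To make that gap concrete, a back-of-the-envelope sketch; the 10**9 guesses/second rate is an illustrative assumption for a CPU stepping a cheap function like an LFSR, not a measured benchmark.

```python
def seconds_to_enumerate(bits, guesses_per_sec=10**9):
    # Time to try every seed in a keyspace of the given size.
    return (2 ** bits) / guesses_per_sec

print(seconds_to_enumerate(32))    # ~4.3 seconds: "not so big"
print(seconds_to_enumerate(128))   # ~3.4e29 seconds: "big"
```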
  • Why Do We Keep Getting Owned By Random Number Generators? • Wrong Answers • “Because we used SHA-1 instead of SHA-256” • Freaking MD4 would work • “Because we used HASH_DRBG instead of HMAC_DRBG” • “Because we used the same entropy for too long” • “Because we had only one pool for mixing entropy instead of thirty two” • There’s a thousand ways to build a CSPRNG (cryptographically secure pseudo random number generator) and all of them would fail less than what we’re doing • Even forking/cloning VMs doesn’t fail this frequently • Right Answers • 1) No entropy • 2) No CSPRNG by default
  • A Little Dab’ll Do Ya • CSPRNGs expand a little bit of entropy (128 bits) into billions of gigabytes • Best attack would be to brute force the 2**128 possible initial seeds and find the one that matches the stream • Good luck with that…unless there’s actually not even 128 bits • There’s often actually not even 128 bits • Turn on a machine, no keyboard, no mouse, no disk…no entropy. • 1/200 RSA keys on the Internet were badly generated, meaning actually probably 1/50 • Heninger attack only found identical private keys, not similar keys • Private keys are just small chunks of randomness that ended up prime • Shouldn’t /dev/random or /dev/urandom have prevented this?
  • Perfection Is The Enemy Of The Good • Why we like kernel RNGs (/dev/random, /dev/urandom) • Kernel code gets events from hardware • General thinking is that event timing from foreign hardware is on a separate clock • All good entropy is a fast clocked system measuring a slow clocked system • Inter-keystroke timing from a human to the CPU nanosecond • There’s also only one kernel • If anyone gets entropy everyone gets entropy • Operating systems are supposed to provide services that are easy to screw up • Can get more complexity/correctness encapsulated in the OS rather than monkey-shoved into each language or application • What went wrong • /dev/[u]random still fails when there’s no events coming in from the hardware • No keyboard, no mouse, no disk == no events • Yes, it’d be nice if there were hardware entropy generators, but that’s not happening
  • Solving The True Entropy Problem • You don’t have to be perfect every time • You just need to be better than the ONE NINE OF RELIABILITY the status quo has achieved • Be embarrassed • 1) Force synthesis of events • 2) Don’t necessarily be afraid to do this in userspace • Especially in a world of VMs, kernel is nothing but a single threaded process • What this means • Measure the real time clock (kilohertz at best) with the CPU (megahertz to gigahertz) • Possibly do some work with non-deterministic time (atomic memory writes) • Do standard whitening (Von Neumann – drop 00 and 11, 01 is a 0, 10 is a 1) • This is the Dakarand (Truerand) approach. It may fail in some very rare scenario. It will fail less than the status quo. • Security means it’s OK to fail less.
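A minimal sketch of the Dakarand/Truerand idea in Python, assuming only a fast clock (time.perf_counter_ns) is available; the spin interval and sampling details are illustrative, and as the slide says, this may fail in rare scenarios but fails less than no entropy at all.

```python
import time

def raw_timing_bits(n):
    # Sample the low bit of a fast clock after a variable-latency spin;
    # the bit drifts with scheduler, cache, and interrupt noise.
    bits = []
    while len(bits) < n:
        t0 = time.perf_counter_ns()
        while time.perf_counter_ns() - t0 < 1000:   # spin ~1 microsecond
            pass
        bits.append(time.perf_counter_ns() & 1)
    return bits

def von_neumann_whiten(bits):
    # Standard whitening: pair up bits, drop 00 and 11,
    # map 01 -> 0 and 10 -> 1.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]
```

Whitening discards biased pairs, so expect fewer output bits than input bits; keep sampling until enough survive.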
  • Solving The True Problem • Operating systems have /dev/urandom which is generally pretty good – but does anything use it? • JavaScript Math.random(): no • Ruby Random.rand(): no • java.util.Random(): no • PHP rand(): no • Glibc rand(): hellllllllllllllll no • Standard RNG is LFSR or Mersenne Twister – if you want security, you’ve got to hunt down some other API • And it’s always funky and different! Math.random(); var a = new Uint32Array(1); window.crypto.getRandomValues(a); • How well does insecure by default work anywhere else? • Pretty badly
  • The Obvious Question: Why not be secure by default? • Instead of requiring some special, magic, and consistently overcomplicated API for Secure Random™, why not just have all random calls not be crap? • Liburandy • Silently hijacks standard randomness functions in common languages, backs them with /dev/urandom (on Linux) • Would be CryptGenRandom on Windows • The OS already does this right, let’s just hook dev intent to something that’s not busted • Initial support for JS(Node, Browsers), Ruby, LD_PRELOAD (arbitrary libc), PHP, Python. OpenJDK soon • Ryan Castellucci of White Ops helping to write this • Will eventually support some important runtime features • Entropy Metadata (runtime will be defining security rather than source code) • Fixed seed • High speed mode
  • What’s Wrong With Liburandy (Let’s play chess against myself a bit) • Two General Objections: Seed and Speed • Seed: Useful for getting consistent test results (Random, but the same random set) • 99% of the time you see seeding, it’s seeding with the current time, because people don’t want the same random results • Can set a special flag (environment variable) • Speed: urandom is slower than LFSR/MT • If you wanted it to be fast, why are you writing it in Ruby? • This is actually why things have been left crappy – so as to not lose benchmarks
  • The Headaches Of Speed • /dev/urandom is fast enough for almost all uses • That being said, it’s still a lot slower than LFSR/MT • Linux: Only 3.7MB/sec – that’s only 947K ints/sec (and that’s with CPU drain)
  • Should we use a userspace CSPRNG, just not a crappy one? • Seed with [u]random from time to time to still absorb whatever entropy it does get • Minimal/Fast CSPRNG: SipHash(Seed, Counter++); • Yes, this actually works • ~20x speed of /dev/urandom on Linux • Possible improvement: CLOCK_MONOTONIC • Ideally, we’d just be able to ask the CPU for some random bits whenever userspace wanted a few • That’s way too easy (and we’d probably never trust the output anyway) • But we can ask the CPU for time • Even forked, likelihood near zero of repeated return value • For VMs, you don’t get a surfaced event on clone, but you do get to plunge into the host to find out nanosecond time • Also can be done on a per query basis, which cleans up a bunch of buffering issues from reading off a file handle
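A sketch of that minimal counter-mode construction. The Python standard library doesn't expose SipHash, so keyed BLAKE2s stands in as the PRF; treat this as an illustration of Hash(Seed, Counter++), not a vetted CSPRNG.

```python
import hashlib

class CounterCSPRNG:
    # Minimal Hash(Seed, Counter++) generator. Keyed BLAKE2s stands in
    # for SipHash; BLAKE2s keys may be up to 32 bytes.
    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def next_block(self) -> bytes:
        h = hashlib.blake2s(self.counter.to_bytes(8, "little"),
                            key=self.seed)
        self.counter += 1
        return h.digest()   # 32 bytes per call
```

Reseeding from /dev/urandom "from time to time", as suggested above, would just replace self.seed with fresh bytes.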
  • My preferred CSPRNG • SipHash(Secret || Counter++ || Previous_Output || ShiftedTime) • ShiftedTime is CLOCK_MONOTONIC, shifted by some absolute amount so the attacker can’t just look up the global time and predict bits • Previous_Output means ShiftedTime only has one shot to repeat • Not using this yet, because I’m an ornery member of Team Just Use Urandom • Let’s not debate userspace vs. kernelspace • We agree: DEATH TO LFSR • Also possible that other hashes besides SipHash are good • Blake • Skein (can actually be 2x speed of SipHash, due to larger output) • Maybe we make /dev/urandom fast on Linux? • Probably need an API to probe for metadata about urandom?
  • The Other Entropy Problem • d3b07384d113edec49eaa6238ad5ff00 • 1Ngju42mHfxJJDEKA15rHUum17rcDEf699 • A1$jh-89)&@4bc (oh wait, no: **************)
  • STOP PUKING BITS (Images of 8 Bit Vomit)
  • The Problem At Hand • Computers need Humans to recognize and remember larger numbers of bits • Human face recognition only works to about 16 bits (1/65536) • Computers want 24-128 bits depending on context • Recognition: Is this the hash from before? • Repetition: What is your password? • Standard Bit Representations are terrible • Humans did not evolve to recognize hex, Base64, Base58, or…squiggly
  • Previous Work On Cryptomnemonics • Some previous work from 2006: Names • 512 Male Names (9 bits), 1024 Female Names (10 bits), 8192 Last Names (13 bits) == 32 bits per couple • Heteronormative, but surprisingly diverse! • Julio and Epifania Dezzutti Luther and Rolande Doornbos Manuel and Twyla Imbesi Dirk and Cuc Kolopajlo Omar and Jeana Hymel • Important to show names every time a connection is made, not just on challenge • Would you remember a face you saw only the one time there was an auth challenge?
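The bit accounting behind "32 bits per couple" can be sketched as below. The list sizes match the slide (9 + 10 + 13 = 32 bits); the actual 512/1024/8192-entry wordlists are elided, so this works with indices rather than names.

```python
# Wordlist sizes from the 2006 scheme; each must be a power of two.
MALE, FEMALE, LAST = 512, 1024, 8192   # 9, 10, and 13 bits

def encode_couple(value32):
    male = value32 & (MALE - 1)              # low 9 bits
    female = (value32 >> 9) & (FEMALE - 1)   # next 10 bits
    last = (value32 >> 19) & (LAST - 1)      # top 13 bits
    return male, female, last

def decode_couple(male, female, last):
    return male | (female << 9) | (last << 19)
```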
  • Can We Do Better? • Humans do have memory capacity, it’s just not for arbitrary bits • We remember objects, and specifically stories, more than we remember 1’s and 0’s • So if you can represent 1’s and 0’s as stories, humans will recognize and repeat them better • Also, you’ll be able to put a password in and have it spell check • Experiment: Use triads of adjective/noun/verbs, set up to be “maximally distant” from each other • Encode them in a way that is order independent, i.e. it doesn’t matter if the dog or the cow is brown or orange • Ryan Castellucci has been implementing this, here’s how he did it and where we’re seeing this being immediately useful
  • Storybits v0.1 • Hex bits: 5e4dcc3b5aa765d61d83 (80 bits) • Story encoding (early): • macho acid answering rustic cable fetching stuffy jacket forbidding wacky riot opening • Story decoder autocorrects • node demo.js "wckay jacket forbidrustic fetch cableacid mache answering open riot$$" • 5e4dcc3b5aa765d61d83 • Words are misspelled, out of order, in different tenses, missing spaces, etc.
  • Details • Implementations in JavaScript and Python • Rough, but releasing soon • Two major components • Combinatic Encoder • Encodes information with symbols that may not repeat • BADFGC is legal, AABBCC is not • Because symbols are independent, order doesn’t matter • Word Mapper • Maps combinatorial numbers to words in part-of-speech wordlists • Adjectives, nouns, verbs • Instead of 321, 561, 312, it’s “rustic fetching acid” • Allows for strong correction from word variants to original numbers • Words are chosen to have maximum distance for maximum correctability
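The Combinatic Encoder's core idea can be sketched with the classic combinadic (combinatorial number system): an integer maps to a k-subset of symbols, and because the result is a set, the words it produces can be typed back in any order. This is an illustration of the idea, not White Ops' actual implementation.

```python
from math import comb

def unrank(r, n, k):
    # Map an integer r in [0, C(n, k)) to a k-subset of range(n).
    subset, x = [], n - 1
    for i in range(k, 0, -1):
        while comb(x, i) > r:
            x -= 1
        subset.append(x)
        r -= comb(x, i)
        x -= 1
    return subset

def rank(subset, k):
    # Inverse mapping; sorts first, so input order doesn't matter.
    s = sorted(subset, reverse=True)
    return sum(comb(c, k - i) for i, c in enumerate(s))
```

Mapping the resulting indices into adjective/noun/verb lists, and fuzzy-matching typed words back to indices, is the Word Mapper's job.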
  • On The Selection Of Passwords • We have to get past l33tspeak as a security technology • We punt generation of passwords to users because we don’t want to “screw it up” and we imagine if the user does it, it’s at least their fault for failing • This scheme lets passwords be generated server side, that have a hope of being memorable • Passwords that can be corrected are passwords that are more easily remembered • Faster recovery from silly typos allows password stretching to be more aggressive • Don’t suffer the stretch delay because you missed a key – it autocorrects and still works
  • Major Use Case: Phidelius Approaches Are Going Mainstream • Phidelius: Trick from a previous talk that seeded RNG w/ a passphrase • Allowed passphrases for SSH, SSL certs, pretty much mapped words to asymmetric crypto • A little fragile because it was a /dev/[u]random hijack • These approaches are now popular for asymmetric key management • Brainwallet:”Word/phrase to a Bitcoin address” • Minilock: “Word/phrase to a PGP key” • Incredibly vulnerable to insufficient entropy in the word or phrase • Because the public key is public, and the whole world can try to crack it • Has been an expensive problem in Brainwallet • Tens of thousands of dollars worth of BTC have been taken
  • Squaring The Circle • Brainwallet and Minilock need human-memorable entropy • Actual security needs more entropy • Storybits increase memorable entropy, allow the entropy to come from the machine (which can generate it) and not the user (who clearly can’t) • Extensions • Split Mode – Allow entropy to be split between two words, one that’s easily remembered (but very slow to crack), another that’s kept on a hard drive or offline to allow fast transfer of the key to a new device • The user gets to transfer in seconds • The attacker has to try to crack for hours per guess • Breaks the ~20 bit limit on how much you can stretch a key
  • Two More Tricks (Not Necessarily Advisable) • Local Shrinking • Will always be easier to recognize 24-32 bits instead of 80-128 bits • If you only force the attacker to match 24 bits, he can brute force random keys until he finally gets one that matches just the 24 bits you’re looking for • Olde trick against PGP KeyIds – “DEADBEEF attack” • But what if the attacker doesn’t know which 24 bits to match? • If you have a local shrinking code – sort of a password for recognition – it’s hashed against the full size 128 bit hash and truncated secretly to 24 bits • The attacker doesn’t know the code for the truncation, so he has to match everything. You do know the code so you only have to match a little. • Local Stretching • Technically can have a “master password” that’s mixed with all your otherwise small passwords and shared across many sites • Whether the site uses scrypt or some other key stretcher, its stored password is guaranteed to be stretched outside of cracking range • Password Policy Survival: We have mutually exclusive requirements (symbols, no symbols, numbers, no numbers). Can infer server policy from supplied password, persist it post-stretch
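Local shrinking can be sketched as below, assuming HMAC-SHA256 as the keyed mixing step (the slide doesn't pin down the construction): the legitimate user, who knows the code, matches only 24 bits; an attacker who doesn't know where those bits came from must match the whole hash.

```python
import hashlib
import hmac

def shrink(full_hash: bytes, code: bytes, bits=24) -> int:
    # Mix the full-size hash with a private "shrinking code",
    # then secretly truncate to `bits` bits.
    digest = hmac.new(code, full_hash, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits)
```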
  • Browser 0day: The Shoe Finally Dropped • IE had more attacks in 2013 than Java • How is this possible? MS has been working to secure IE for a decade! • IE is basically Windows: The Remix (feat. The Internet) • Take a local object model (COM), hand it to bad guys, wait • This ultimately applies to every browser, because the spec ends up encoding a COM-like implementation (there’s an interface description language / IDL). • Plugins were nice because they were across browsers and tended to be the thing outside the “sandboxes” but now the browsers themselves are the low hanging fruit • “My TOR deanon has a first name, it’s Firefox 0day” as the Grugq would sing
  • Use After Free: The Infinite Well Of Browser Vulns • Browsers are constantly allocating memory for objects of all sorts of interesting types • Objects can point to each other • When nothing points to an object, then you can free/reuse that memory • Lots and lots of ways to point to an object • Ever screw it up, all hell breaks loose • Allocate a table (0x12341234 == Pointer to Table) • Free the table (0x12341234 == Available For Allocation) • Allocate an image (0x12341234 == Pointer to Image) • Access the image via its old context as a table (0x12341234 contains contents of Image, being accessed via handlers for Table) • “Use After Free” == 90%-95% of the “deep vulnerable” attack surface for the web
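The four-step sequence above, played out against a toy free-list allocator. Real heaps are vastly more complex; this only illustrates the slot reuse that makes the stale pointer dangerous.

```python
class Heap:
    # Toy allocator: freed addresses go on a free list and are
    # handed out again immediately, like a classic heap.
    def __init__(self):
        self.memory, self.free_list = {}, []
        self.next_addr = 0x12341234

    def alloc(self, obj):
        if self.free_list:
            addr = self.free_list.pop()
        else:
            addr = self.next_addr
            self.next_addr += 0x1000
        self.memory[addr] = obj
        return addr

    def free(self, addr):
        del self.memory[addr]
        self.free_list.append(addr)   # address immediately reusable

heap = Heap()
table = heap.alloc({"type": "Table"})   # 0x12341234 -> Table
heap.free(table)                        # 0x12341234 available again
image = heap.alloc({"type": "Image"})   # 0x12341234 -> Image
# The stale "table" pointer now reaches Image bytes via Table handlers:
assert table == image and heap.memory[table]["type"] == "Image"
```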
  • Google and Microsoft Have Solutions Firefox Doesn’t, And Is Suffering For It • Google Solution: Typed Heap • Chrome is allocating all objects of the same type, in the same “sub-heap” • Memory once used for a table, will never be used for an Image, because that’s going into the Image heap • Microsoft Solution: Nondeterministic Freeing • (Oversimplifying, they’re doing a lot of stuff) • They’re making it so you don’t quite know when the memory has been freed, and thus aren’t guaranteed a reliable or rapid attack • This is great. Attack the unique needs of an exploit – this is counterexploitation
  • My Trick: Iron Heap • (Thanks Rob Graham for the name!) • You can’t Use After Free if you don’t free! • This is too expensive if you do it with physical memory • But on 64 bit machines, we have a damn near infinite amount of virtual memory • The basic idea is to never reuse an allocation – the physical memory is reused but the handles aren’t • 0x1234123412341234, once freed, stops pointing to any physical memory • Exploit that
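The same scenario against an Iron-Heap-style allocator, again as a toy sketch: handles (virtual addresses) are monotonic and never reused, so a stale handle faults instead of silently aliasing the new object.

```python
class IronHeap:
    # Toy never-reuse allocator: physical slots could be reclaimed
    # behind the scenes, but handles only ever move forward.
    def __init__(self):
        self.memory = {}
        self.next_handle = 0x1234123412341234

    def alloc(self, obj):
        handle = self.next_handle
        self.next_handle += 0x2000   # illustrative fixed stride per allocation
        self.memory[handle] = obj
        return handle

    def free(self, handle):
        del self.memory[handle]      # the handle itself is never recycled

heap = IronHeap()
table = heap.alloc({"type": "Table"})
heap.free(table)
image = heap.alloc({"type": "Image"})
assert table != image                # stale handle can never alias Image
```

Touching heap.memory[table] now raises KeyError, the dictionary analogue of the page fault a real Iron Heap would take on the unmapped virtual address.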
  • Iron Heap Efficiency • Efficiency • Finite number of active allocations allowed – you’re leveraging the page tables which don’t like to be that stressed • Can have a “hot list” of pages that are directly mapped, and a secondary list that’s “swapped in” on demand • How compressed RAM works • Can potentially add guard pages between allocations to prevent overflows • Overflows have actually gotten kinda rare • Forces 8K per alloc, which is a bit high, though the compressed backing store can be fully packed (no pointers to it are exposed in user space) • Implementation • Can be run in both userspace (with libsigsegv to catch the extra accesses) and in kernel (which is already handling this sort of swapping for, well, swap) • Don’t necessarily object to kernel dependencies or modules • Linux has a crazy limitation where you can’t get page permissions without parsing text!
  • Downsides of Iron Heap • No code available (yet, just PoC’s are built) • Really leans on 64 bit • Browsers aren’t entirely fantastic at that yet
  • What May Work Today: Diehard(er) • Interesting secure allocator from Emery Berger @ UMass • Advanced ASLR implementation w/ out of band heap metadata • Supports Linux and Windows (and Mac, to some nonzero degree) • Suppresses Use After Free, mostly by delaying free (similar to MS approach) • Available today (you do need to recompile with --disable-jemalloc) • Probably worth exploring for Tails etc. • ASLR, NX/DEP, other methods assume corruption has occurred and try to enforce limits to make the corruption unexploitable • UAF blockers actually prevent the initial corrupting moment • There’s nothing going on in Infosec that’s killing more 0day
  • The Plan • Firefox needs a story around memory hardening • It’ll take quite a bit to get them to give up their incredibly efficient jemalloc • First step, let’s get something production worthy for “special Firefox installs” • Tor • Tails • Then let’s look at getting it mainstream • “Like millions of 0day cried out all at once, and were silenced” • Dr. Emery Berger has agreed to join this effort – we’re going to make something production worthy!
  • Need To Start Thinking about DDoS Again • 1 >100GB flood in 2013 • 114 >100GB floods in 2014 • Generally, we think of DDoS involving botnets firing their hordes at some poor web server • The largest and most damaging floods are actually involving spoofed traffic that’s reflecting off Internet DNS and NTP servers • No DNSSEC involved or required • Important to understand why spoofed reflection is so dangerous
  • Why Reflected Floods Are Nasty • Here’s a network. • Lots of routes around the core • Fewer routes from each source, to each destination
  • An unreflected packet stream • Packets are always taking the “fastest” next hop to the target network • Bandwidth == maximum available for the slowest single hop (wherever that is)
  • The Reflected And Spoofed Packet Stream • It goes out to everywhere, spreading the load across all the inbound hops • Everyone replies to just one place, via all inbound links • Boom.
  • What To Do? • BCP38/URPF • Ideally, there wouldn’t be networks that allowed you to lie about who you are • Every network link would have a computed range of addresses it could speak for, and that would be that • This actually is a complete success in the DSL/Cable world • Not saying we shouldn’t continue to strive for BCP38 compliance • At least 2300 networks w/ no BCP38 compliance • Can we do more?
  • Stochastic Tracing: Bringing Back An Old Idea • This idea is at least 12-13 years old (roughly; I’m adding some quirks) • The Internet and its problems were different that long ago. What wasn’t ideal then might be good now • Basic idea: One out of every 100K-10M packets, a “tracer” goes out declaring a hop along the route that sent traffic • Sort of like a “background traceroute” • Irrelevant network load for normal streams, but DDoS would become traceable instantaneously – the bigger the DDoS, the faster degraded networks would identify themselves • Implementation receives a GRE stream of a small subset of packets – well supported • Probably a way to do this w/ command line haxory as well
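The sampling step can be sketched as below; the names (hop_id, emit_tracer) are illustrative, not a real router API, and a real deployment would tap a GRE stream rather than sit inline in the forwarding path.

```python
import random

def forward(packets, hop_id, emit_tracer, rate=1_000_000):
    # One hop's forwarding loop: roughly 1 in `rate` packets also
    # emits a tracer declaring this hop along the route, a
    # "background traceroute" that scales with flood volume.
    for pkt in packets:
        if random.randrange(rate) == 0:
            emit_tracer({"hop": hop_id, "dst": pkt["dst"]})
        # ...forward pkt toward pkt["dst"] as normal...
```

Under a flood, the bigger the packet stream through a degraded network, the more tracers it emits, so the victim identifies the involved hops faster.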
  • Maintaining Privacy and Security • Tracing traffic only goes to hosts that should otherwise be able to see the original payloads • Destination IP for all traffic • Source IP for DNS and NTP (and any other blind responding protocols that pop up) • Haven’t decided what the payload would be – ICMP, HTTP, etc. • To be on the open Internet is to get random stuff, so the only question is what’s easily catchable by a DDoS victim • One variation: Allow alternate destination to be declared in Reverse DNS space • IN PTR • The space could be used for more than just PTR records • Could declare an alternate destination for tracer traffic, or flags for format / multidrop • Still under the authority of the IP address • Obviously should sign tracer payloads with ED25519 keys, possibly declared in DNS
  • Long Term Vision • Reduce the time between receiving a DDoS and tracing the involved networks • Ultimately, automatic suppression of DDoS (though this gets really messy really fast, because so many hops on route see enough to send the shutdown signal) • There are large enough floods growing at fast enough frequency that work like this is necessary • And since Paul Vixie would kill me if I didn’t say it, please update your DNS server to support RRL (Response Rate Limiting), which dramatically reduces DDoS participation
  • What I’m Really Nervous About • Can’t just not talk about the NSA revelations • Not here to recapitulate Bruce Schneier et al. • I’m nervous about people’s reactions to them.
  • Competing on Political FUD instead of Technical Merit • “Sure, you could go with that competitor, that’s cheaper, faster, nicer, more powerful…but OOGA BOOGA $NATIONSTATE” • Seems ridiculous, until you hear proposals to keep all traffic to and from a country, within that country • “Sure, BGP does say you should use this route, because it’s cheaper/faster/nicer/network neutral/peered…but the law says you have to keep it within our borders where we’re the only provider, because NSA” • Again: The Internet was not the first attempt at large scale Internetworking! It was the first one that worked at large scale, because everything else demanded gatekeepers and rentiers • This is the network made by nerds, not by bizdudes and certainly not by politicians • It seems to work pretty well
  • Not to put too fine a point on it, but… • We didn’t exactly want the Chinese Internet, or the Iranian Internet, before Snowden • I’m hoping we don’t want that now.
  • The General Reaction To NSA Corrupting Crypto Standards (DUAL_EC_DRBG)
  • The Nuanced, Even More Disturbing Reality • There’s a reason we were asking the NSA’s advice to avoid cryptographic vulnerabilities • Most cryptographic functions do not need the NSA’s help to be fatally flawed • That comes for free • It takes an enormous amount of time and effort to find functions that might not be flawed, and the NSA seemed good at telling us when we were wrong • DES • It’d be nice if we had a governmental department focused on defensive operations… • Or maybe a conference…
  • Even Obviously Good Advice Is Being Treated With Deep Suspicion • NIST: Heh, Keccak (SHA-3) is actually too slow. Performance matters. • Zooko Wilcox-O’Hearn, creator of Tahoe-LAFS (probably the best cryptographic distributed file system out there right now): Heh, Keccak (SHA-3) is actually too slow. Performance matters. • Community: NIST IS DOING MORE EVIL
  • …are we about to replace NIST with DJB? (Who keeps releasing fast crypto functions)
  • No, seriously, I’m actually a huge DJB fan • I’ve been advocating Curve25519 for use in a TLS-replacing protocol for years • But, uh, fewer trusted resources in a time of great need is not helpful • “Linux scales. Linus does not scale.”
  • The Problem • (Bar chart of “Mission Count”: NSA has 2 missions, DJB has 1)
  • We’re All Kind Of Waiting For The Other Shoe To Drop • Not implementers, everything new keeps using DJBsec • Academics aren’t really allowed to “know things” without some degree of evidence • Science! • We do know that DUAL_EC_DRBG is compromised, but nobody was using that except other feds (sadface) • Pretty much everyone assumes the NIST P curves are compromised • Big complex process to turn a “nothing up your sleeve” number (like pi) into a safe ECC curve • Innocent string used by NIST P curve is c49d360886e704936a6678e1139d26b7819f7e90 • Every shirt has two sleeves • NIST took the time to declare two curves, a P curve and a K curve, almost as if they assumed eventually there would be a problem and it’d be nice if the standard could survive it • The above does not pass scientific epistemology. Nobody cares.
  • The Larger Issue • It’s not like cryptography worked particularly well before the NSA revelations • The hard problems in crypto are not in the math, or the implementation. They’re not even in “side channel vulnerabilities”, as Shamir has said. • Key management is the hard part, because that actually touches users. • Cryptography keeps getting seen as Math and Implementation. There is a third aspect • There is a complexity that generally goes unmodeled: Operations
  • My First Law of Operations • Every project must be measured by the number of meetings required to accomplish it • If there is a meeting with someone outside your department, it’s 5x the cost. If outside your company, 25x the cost. • When’s the last time you saw meeting count in a crypto function? • We have to learn to respect usability, and the sacrifices it may demand • Alex Stamos at Yahoo backing PGP for their mail interface is actually a pretty big step – Yahoo knows UX
  • Summary • Don’t let an attacker touch your storage – S^X • Actively gather true entropy and kill the non-CS PRNGs with fire! • Represent entropy in a way other than bit vomit • Browsers become vastly more secure once UaF is prevented • The NSA crypto fallout continues • Yahoo’s PGP news is fantastic • And…just one more thing
  • It’s Apple Related, So I Totally Get To Pull That Line • “Safari now blocks ads from automatically redirecting to the App Store without user interaction. If you still see the previous behavior, or find legitimate redirection to the App Store to be broken in some way, please file a bug.” -- IOS 8 Beta Release Notes from Apple • YES ACTUALLY, THIS DRIVES ME CRAZY • “I know you’re trying to browse this website, but I bet what you really want to do is buy our ****ty game” • **** THOSE GUYS • Apple wants help? Hackers, lets do it • And not for free
  • Third Party Bug Bounty • I’ve become a believer in bug bounties • This is new • External bug reports can sometimes be of poor quality • If you want richer bugs, maybe we should enrich bug hunters • White Ops hereby sponsors an enhanced bug bounty for this particular Apple bug • $5,000 to the first autoredirection bug finder • $2,500 to the second finder • $1K to the next five finders • We offered $250 six weeks ago, sorry, my bad • We believe hackers can help fix things, and we’re putting our money where our mouth is