Kiwicon 10 Day 1 Morning

Kiwicon X has an X-Files-ish theme this year that leaves me feeling like I should be wearing a suit, Agency-style. But I probably won’t.

Anyhoo, Kiwicon continues to grow apace. This year north of 2000 souls will shuffle, slump, and/or glide their way into a Wellington which has been hit by natural disasters (“Welcome to Wellington. Here are your complimentary earthquakes and floods; if you look to the left, you’ll see a flock of trampolines passing us by”). Which is a pretty remarkable accomplishment; I hope we all remember that one of the things that makes Kiwicon so awesome is how stupendously nice everyone is, as a rule.

Also: if your conference doesn’t have a warning sign above the door saying “Pyro and lasers in use”, your conference needs to lift its game. Just remember not to point your fancy imaging device at the laser show unless you enjoy losing pixels.

(Today’s note-taking device is a cellphone with a USB keyboard and an OTG cable, because I mind “looking like an idiot” less than “keyboard over bluetooth at a hacker con”.)

The Truth Is In Here

metlstorm

“They took me, man. They scanned my ports!”

Mere words cannot do the alien abduction lightshow justice. Not gonna try. Also, pyro.

“We sold 2188 tickets. We have lasers. It’s pretty sweet.”

“There are food trucks. You saw the bacon truck outside.”

“If you find people and noise overwhelming, there is a quiet room if you need to get away.” This is awesome. More places should do this.

Keynote: Defending the Gibson in the Age of Enlightenment

Darren “sham” Bilby

metl: “If Google gets owned, it’s Darren’s problem.”

“We went through an externally-motivated change. You can look it up under Aurora Google.”

“Google has a number of computers. Looking after them is something of a challenge. Infosec is not really a science; there’s a lot of magical thinking.”

“With 600 people you can get a lot of things done, so working out what to do is a problem.”

“What’s your threat model? Mossad/not-Mossad?” It sounds snappy, but unfortunately there needs to be a little more nuance. Darren’s model runs from not-people (worms and the like), through many people who aren’t very good at computers - defending against those is our current state of security, and it’s what we’re good at - to people who are good at computers and well-organised. That last group is the one we’re not good at defending against.

We’ve seen escalations; hacking has become involved in elections in NZ and the USA. Old things like DDoS have become new again. But we’ve made progress, with HTTPS becoming more common.

The “I Love You” virus was a big thing in 2000. In 2016 some of the highest-profile breaches have used the same attack vector. We have many of the same problems.

So let’s look at some debunked ideas.

Potion of Passwording Harder

Complex, regularly-rotated passwords were great in the stone age, but they’re not really adding value any more. In fact, they make things worse, because they piss off users.

So passwords don’t work; enhancing passwords with things like password safes doesn’t help, either, because you need to know your password safe is entering passwords into the right destination (it’s too easy to end up putting your password into a fake web site). So what’s the answer?

Universal two-factor, FIDO U2F: a token that enters credentials for you. Wide industry support; the way of the future.
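To make the origin-binding point concrete, here’s a conceptual sketch of mine (not the real FIDO wire protocol, and all the names are made up): the token derives a separate key per origin, so a response minted for a phishing domain is useless at the real site.

```python
import hashlib
import hmac
import os

# Conceptual sketch of U2F-style origin binding - NOT the real FIDO
# protocol. The token never discloses a reusable secret; it answers a
# challenge with a key derived from the origin it is talking to.
device_secret = os.urandom(32)

def token_response(origin: str, challenge: bytes) -> bytes:
    # Per-origin key derivation: a credential cannot be replayed
    # against a different origin, unlike a typed password.
    origin_key = hmac.new(device_secret, origin.encode(), hashlib.sha256).digest()
    return hmac.new(origin_key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)
real = token_response("https://bank.example", challenge)
phished = token_response("https://bank-example.evil", challenge)
assert real != phished  # the phished response fails at the real bank
```

The real protocol uses public-key signatures rather than HMACs, but the per-origin binding is the part that kills phishing.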

The Crystal of Network Omnipotence

You worry about being penetrated, so you attach a Cyber Divining Rod to your network and rely on it to “secure your network”.

This is not a terrible idea, but attackers now shift data via your whitelisted Office 365 site. Or via a laptop on a boat.

The network is not a line of defence. You have to secure the host.

Have a look at Google’s BeyondCorp initiative. Clients are admitted based on their attested state.

The Talisman of Non-Clicking

“Don’t click things” is terrible advice. It works about as well as you’d think a cyber-abstinence program would. “Don’t open attachments” is advice that just doesn’t work.

The list of “safe sites on the Internet” is localhost.

The problem is that we’re giving people systems which can’t go on the Internet safely. Flash had 314 disclosed remote code exec vulns in 2015. And Flash is just one example.

Imagine you managed a fleet of cars for a company. Would you buy a fleet of cars from a manufacturer whose cars burst into flames every week? Especially when their answer is that they think they’ve fixed it this week. Like they say every week.

So what’s the solution? Sandboxes and memory-safe languages. More granular sandboxes, automated sandboxes, robust sandboxes (e.g. micro-virtualisation).
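As a taste of the idea - my sketch, nothing from the talk - even crude privilege reduction helps: parse untrusted data in a disposable child process with hard resource caps, so a hijacked parser can’t spin forever or balloon. Real sandboxes (seccomp, micro-virtualisation) go much further and also drop filesystem and network access.

```python
import os
import resource

def parse_sandboxed(data: bytes) -> int:
    # Crude Unix sketch: do the risky parse in a child process with
    # hard CPU and memory caps. A real sandbox would also take away
    # filesystem and network access.
    pid = os.fork()
    if pid == 0:
        resource.setrlimit(resource.RLIMIT_CPU, (1, 1))               # 1 s of CPU
        resource.setrlimit(resource.RLIMIT_AS, (64 << 20, 64 << 20))  # 64 MB
        data.decode("utf-8")  # stand-in for parsing untrusted input
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    return status  # non-zero means the parse died; treat input as hostile

print(parse_sandboxed(b"untrusted bytes"))
```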

The Amulet of Evil Enumeration

  1. Get a copy of all the things.
  2. Recognise the bad ones.
  3. Make signatures.
  4. Distribute everywhere.

The answers are:

  1. Nope.
  2. Turing nope.
  3. Nope.
  4. Eventually.
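A two-line illustration (mine) of why steps 2 and 3 above earn their “nope”: even leaving aside that recognising “bad” is undecidable in general, byte-level signatures die to trivial mutation.

```python
import hashlib

# Step 3's signature database: hashes of known-bad samples.
signature_db = {hashlib.sha256(b"evil_payload_v1").hexdigest()}

# A trivially mutated sample: identical behaviour, brand-new hash.
mutant = b"evil_payload_v1" + b"\x00"
print(hashlib.sha256(mutant).hexdigest() in signature_db)  # -> False
```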

Also, anti-virus is actually not entirely useless, but it’s like a canary. But instead of acting like miners with a canary and running out of the mine when it dies, we say “Boy, the canary died. Just as well it inhaled all the poison gas; let’s get back to work.”

Software distribution is a problem. People go to Google, search, and click and click and click until something installs¹. We can’t really tell what’s safe. So what do we do?

  1. No more general purpose computers on the Internet.
  2. Sandboxing and whitelisting.

Consider some new metrics:

  • A count of how often things run on your network with no provenance (roughly sketched below).
  • Number of lines of unsandboxed code exposed to untrusted data from the network.
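A rough sketch of what the first metric could look like (my illustration; psutil is a third-party library, and the allowlist is whatever your deployment pipeline can actually vouch for):

```python
import hashlib

import psutil  # third-party: pip install psutil

# Hashes your deployment pipeline can vouch for. Empty here, so
# every running executable counts as provenance-free.
known_hashes = set()

unknown = 0
for proc in psutil.process_iter(["exe"]):
    try:
        with open(proc.info["exe"], "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
    except (TypeError, OSError):
        continue  # kernel threads, vanished or unreadable binaries
    if digest not in known_hashes:
        unknown += 1

print(f"{unknown} running executables with no provenance")
```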

The mjg59 Smile Time Variety Half Hour

Matthew “mjg59” Garrett

Matthew had offered three talks to choose from. So, in metl’s words, “We asked for the full Matthew this year. All the talks, all at once.” Can Matthew manage 90 slides in 30 minutes?

“The IoT is terrible, there are botnets everywhere”, and then you discover that UPnP helps them breach your home network security.

  • UPnP is nice in that it helps with things like service discovery and enumeration. All over HTTP.
  • A lot of devices are using the Intel reference UPnP server, which is terrible and insecure, since it trusts what you pass it, and it does what you tell it. And it’s always running as root. Happily some manufacturers have a proper webroot.
  • Things are often run by cron. Or, in some entertaining cases, lpd.
  • Vendors don’t care about security.
  • Firmware compromise is scary. A device which has had its firmware compromised simply can’t be fixed. Ever. You can reboot all you like, but you won’t be able to reset it to its original state, ever.
  • TPMs are a mitigant for firmware compromise: they make it hard for compromises to persist. They can also be used to encrypt and seal secrets.
  • They can even be used to defeat some evil maid attacks.
  • TPMTOTP is an implementation of this combined with TOTP to give you a varying code at boot time which attests that your device has not been compromised (the TOTP half is sketched after this list); the downside is you’ll need to enter a six-digit code every reboot.
  • People love unsalted MD5. Unless they love XORing credentials with a google-findable string more.
  • People understand that, say, cameras in the home should probably be pretty secure.
  • There are problems with remote access. Do you port forward? Terrible. Do you use a cloud intermediary? Also terrible.
  • Matthew outlines an ideal solution for how you’d make this work:
    • Many devices use their serial number and a default password to register with the remote access cloud. (Many people put pictures of their devices online, so you can find serial numbers on images.google.com. Don’t take pictures of your new IoT devices and put them on the Internet.)
    • Then you can claim to be someone else’s device, register with the provider’s cloud service, and harvest other people’s credentials as the cloud sends them to the remote camera. Or your device, as the case may be.
    • Again, the TPM can be used to attest that the remote camera is the one it says it is. The downside is you’re now strongly tying your identity to specific devices, and you’re also no longer able to change the software on your own devices. So remote attestation is pretty unpopular.
    • Can we do that in reverse? Have the remote OS and individual containers attest to who they are.
  • Lightbulbs are awful. They love to breach your network via the cloud.
  • IoT security is funny, right up until it’s not. There are terrible problems, and terrible people will take advantage of this. So it’s actually serious business.
  • Selling weak security to non-technical users is negligent.
  • Not all skilled hackers are state actors, and not all victims are enemies of the state.
    • Corollary: tools are available to unskilled hackers, but written by skilled hackers.
    • I would add: a common use of what purports to be child monitoring software is stalkers and abusers tracking their victims.
  • We need to worry, not just about journalists or governments, but everyone.
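For reference, the TOTP half of TPMTOTP (mentioned above) is bog-standard RFC 6238; what TPMTOTP adds is sealing the secret into the TPM against the measured boot state, so only an uncompromised boot chain can unseal it and show the right code. A minimal sketch of the code generation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # Plain RFC 6238 TOTP. In TPMTOTP the secret would be unsealed by
    # the TPM only if the boot measurements match a known-good state.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # throwaway demo secret
```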

Hacking HID iClass

William “Amm0nRa” Turner

  • HID make a lot of access control mechanisms for buildings and such.
  • Their older products have been broken for a long time.
  • iClass is their new generation. It’s a proprietary RFID protocol which is incompatible with standard readers, with a per-card derived key.
  • There are, unfortunately, many successful attacks. Heart of Darkness was a notable paper showing how you can get the global master key that is used in every reader worldwide.
  • The chip used has a code protection mechanism that will prevent the memory and firmware from being dumped, but unfortunately it has a partial-protect mode that allows this to be subverted by taking the memory out piece by piece.
  • HID have come after researchers with lawyers to try and suppress the release of the master key.
    • It’s not available online, although it has been disclosed in the past.
    • They’ve been amazingly successful - William noted the key has been available online in the past, but now appears to have been scrubbed from anywhere he can find it.
  • The Proxmark3 library has code that will allow you to work with HID cards if you have the right reader and key.
  • HID sell an Omnikey reader which will let you do this, as well.
  • Amm0nRa disclosed the key as “3F90EBD0910F7B6F”.
    • This key will not work with the “High security” readers, which are site-specific.
    • He reminded us that he’d quit his job, so there was no employer to threaten, and moved to China, and suggested that he felt the chances of HID winning an IP court battle in China were slim.
    • There was much encouragement for the audience to publish the key as widely as possible.
  • But the high security readers have known flaws which leak parts of the site-specific master key when you use your card. The Proxmark3 repo can assemble a full key over the course of a few minutes. So HID devices are fundamentally insecure.
  • He finished with a note that he was flying to China at the end of the day, and a live demo of a hack of the Christchurch Metro site.

Finding and Exploiting Access Control Vulnerabilities

Collin Mulliner

“Why do we have GUIs? Because normal people don’t like shells.”

  • Shatter attacks are an old attack on classic Windows environments, using window messages to smash out of security controls. Microsoft has largely mitigated this class of attack.
  • Collin is interested in alternative GUI attack possibilities.

What are UIs?

  • Widgets and windows.
  • Widgets have attributes: read-only, write-only, visible, invisible, blocking, non-blocking, etc.
  • Access control is a fundamental of security. You want to prevent unauthorised access to an application.
  • It can be tempting to implement access control in the GUI, for example, disabling buttons that the user isn’t supposed to access.
  • Widgets can be manipulated directly to bypass orthodox security controls.
  • You don’t need deep knowledge of the application to make these attacks work; you’re using built-in functionality of the application, in fact.
  • These ideas and attacks are common to every GUI.

Types of Attack

  • Unauthorised Callback Execution.
    • Hooks a callback to allow execution of disabled functionality.
  • Unauthorised Information Disclosure.
    • Get a handle on the root window, and make sub-windows (and the data therein) visible to unauthorised users.
    • gemtools_unhide.exe
  • Unauthorised Data Modification
    • Data that unprivileged users can see, but only authorised users can edit.
    • Modify the widget to be read-write.
    • Collin’s tool can walk the UI tree, identify read-only elements, and mark them read-write for you.
    • WinSpy++ gemcolours edition.

Technical Requirements

  • The attack tool and the target application must be executed by the same OS user.
  • Attack steps (sketched in the snippet after this list):
    • Discover UI elements
    • Obtain window HANDLE for widget
    • Manipulate widget
  • In Windows the Win32 APIs are simple and well-documented, which makes this easy to attack.
  • In Windows, everything is a window, and everything is based on Win32 primitives.
  • While .NET is managed code, it ultimately relies on Win32 APIs to implement drawing and input.
    • There are a few exceptions that require some special handlers.
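A minimal sketch of those three attack steps using Python’s ctypes against the Win32 API (my illustration - Notepad and its standard “Edit” control stand in for a real target, whose class names you’d pull from something like WinSpy++):

```python
import ctypes

user32 = ctypes.windll.user32  # Windows only

EM_SETREADONLY = 0x00CF  # standard edit-control message

# Steps 1 and 2: discover the widget and obtain its window handle.
# "Notepad"/"Edit" are placeholders for a real target's class names,
# and this assumes a Notepad window is actually open.
hwnd_main = user32.FindWindowW("Notepad", None)
hwnd_edit = user32.FindWindowExW(hwnd_main, None, "Edit", None)

# Step 3: manipulate the widget - re-enable it if it was greyed out,
# then flip a read-only edit control to read-write.
user32.EnableWindow(hwnd_edit, True)
user32.SendMessageW(hwnd_edit, EM_SETREADONLY, False, 0)
```

Because both processes run as the same user, none of this trips any OS-level access control.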

Fundamental Problems

  • False assumptions:
    • The GUI cannot be changed externally.
    • Widgets are protected.
    • Attackers must be sophisticated.
  • In fact, with these attacks, you just need to be able to use the application once the attack tool has unlocked it. Very little technical skill is required - certainly no reverse-engineering and so on.

The only disadvantage is that it can be a bit tedious.

The GEM Miner

The GEM miner automatically explores applications for their protected and unprotected elements, stepping through the application and recording the results.

The attacker can then look through the GEMs to see which ones are of interest to them.

As far as Collin knows, they are the first team to do this kind of analysis and attack in a systematic way against GUIs. The great advantage for the attacker is that it uses the ease-of-use of the GUI against the defender: other techniques, such as decompiling an application, require the attacker to become intimately familiar with the inner workings of the application before being able to attack and exfiltrate data. With the GUI, once the controls are subverted, the attacker has all the ease-of-use of a legitimate user.

Radiation-induced cryptographic failures and how to defend against them

Peter Gutmann

  • If you get faults during cryptographic operations, bad things happen (a toy example follows this list).
  • ECC is very susceptible to this sort of thing, and most faults will cause the private key to leak.
  • RSA is also theoretically vulnerable, but doesn’t usually leak keys.
  • AES and other symmetric systems don’t tend to suffer from this problem, either.
  • But faults in the random number generation can cause problems.
  • ECC and AES-GCM are very common and fashionable, but also the most fault-prone.
  • Glitches can come from a variety of sources, but processors mitigate some of them.
  • Research papers tend to focus on lab-induced faults which don’t mirror real-world faults.
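The textbook case of a single computational fault leaking a private key is the Bellcore attack on RSA-CRT: a glitch in one of the two half-exponentiations yields a signature that factors the modulus. A toy worked example (mine, not Peter’s):

```python
from math import gcd

# Toy RSA-CRT parameters; real keys are 2048+ bits.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)
m = 42                             # message representative

def crt_sign(sp: int, sq: int) -> int:
    # Garner recombination of the two half-signatures.
    return (sq + q * ((sp - sq) * pow(q, -1, p) % p)) % n

sp = pow(m, d % (p - 1), p)  # signature mod p
sq = pow(m, d % (q - 1), q)  # signature mod q

assert pow(crt_sign(sp, sq), e, n) == m  # a correct signature verifies

# A radiation-style glitch: one bit flips in the mod-p half...
s_faulty = crt_sign(sp ^ 1, sq)

# ...so the faulty signature is still right mod q but wrong mod p,
# and a single gcd recovers a prime factor of n.
print(gcd(pow(s_faulty, e, n) - m, n))  # -> 53, and the key leaks
```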

So why would you care about radiation-induced faults? Imagine you’re monitoring nuclear facilities as part of non-proliferation work: monitoring waste disposal, refueling, and so on. And you want security.

There are many other SCADA-type applications, some of which experience radiation faults, or faults from other sources (for example, electrical faults). Or perhaps you’re just operating at high altitudes, where radiation exposure increases: at 2,300 m radiation is 6 times as intense; at 30,000 feet, aviation computers receive 150 times as much radiation.

(Peter cautions that the IEEE Spectrum article is extremely inaccurate; beware of your sources on this topic.)

Things can fail in weird and wonderful ways:

  • On a PIII or K7, the fans fail before the CPU.
    • The PIII took 100x the radiation of the K7.
  • Bits can be stuck for seconds or minutes, or it can be a single, one-off event.
  • Intel and Power chips are heavily tested and hardened by way of comparison.
  • History lesson: lithium + protons = fusion bomb.
    • Which means over time your lithium batteries become a radiation source interfering with your computer.

Solutions

  • Lead? Not really practical. It’s heavy, it’s hard to shield everything, and lead doesn’t actually shield every radiation source.
  • Also, shielding doesn’t make the energy of particles disappear; for example, beta particles hitting lead turn into X rays.
  • And nothing stops neutrons, which “leave a trail of destruction in their wake”. Steel can slow them down, but it contains nickel, which the neutron bombardment converts to something which “isn’t steel”. The metal becomes brittle, and full of alpha-particle-emitting helium as the neutrons transmute the nickel.
  • But you end up needing lead as a backstop for things. Which means you’re dealing with weight and inconvenience, and oh, lots of lead (e.g. in solder) is recycled from shielding, and hence radioactive.
    • Whoops.

Radiation-hardened devices exist. Peter used the example of an op-amp which is good up to 100 krads, one hundred times the fatal level for humans. It’s expensive, fails to neutrons, and is export-controlled.

Another solution is to build fault-tolerant systems, e.g. processors acting in lockstep. But it’s expensive and not really practical.

Counter-intuitively, as processes shrink, ICs don’t become more vulnerable. The smaller volumes of material trap fewer particles, which counterbalances the smaller components’ sensitivity.


  1. A cynic would note that Google could fix that if it stopped, I dunno, taking money from people advertising dodgy driver sites, and blacklisted them instead. ↩︎
