Kiwicon 8 Day 1

The registration was pretty smooth considering the sheer number of people (over 1000!) attending. And thank you to the people who were circling the registration queue with food. And the merch was awesome: the designer outdid herself with the 80s design, and I got a goddamn audio cassette for my tag. Speakers got a betamax cassette, which is even better, of course.

It wasn’t really the first day, since I’d had a couple of training days beforehand, but this was the opening of the conference proper. It looked like it was going to be a little low-key compared to last year: last year they had a local metal band open proceedings, while this year it was metlstorm talking. But you can’t have a metal band every year.

Adversaries Real and Imagined - headhntr

…but you can have a DeLorean delivering the keynote speaker for his talk. About the most 80s thing you could do, right?

Morgan Marquis-Boire does infosec for a variety of public-good non-profits and journalism organisations, which seems like a pretty awesome way to use your hacker skills.

Unfortunately this meant that I ended up finding his talk pretty damn depressing all-in-all. The overarching tone was that everything’s fucked - the threats are numerous, and you can buy them for relatively little money. The key takeaway is that there is now an abundance of people selling what would once have been considered NSA-level infiltration capabilities to hideous governments, to assist them in doing terrible things to people. That’s their business model. It’s somewhere below the morality of the average arms dealer.

The good bit, I guess, is that people like Morgan are fighting the good fight, and there are countermeasures (up to a point).

Cisco ASA as a Super Mario Adventure - Alec Stuart-Muirk

Often the most entertaining and comprehensive hacks seem to be by people who start out by explaining, as Alec did, that they aren’t really security people.

So this one was more terrifying to network admins than to me: a detailed, step-by-step walkthrough of how to remotely exploit Cisco’s newish advanced firewall systems, use them to gain full access to an internal network, and do so in a way which is persistent and difficult, if not impossible, to detect.

If you have Cisco ASA firewalls, run and apply patches now, while you still can.

The reasons behind the exploits are more interesting than the individual exploits themselves; they paint Cisco in a terrible light on two fronts.

Firstly, it turns out that Cisco are terrible Linux admins (leave on ALL THE THINGS! like it’s, well, 1989), which is relevant given they’re shipping Linux systems as black boxes: they use an old (2.6.x) kernel with known exploits available. The default state of the ASA is to have many, many services running, even if you aren’t using them (Alec’s initial entry point for remote ownage was a web-based design tool for branding the UI with your corporate colours). This is the sort of stuff that, if you were interviewing a candidate, would see you cross them off your list.

Alec made the point - one I agree with, based on the shit experience I’ve had with “appliances” from a variety of companies - that you ought to be suspicious of anyone offering you black boxes. They’re all software somewhere, and if you can’t see it, that just means you don’t know how terrible a job they’ve done of managing it on your behalf.

More of a surprise is how dreadful Cisco turn out to be at network programming.

  • The ASAs are sold ostensibly understanding up to layer 7 of many protocols for better management.
  • Literally every protocol handler has CVEs against it.
  • Every time Cisco add a new one it gets at least a DoS CVE. (There are some proofs of concept for pivoting these into real exploits on other Cisco products.)
  • The ASA’s high availability protocols are unauthenticated and unencrypted. This is bad. Like, “will accept any packet claiming to be a management packet as valid” bad.
    • Some authentication is optionally available, but even if you enable it, the ASA will still accept unauthenticated packets.

All awful, awful stuff. I got to feel smug because I work in an ASA-free zone.

Asymmetric Defence - narc0sis & wepiv

An interesting talk, one where it felt like some of the conclusions were at odds with the major theme I took away from Kiwicon 8.

narc0sis and wepiv walked us through some basic threat analysis, grouping attackers not by the trendy title of the day, but in broad economic terms:

  • Resource poor attackers who spend nothing on attacks. These attackers are the zero dollar hackers, typically using a collection of whatever they can download for free: free versions of commercialised pen testing toolkits, metasploit and whatever modules they can find.
  • Cheap attackers who are spending “$2500 per year” on their attacks. For this they can access surprisingly sophisticated toolkits out of the box.
  • De luxe attackers, probably professional criminals who are making a living out of hacking, who spend $100,000+ per annum and are running or renting bot nets, paying for sophisticated command and control infrastructure, and getting 0-days from the bad guys.

We were asked to think about our counters:

  • The $0 opponents need only simple technical counters: keep everything patched properly, have firewalls and AV protection. That’s great, but it still runs a medium-sized business a good $100,000+ per year for tools and people to run them.
  • The $2,500 opponents require a lot more work: more security specialists, better AV, intrusion detection systems, and so on. You’re pushing into $500k to $1 million budgets at this point.
  • The $100,000 opponent requires a huge amount of money to counter. Now you need a proper security team, architects, WAFs, rolling penetration testing, and so on and so forth. The sky is the limit.

The point being that when you engage in asymmetric warfare you’re on a hiding to nothing. Attackers can spend far less than countering them costs you (in direct security spending, plus the economic damage if the attacker ever wins). And attackers have access to disturbingly sophisticated tools for relatively little money. Doing more of what we do today - more security people, more products, and so on - is not helping.
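
The asymmetry is easy to put numbers on. The attacker tiers come from the talk; the defender figures below are the lower bounds quoted above, and flooring the $0 attacker at $1 is my own back-of-the-envelope assumption:

```python
# Rough defence-to-attack spend ratios using the talk's figures.
# Defender numbers are the lower bounds of each quoted budget range;
# the $0 attacker is floored at $1 to keep the division meaningful.
tiers = [
    ("resource poor", 0, 100_000),
    ("cheap", 2_500, 500_000),
    ("de luxe", 100_000, 1_000_000),
]

for name, attack_spend, defence_spend in tiers:
    ratio = defence_spend / max(attack_spend, 1)
    print(f"{name}: defender outspends attacker {ratio:,.0f}x")
```

Even in the best case the defender is outspending the attacker ten to one, which is the whole point of the talk.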

narc0sis and wepiv asked us to consider alternatives: asymmetric defense. While the attackers’ tools can be quite sophisticated, the attackers are generally very heavily reliant on automation for success: the $0 attacker probably can’t devise new attacks if the out-of-the-box ones fail, for example, while the cash-rich attackers probably need payback to keep their criminal business model viable.

Therefore anything that derails automation is going to make it too costly to keep attacking you: single sign-on was pointed out as one example; if you don’t allow single sign-on when crossing security domains, you’ve probably made automated mass attacks more expensive.

It was an interesting presentation, but one takeaway was that the speakers were essentially suggesting what, to most people inside a business, will feel like an arbitrary increase in day-to-day difficulty for security’s sake - and about half the talks I attended argued (from the agile/sec ops/call it what you will point of view) that this model is actually fundamentally broken, that your own people will treat you as a problem and route around you. Thinking about this talk in relation to those by ladynerd, hypatia, #octothorpe, and others, it underscored a really stark divide in defensive philosophies.

Seeing Blu - Matthew Garrett

Matthew’s a very, very good public speaker, as well as being very technical.

The upshot of the talk was: Blu-ray players are absurdly insecure. Think old kernels. Think running code off USB keys if it has the right name. This allowed Matthew, with a bit of effort and the use of such sophisticated tools as strings, to change the player’s region coding to one more amenable to playing the DVD he owned.
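
The strings trick is about as unsophisticated as binary analysis gets: pull every run of printable characters out of the firmware and grep the result. A minimal Python equivalent (the firmware.bin filename is just an illustration):

```python
import re

def strings(blob: bytes, min_len: int = 4) -> list[str]:
    """Pull printable-ASCII runs out of a binary, much like the Unix strings tool."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

# e.g. hunting a dumped firmware image for region-coding hints:
# blob = open("firmware.bin", "rb").read()
# hits = [s for s in strings(blob) if "region" in s.lower()]
```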

The point was made a number of times that the company responsible for the hopelessly shitty, insecure firmware in the majority of Blu-ray players on the market is MediaTek, who are starting to sell Android phones. This seems likely to have obvious consequences.

As an aside, Matthew discussed the latest “content protection” scheme baked into Blu-ray players (on pain of not being given certification); the scheme involves a low-bitrate corruption of the audio channel of movies, whether played at the cinema or from the discs themselves. In the case of the MediaTek players this is enabled by a kernel module and is fairly trivial to disable via LD_PRELOAD tricks, but your expensively purchased, allegedly pristine Blu-ray 5.1 and 7.1 discs (along with your expensive cinema tickets) still carry deliberately corrupted data.

OneRNG - Jim Cheetham & Paul Campbell

  • OneRNG is a $50 device that provides crypto-quality hardware randomness.
  • Based around an avalanche diode as the source of entropy.
  • Hardware and software are intended to be verifiable; they have some interesting mechanisms to try and limit the trust problem.
  • The authors want to “poke their stuff and find mistakes” as it were.
  • They’re running a Kickstarter and are supportive of other hardware randomness generators.

There’s just 6K of software; there’s 256K of space in the ROM. Please customise!
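
One easy sanity check on any hardware RNG is to estimate the Shannon entropy of its output; good random data should come in close to 8 bits per byte. A quick sketch - the /dev/ttyACM0 path is my assumption about how a USB serial device like the OneRNG might enumerate, and will vary by system:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; close to 8.0 for good random data."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical usage against the device (path is an assumption):
# with open("/dev/ttyACM0", "rb") as dev:
#     sample = dev.read(65536)
# print(shannon_entropy(sample))
```

This only catches gross failures (a stuck diode, a constant stream); it is no substitute for proper randomness test suites.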

Eradicating the Human Problem - Laura ‘ladynerd’ Bell

I did Laura’s “Bootstrapping Agile Security” course on the Wednesday. I highly recommend it to anyone who is struggling with “rapid deployment” and “security”. This talk wasn’t directly about that problem, but it was certainly about the same root issue: how we deal with security is limited by how we humans deal with it.

This talk won the conference overall prize, so it’s unlikely my notes will do it full justice; Laura is a good public speaker as well as a top-notch security person.

Some quotes and points from her explanation of the problem:

  • “We have a pink, squishy human problem.”
  • “Let’s stop people.”
  • “I want to make you feel uncomfortable.”
  • Technology is mature, well-understood, comfortable for us. We are failing at people.
  • We don’t consider human risk.
  • We do stupid non-things so we look like we’re doing security.

Worst of all we shame the people who are the victims of human security attacks.

Shame is not how people learn.

(Even though we, security types and techies in general, do exactly the same things.)

Reinforcing behaviour is how people learn: people need to learn by failing safely in a supportive environment.

None of this is news to anyone involved in teaching, education, training, or psychology. There’s no shortage of research to back these assertions up.

“People” are not lazy and stupid and unteachable.


Ava is Laura’s big reveal.

Ava is a first-generation proof-of-concept 3-phase automated human vulnerability scanner.

Laura asks if anyone remembers when Qualys started undercutting manual functional testing by giving us the tools to automate testing. It was, she says, not very good, but the discipline evolved. Ava is “Qualys for humans.” It’s about depersonalising the human factor, abstracting it away so we can deal with how the system made up of squishy pink things works.

  • Organisations are not hierarchical, no matter what the org chart says.
  • Interpersonal relationships are a complex, easy-to-exploit network. Different people are spreaders of information, and relationships drive who will be considered a trusted source. A malware-loaded PDF from one person might be rejected, while one from another source would be accepted on trust.
  • Ava scans AD and Twitter (and eventually other things) and creates entity nodes. Laura acknowledges that reaching into outside-work activities has a creepy factor.
  • Each node has a rich set of attributes based on the information discovered.
  • Ava works with threat injection: email, tweets, SMS, QR codes, USB. Deliver the threat, understand the spread; we’re doing security testing focused on the human factor. Who opens what? Who’s a trusted source?
  • There are over 1,000 possible permutations of email attack, even without counting phishing attacks.
  • Cascades are interesting - forwarded email, retweets, and so on. Ava will visualise the cascade of risk.
  • Allows you to measure the impact of training over time by watching patterns - this is a huge one, incidentally. Very little training actually measures impacts. This is an area where Maire could have a good old rant with Laura. It’s an industry problem.
  • Predictive risk behaviour could become a thing. Then you can plan specific defences for known weak spots.
  • But it’s not about firing people.
  • Help manage people’s access - who should be able to see what?
  • Pivoting though people. Understanding an attacker.
  • Based around Django, PostgreSQL, Celery, Redis, Bootstrap. (“Don’t try to learn a new language while trying to develop a new tool.” Glad to see even very smart people suffer from yak shaving.)
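
The cascade/trust-graph idea is easy to sketch. This toy model is mine, not Ava’s code - the names and trust scores are invented - but it shows the shape of the thing: a simulated threat spreads from a seed to anyone who trusts the sender above some threshold.

```python
from collections import deque

# Toy trust graph: trust[sender][receiver] is how much the receiver
# trusts material coming from the sender. All values are invented.
trust = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob": {"dave": 0.8},
    "carol": {"dave": 0.2},
    "dave": {},
}

def cascade(seed: str, threshold: float = 0.5) -> list[str]:
    """Breadth-first spread of a simulated threat through the trust graph."""
    reached, queue = {seed}, deque([seed])
    while queue:
        sender = queue.popleft()
        for receiver, score in trust[sender].items():
            if score >= threshold and receiver not in reached:
                reached.add(receiver)
                queue.append(receiver)
    return sorted(reached)

print(cascade("alice"))  # -> ['alice', 'bob', 'dave']; carol's 0.4 is below threshold
```

The real system layers threat injection, node attributes, and visualisation on top, but the underlying “who would pass this on?” question is just graph traversal.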

To break out for a moment: the intent of Ava is laudable, and the technology Laura’s put together is awesome, even more so considering it’s been written in “one hour a day” around a two year old’s nap times (personally, I sleep when my two year old is sleeping, which I guess is one reason Laura is on stage and I’m in the audience…).


I’m really uncomfortable with a couple of things here; one is the idea that my employer should be sniffing around my outside-work activities, no matter what the security benefits are. Yes, people I work with can (and do) read my blog or look at my twitter feed. Quantity, though - the automation and mass analysis of behaviour - can change the quality of those interactions.

The other is that Laura intends that quality to be more abstract: to force people to think in terms of human behaviour at a systemic level, so we think about the human security factor in the same terms. I’m sure a well-run organisation could use it that way. The problem is that, per the next talk, there’s no shortage of arseholes, and this all too easily turns into making examples of people (all too common) and the general feudalisation of the employer-employee relationship.

(It’s not like she hasn’t thought about this - she was pretty much solidly engaged with people after her talk bringing up exactly these sorts of points.)

I hope Laura’s vision takes more hold than my rather mopey one.

Crafting An Effective Security Organisation - Rich Smith

Rich is the head of security for Etsy. His core message was, in a different but equally emphatic fashion, the same as Laura’s: in security, people matter.

  • 600+ employees, $1.35 billion in sales.
  • 200 countries performing transactions.
  • Offices in 8 countries.
  • Etsy’s success is built on its engineering culture:
    • Empower the edges (e.g. programmers > managers).
    • Trust but verify.
    • JFDI.
    • “If it moves, graph it.” Easy visualisation makes problems shallow.
    • Everyone can push to production anywhere.
    • 40 - 50 pushes per day, with multiple changes per push. Hundreds of changes per day.
  • Why do this?
    • Fail early lets you solve more problems quicker and cheaper.
    • Everything is easily A/B tested.
    • Improves mean time to recovery.
    • “We basically use our production environment to test.”
    • Yes, there are failures - but small failures in a stream of many small changes are easier to find and fix than a few big, chunky changes.
    • Essentially this will be familiar to anyone working from XP, Agile, or similar systems.
  • How do you “security” this?
    • Stopping people doesn’t work. People flow around obstructions like water.
    • Apply “if it moves graph it” to security.
    • Make security data available to everybody - make it everybody’s business.
    • Security engineers are developers - there’s no separation. Security fix things, developers fix things.

Principles of Effective Security

  • Enabling - a good security team should be measured by how many things they enable, not how many things they stop. Be supportive and security-driven.
  • Transparency - the security team should be open about what they’re doing and why they’re doing it. Hiding “sooper s3kre3ts” helps no-one.
  • Blameless - security failures will happen. Crucifixions are not helpful - refer to “no-one learns from shaming” earlier.


  • Security faces the same challenges as dev and ops, so draw lessons from the devops manifesto on how to get things done.
  • Security shouldn’t be a blocker. Don’t be lazy arseholes. “Sorry if you identify with that.”
  • No should be a last resort. Then people will actually listen when you do say no.
  • Help people implement their awesome ideas. Then people will want to talk to you.
  • Build trust and relationships. Have designated hackers who you represent and who represent you. Translate from one language to another. Become an SME.
  • “You only become a blocker if you’re the last to know.”

Progressive Security Culture

  • Understand that security is a people problem as well as a technology problem.
  • This is poorly understood. If anything, security people and security conferences tend to look down their noses.
  • The industry has done a bad job of engaging and helping people.


  • Don’t hire arseholes. Even if they’re rockstars.
  • If you hire one, fire one.
  • Abrasive individuals will destroy your brand.
  • The more diverse the team, the more approachable it is.

I’d add two observations: “don’t hire arseholes” is complicated by the fact that the worst (i.e. most destructive) arseholes tend to be really good at interviewing well and managing up.

The second is that there are technical security types, governance security types, and the IT equivalent of the American “mall cop” stereotype. And just as the mall cop is the guy who really, really wanted to be a cop or special-forces type and works out his frustration at that failure on kids at the mall, the security equivalent is interested in using a position for snooping and petty tyranny.


This isn’t the same thing as education. It’s about building relationships - movie nights (Hackers!), low pressure environments, “managing by walking around”.

It’s critical to actually get people talking to one another. Have a coffee, have a beer.

Wrapping Up

At this point I needed to noodle off and do daddy things, which meant I missed the apparently extremely poor-taste presentation that got a couple of presenters kicked out for various breaches of the code of conduct. Unfortunately it also meant I missed the apparently awe-inspiring talk on how to own GSM devices like, say, home detention bracelets and remote-controlled skateboards. And even worse, the dramatic reading contest, which I totally would have entered if I’d been able to hang around.

On the other hand, everyone at home loved my slow cooked pork and apple cider.