# B-Sides Wellington Day 2

Another sunny day in Wellington for B-Sides. Not that there is anything exceptional about this; the weather is always like this, why do you ask?

## Influencing Meat Puppets through Memes

Simon “bogan” Howard

There are some Kiwicon traditions that have been carried across to B-Sides, and splendid headwear is one of those things: Simon prowls the stage in a blinged-out admiral’s hat whose magnificence cannot be described with mere words. In lieu of flame pots, cardboard flames on sticks are waved as he walks on stage.

bogan begins by explaining how much he loves his Datsun 120Y; apparently everyone else does, too, at least judging by the way people leave love notes on it, or stop him in the street to tell him about their feelings for his car.

bogan wants to talk about elections: while his political engagement when younger was (allegedly) limited to pranks with billboards, his interest in modern politics has been piqued by the move to influence elections online.

We had an election recently: there has been advertising in newspapers, on TV, and in your letterbox - but none of this is really relevant to bogan’s life, nor to much of the audience; mailers go in the rubbish, and who watches terrestrial TV? And while there was online advertising, it doesn’t appear to have been heavily targeted. Comment volumes on Stuff increased, and there was a touch of Facebook advertising.

David Hood did some social media analysis to understand whether the kinds of concerns we’ve seen overseas seemed to be relevant to us; he looked at a number of factors, particularly time of day and sentiment. All of this was pretty low-volume, happened in New Zealand office hours, and link sources were mostly in New Zealand - pointing to local news sources rather than, say, Russia Today. It all suggested relatively little, if any, external interference.
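The time-of-day part of that kind of analysis is easy to sketch. Here is a minimal Python version; the timestamps are invented, and a fixed UTC+13 offset stands in for New Zealand time (ignoring DST):

```python
from collections import Counter
from datetime import datetime, timezone, timedelta

# A fixed offset standing in for New Zealand time (ignores DST transitions).
NZ = timezone(timedelta(hours=13))

# Invented post timestamps; a real analysis would pull these from an
# API export of the accounts under study.
posts_utc = [
    datetime(2017, 9, 18, 21, 30, tzinfo=timezone.utc),
    datetime(2017, 9, 18, 22, 5, tzinfo=timezone.utc),
    datetime(2017, 9, 19, 3, 40, tzinfo=timezone.utc),
]

def hour_histogram(posts, tz):
    """Count posts per local hour; activity clustered in one country's
    office hours is a hint about where the posters really sit."""
    return Counter(p.astimezone(tz).hour for p in posts)

hist = hour_histogram(posts_utc, NZ)
office_share = sum(n for h, n in hist.items() if 9 <= h <= 17) / len(posts_utc)
```

Run against "David"-style accounts, the same histogram in Moscow time rather than NZ time is what gave the game away.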

Contrast with the US:

• Clinton spent $140 million, while Trump was mocked for an online-only strategy, ignoring traditional TV channels.
• In fact, Trump was generating net revenue from things like Facebook TV, with the donations exceeding the cost of operations.
• There was a whole meme army, running memes and ads that would, bogan notes, in many cases be a breach of the conference CoC to show: explicitly racist violence, people being blown up, for example.
• People started running analysis on the influencers on social media: one example account, “David”, had been active since 2002, mostly UK based. Which was odd, because while “David” claimed to be a Brit, he was active almost exclusively during Russian office hours, and “David’s” topics were Ukraine, Trump, UKIP, Brexit, and migrants.
• The favourite topics shifted over time.
• bogan showed us a picture of the “Troll Factory” building in Russia, where many of these sorts of accounts and campaigns are believed to be orchestrated from:
• 90 staff working on the Trump campaign.
• Only a $400k budget, but over 100 million page views per week as a result.

bogan thought through what would need to be done to make this work in a New Zealand context - how hard would it be? What would it cost? What tools would you need?

• The Russians started campaigns at least 18 months out from whichever election or vote they were trying to influence:
• Create a team.
• Run sock puppets to build a network and credibility, dipping toes in the water.
• Analyse effectiveness, and tighten up the messaging as the election nears.
• bogan has an actual Russian troll army job advert, pulled from a Russian job site.
• The right people: linguists, psychologists, artists and designers, an ops team, machine learning experts, and hackers (because everyone needs hackers).
• The US military central command created the JP13 paper, by the way, that describes how to do this. It’s fantastically well-described.

bogan has created an org chart, of course. All the staffing can be outsourced in this modern day and age: pay in crypto, conduct online interviews, and keep it cheap.

bogan recommends fakenamegenerator.com as the one-stop-shop for creating fake profiles: it will create a tolerable history for a name, age, and work history.

• Don’t use default dates like 1/1/1970 though, the social networks are onto that.
• Similarly your fake profile pictures should be manipulated a little so a reverse image search won’t simply find the source. Note that cropping and mirroring the photo is usually sophisticated enough to defeat Facebook’s smarts.
• Also make sure you have addresses that actually exist - 123 Fake Street, for example, will tend to trigger suspicion.

Other considerations:

• Proxies, proxies, proxies. You want it to look like you’re coming from the place your fake ID says it’s coming from.
• Also don’t run hundreds of fake IDs out of your normal ISP, that never ends well either.
• Burner phones for 2FA. You want 2FA because it dramatically improves the credibility of your fake accounts with social media sites.
• You can get Twilio for SMS 2FA, for example; unfortunately Twilio doesn’t have a point of presence in NZ; luckily bogan has an answer for that, too.
• The E1552 USB GSM modem can be had for five bucks a pop (off TradeMe).
• bogan found someone online who is selling voda SIMs for fifty cents a pop (off TradeMe).
• You could use socat and bang on the serial port with AT commands, or you can use SMS Server Tools, which gives a nice framework for sending and receiving messages and makes it easy to automate handling 2FA challenges.
• Base accounts: outlook.com, Facebook, and so on.
• Use the mobile versions of sites; they’re easier to scrape and automate than the regular full site. Greasemonkey or Selenium are your friends.
• Social media management is a pain in the arse to do by hand.
• There are well-known, good tools to do this. Many of them, though, are likely to help law enforcement or intelligence agencies with their enquiries.
• MonsterSocial, on the other hand, is an excellent bot management platform which is “really neat” and “much less likely to help law enforcement.”
• Eventually your accounts will get blocked: you’ll need to burn the assets and set up new numbers, proxies, and so on.
• Once you’ve got your bots online, you’ll want them to make some new friends; bogan likes to use OCEAN personality tests to match bots to potential buddies.
• Yes, bogan notes, OCEAN is rubbish and only covers about 56% of the population, but if it’s good enough for Cambridge Analytica, it’s good enough for bogan.
• If you don’t know who Cambridge Analytica are, you should - and you should be disturbed by them.
• Involved in many major elections and campaigns such as UKIP.
• Heavily linked to Breitbart and other far-right groups in Britain and the US, as well as indirect links with Russia.
• Cambridge Analytica has 3,000 to 5,000 data points on every American.
• Obtained by monitoring social media; orgs like Facebook are only too happy to sell them your data.
• A lot of the targeted info for these comes from those “what kind of cat are you” type surveys, many of which are run by Cambridge Analytica.
• Assisted with Trump and Brexit campaigns.
• Currently under investigation in the US.
• They have the University of Cambridge API to pull down details on your targets to provide profiling data for your sock puppet army.
• Getting social media data in NZ is hard. While Facebook claims 2.9 million users, Twitter pulls only 500,000, and there are Stuff comments.
• While people “waste an hour of their lives a day” on Facebook, engagement with social media is probably shallower than in most countries.
• The Parliament site (parliament.govt.nz) has helpful per-electorate demographics, which is nice for tailoring your campaign to swing individual electorates.
• The sockpuppet army is disturbingly easy to create: Facebook’s suggest a friend feature gets massive uptake with very little time or effort.
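The 2FA-automation step described above (SMS Server Tools handing your script the received message) boils down to pulling the code out of a message body. A minimal sketch; the regex and sample message are invented:

```python
import re

# Pull a numeric 2FA code out of an incoming SMS body. With SMS Server
# Tools, received messages land as text files in a spool directory; here
# the body is just a string for illustration.
CODE_RE = re.compile(r"\b(\d{4,8})\b")

def extract_2fa_code(sms_body):
    """Return the first 4-8 digit run in the message, or None."""
    m = CODE_RE.search(sms_body)
    return m.group(1) if m else None

code = extract_2fa_code("G-482913 is your verification code.")
```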

• People blindly accept friend requests.
• If you can get accepted by a bot network, your numbers will go through the roof.
• bogan demonstrated that, once the initial few bites are made on the bait, a bot will quickly hit the 5,000 friend limit, and the memes will begin to be liked and shared.
• Content creation should be tailored to the OCEAN profile of your targets.
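Matching bots to buddies by OCEAN profile, as bogan describes, could be as simple as nearest-neighbour over five trait scores. A hedged sketch; all the trait values are invented:

```python
import math

# Pick the target whose Big Five (OCEAN) trait vector is most similar to
# the bot's persona, using cosine similarity. Scores here are made up.
TRAITS = "OCEAN"

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def vec(profile):
    return [profile[t] for t in TRAITS]

bot = {"O": 0.8, "C": 0.3, "E": 0.7, "A": 0.2, "N": 0.9}
targets = {
    "alice": {"O": 0.7, "C": 0.4, "E": 0.8, "A": 0.3, "N": 0.8},
    "bob":   {"O": 0.1, "C": 0.9, "E": 0.2, "A": 0.9, "N": 0.1},
}

best = max(targets, key=lambda name: cosine(vec(bot), vec(targets[name])))
```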

• Don’t leave a financial trail: use free campaigns for AdWords, for example.
• Direct marketing via content posts is gold.
• If you’re going to run a Māori-bashing campaign for one group of voters, make sure Māori voters never see it.
• Ditto gay rights, immigrants, and so on.
• Facebook makes this trivial despite it being of dubious legality in many countries, and in spite of their assurances they’d fix it.
• You want to perform sentiment analysis on the responses to understand how to tune them over time.
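That sentiment loop might start as crude lexicon scoring before graduating to a proper NLP library. A sketch; the word lists and responses are invented:

```python
# Crude lexicon-based sentiment scoring: positive words minus negative
# words. A real campaign would use proper NLP; this is just the idea.
POSITIVE = {"great", "love", "agree", "true"}
NEGATIVE = {"fake", "hate", "disagree", "rubbish"}

def sentiment(text):
    """Score a response: > 0 leans positive, < 0 leans negative."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

responses = ["Love this, so true!", "fake news, hate it"]
scores = [sentiment(r) for r in responses]
```

Aggregated over a campaign, scores like these tell you which memes to double down on.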

### Countermeasures

bogan notes that some people advocate taking the high road of education; he thinks this is unlikely to succeed (causing him to cry “rainbow tears of laughter”). Moreover, it’s unlikely electoral commissions and the like can keep up with emerging technology, so it seems unlikely that closer government supervision will solve the problem.

Consider, though: Facebook makes over 40 billion dollars in advertising. They’ve been minting money without paying for the externalities; their business model is predicated on automating everything, and letting society deal with any of the shit that fails, opening them up to being gamed in the ways Simon has described.

Maybe they should be making less money and forced to do more human curation. Make them wear some of the costs and consequences of their actions.

Who Targets Me? collects your advertising profile and tells you who’s targeting you, trying to inject some transparency into the process. The German Marshall Fund (GMF) are monitoring troll armies, but struggling to get on top of this. Simon notes that the Germans generally seem pretty clued up about these dangers to democracy.

The Facebook Ad Police: Facebook has hired 250 people to identify troll factory information and the people who’ve liked it. It’s a small step, but hopefully it grows.

Twitter: “doing a great job as always” he says sarcastically.

“We need to force these big companies to do better, even if they make less money.”

## Understand & Combat Ransomware using Cyber Kill Chain

Amanda Berlin

The “cyber kill chain” is a term invented by Lockheed Martin, and describes 7 steps to counter attackers during an in-flight incident. Amanda notes that for many people it’s not really clear how this maps from a conceptual framework into a practical set of things to do.

The intrusion (or cyber) kill chain describes seven phases:

1. Recon - the attacker is looking for info.
2. Weaponisation; prepare an attack based on this info.
3. Delivery of the weapon for the tailored attack; email, USB drives, whatever.
4. Exploitation.
5. Installation - maintain persistence in the environment.
6. Command and control - link back to a common command and control network, rather than leaving things to be autonomous endpoints.
7. Actions on objective: ransomware encrypting files, data exfiltration, or whatever the end game is.
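For reference, the seven phases can be written down as a simple lookup from phase to an example countermeasure, drawing on the measures Amanda covers in the rest of the talk; a sketch, not a canonical mapping:

```python
# The seven kill-chain phases mapped to one example mitigation each.
# The mitigation wording paraphrases the measures discussed in the talk.
KILL_CHAIN = {
    "recon": "limit personal use of corporate assets; honeypots for threat intel",
    "weaponisation": "have your response strategy and policies ready",
    "delivery": "asset management; disable risky features fleet-wide",
    "exploitation": "endpoint protection; sensible GPOs for macros",
    "installation": "tested backups and a proper DR plan",
    "command and control": "DNS sinkholes feeding firewall policy",
    "actions on objective": "honey directories and file-access auditing",
}

def mitigations_for(phase):
    """Case-insensitive lookup; unknown phases get a placeholder."""
    return KILL_CHAIN.get(phase.lower(), "unmapped phase")
```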

Amanda has a giant spreadsheet that she opens to comments; she notes that some attacks are pretty hard to fit into this model.

(She also refuses to make the talk a read-through of the spreadsheet, to the profound relief of all those present.)

Microsoft have a little “attack kill chain” loop to describe the move from initial steps to gaining domain admin, then using domain admin to do interesting things, as a specific example. This is in part a response to the problems of mapping that high-level model onto specific actions and counter-measures. It’s a good idea.

Having set the scene, Amanda wants to use ransomware to give us a concrete example of measures and counter-measures.

### Recon

A lot of people want to skip the recon, because they think there are no counter-measures; Amanda notes that counters to recon exist; one is to use the time your adversary is doing recon productively: make sure things like your response strategy, your policies, and so on are all in place. In addition, there are specific measures that will make recon harder for attackers: limit re-use of corporate assets, like email or devices, for personal use; be clear about what sort of info can be shared on LinkedIn and so on.

Similarly, keep eyes on the outside world: when, for example, there’s a wide breach of something like LinkedIn or Uber, check whether your corporate emails have been used. Consider forcing password resets in case people have been using common passwords everywhere.

Build honeypots - you can use these for threat intel. If your honeypots are being triggered, well, you know there’s a problem there and then, right?

Use specific alerting: categorise who might be a high-profile target. Amanda offers the example of theHarvester, which can scrape the ’net looking for domains and addresses out there on the Internet. If you can find high value targets within your organisation easily, so can your attackers.

### Delivery

Defensive mitigation of ransomware payloads is hard; a good starting point is asset management. It’s hard to defend if you don’t know what you own and use.

You also need to get a good understanding of where you can switch things off across the enterprise: things like screensavers, powershell, visual basic, and so on, will take out a lot of risk - but you need to be conscious of users who have a legit reason to use high-risk files: your finance team probably need Excel macros, and your HR team need to be able to read documents from untrusted sources (they’re called “people you’re trying to hire”). You need to permit that, but only for targeted users, and with lots of education and support to help those people make good decisions.

DISM is a Microsoft tool which will allow you to roll policies across your fleet to change e.g. the action that Windows will take with a file type; mapping dangerous files to harmless actions as part of your group policy will make your life a lot easier.

(Amanda recommends never using the default domain groups in a fresh AD setup! Create new domain groups rather than modify the defaults.)

You also need to make it easy for users to report on what they’re seeing. If they can tell a security person that something is off, you can avoid a lot of incidents; if you’re hard to contact, they won’t bother.

Ad-blocking can be valuable, but also hard to implement.

### Exploitation

So someone has successfully downloaded the bad thing; run a macro, plugged in a USB stick, what have you. Hopefully you have some sort of endpoint protection that notices PowerShell shouldn’t be launching from Word documents.

You need to make sensible GPOs to block well-known bad behaviours like macros on Excel. But you also need to make sure that the folks in accounting who need macros to do their job, can. With that, you need to make sure they have the training and support to understand how to do that safely.

Monitoring is another key strategy: force everything through proxies and check for known malware URLs that the payload might try to reach, for example.

### Installation

The payload has now installed successfully. Defensive mitigations: keep backups! Test your restores! Have a proper DR plan! If ransomware runs and you can safely trash a system and restore from backup - you’ve won.

Also, consider that sometimes you may just have to pay the ransom. Be clear about when you would need to - but have it in your IR plan as a last resort. And have an IR plan. If you don’t have one, have a relationship with an outfit that does. In one sense you’ve lost, sure - but if you’re still in business at the end of the attack, well, it could be a lot worse.

### Command and Control

In order to stop the installed payload actually reaching out to get instructions to run an attack, you can cut it off from the C2 network.

Consider using DNS sinkholes: stop DNS lookups to known bad actors, and use sinkhole lists to inform your firewall policies. You can augment that by encouraging your users to submit suspicious messages, for example via phishing-reporting competitions. If you are going to block suspicious activities, make sure your list of suspect protocols, IP addresses and such is sensible and well-curated.
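A DNS sinkhole is conceptually tiny: answer blocklisted names with a sinkhole address, pass everything else upstream. A sketch; the domain names and addresses are invented:

```python
# Minimal DNS sinkhole lookup. In production this logic lives in your
# resolver (BIND RPZ, Pi-hole, etc.); this just shows the decision.
SINKHOLE_IP = "0.0.0.0"
BLOCKLIST = {"evil-c2.example", "malware-drop.example"}

def resolve(name, upstream):
    """Return the sinkhole address for known-bad names, else ask upstream."""
    if name.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP
    return upstream(name)

blocked = resolve("EVIL-C2.example.", lambda n: "203.0.113.7")
allowed = resolve("news.example", lambda n: "203.0.113.7")
```

Queries that hit the sinkhole are also exactly the alerts you want to feed your firewall policy.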

### Actions & Objectives

So the ransomware is running: consider using honey directories. Ransomware will walk your file tree, typically starting with whichever directory sorts first. If you use a honeypot directory (honeydirectory.ps1 for example) with a recursive structure, the ransomware will spend a lot of time encrypting a structure that doesn’t matter. Use the advanced audit policy config options of Windows to look at object accesses into your honeypot directory - it will give you a heads up that bad things are in the process of happening.
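The honey-directory trick can be sketched in a few lines. This Python version just shows the shape of it (the talk mentions a PowerShell script): a decoy tree whose name sorts before real data, deep enough to waste the walker's time:

```python
import os
import tempfile

# Build a decoy directory tree. The name starts with "!" so it sorts
# before alphanumeric names; the depth is kept small for the example,
# a real decoy would be much deeper.
root = tempfile.mkdtemp()
honey = os.path.join(root, "!honey")

path = honey
for _ in range(10):
    path = os.path.join(path, "decoy")
os.makedirs(path)

# Anything that enumerates `root` in sorted order hits the decoy first.
first = sorted(os.listdir(root))[0]
```

Pair it with object-access auditing on the decoy so the first touch raises an alert.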

Hacker Hurricane gives you a bunch of cheat sheets on what sort of advanced detection you can perform on Windows platforms.

Even something as simple as a big uptick in file accesses is useful.

### Testing & Proof of Concept

Don’t just switch things on and assume it’s working. Test it. If you have pen test or red team engagements anyway, make sure your measures and countermeasures are getting tested.

Do tabletops - go through your runbook in a roleplay exercise with a good moderator to test how well you understand what you’ll need to do. Amanda has participated in these drills - and one company was hit within a few weeks of the exercise. It happens!

## Project Walrus

Daniel Underhay

An Android app for card cloning; Daniel is working with Matthew Daley on this project. It comes out of his work red teaming, in particular physical attacks where bypassing physical controls is the objective. As such, Walrus is focused on cracking access control systems.

Some cards are easy to clone; others are hard. There are two main operating frequencies. Daniel notes that HID Prox cards are a “bad sign”. There are a ton of card cloning devices available:

• The Proxmark 3 is an industry standard tool.
• The Chameleon Mini only does high-frequency cards, but stores up to 8 of them per device.
• The Tastic RFID thief takes this a step further, and will copy the data onto an SD card, and has a range of 0.5 metres.
• BLEKey is for mounting behind the reader, and helps crack more complex cards that include handshaking as part of the access protocol.
• The MagSpoof emulates magstripe cards.

Difficulties: there are no common tools and controls; every device is a bit different, with different levels of reliability. Many of the devices are pretty unreliable, frankly, and they’re very user-unfriendly. Daniel wants to get to the point where he can use an app for that.

Enter Walrus: easy scanning of cards, along with a database that allows for selecting from a large number of stored cards. A quick PoC demonstrated the principle was sound, and then things took off from there. They started wireframing out a proper application, putting the app onto GitHub. It has drivers for a number of devices, complete with per-device tunables. It supports USB and Bluetooth connectivity.

Disclaimer: bad actors ahead. Only clone cards you’ve been given permission to clone.

Daniel has a short movie: The cloning kit can fit in a small hipster messenger bag, and Daniel is able to demonstrate a literal walk-by attack, reading a card from a victim at normal walking pace as the pair go in opposite directions through a hallway door. The app shows the new card, adds it to the library, and Daniel replays it to access a secure area.

Pretty impressive.

Lauren Flutey

This talk is based on Lauren’s honours research.

People tend to hold firm perceptions about what a “hacker” might be. Lauren has two key research questions: what stereotypes do the general public have of hackers; and what are the social, governmental, and business implications of these perceptions?

Note that Hackers was part of the literature review! The research also drew on a set of interviews that provide some detailed qualitative information: one security professional, the rest a broad range of professions and backgrounds.

### So what were the findings?

1. Awareness: Everyone was confused and didn’t know what CERT does.
2. Technical understanding: most people understand piracy (9/10). None of them knew about VPNs, either. A big disconnect between their personal security and the cyber crims.
3. Sources: 7/10 relied solely on external media; only 3 used personal experience, comparisons with other forms of crime, research, or other sources. 9/10 were frustrated with the available information - but most were doing nothing to improve their understanding anyway.

### Perceived Characteristics

1. Technically smart.
2. World of Warcraft.
3. General nerd.
4. Young.
5. Awake all night.

### Event Awareness

1. Kim Dotcom.
2. WannaCry.
3. Edward Snowden.
4. American Elections.
5. China.
6. Wikileaks.

There were complaints about news coverage: quantity, clarity, quality. But people weren’t doing much about finding better information.

## The CIO’s New Process

Andrew Hood

Andrew has a book and a unicorn hat. He’s going to read us a book which, he stresses, is entirely fictitious.

I am not going to do Andrew’s talk the injustice of making hamfisted notes of his wonderful storytelling. You’ll have to wait for the video to be published.

It did sound terrifyingly familiar though.

## Protect Yourself Against the Bees

Katie McLaughlin

OK, so this started with Bohemian Rhapsody as rendered through animoji. Audience participation was encouraged. The audience failed at this a bit, which is a hazard of presenting to New Zealanders. It was awesome, though, complete with correct choral direction by Katie.

Katie runs through a little background on emoji: emoji render differently on every platform; Katie uses bee emoji to demonstrate how this looks in practice: while the bee emoji are standard-ish on many platforms, the variations can change per Android vendor and over time within a given desktop or mobile platform.

Note that Facebook stickers are not emoji! Animoji are not emoji! “This is a perversion of emoji trying to rob your privacy!” Emoji are a specific set of standardised encodings, not random images.

Every time Katie submits her emoji talks she gets different and usually terrible results trying to put emoji into the title field of a conference’s submission process. Software crashes, cuts titles short, printouts don’t work or end up with boxes in place of emoji. Depending on the rendering of emoji, you can pinpoint particular versions of people’s software (for example, that the smartboards at a conference centre are running unpatched Windows 8). So can we use bees to hack things?

Well, with enough time and effort you can attach microprocessors to real bees to track them (the Internet of Stings!).

Native emoji can even break your phone: for example, the pride flag is a composite, an extension of the white flag emoji with extra glyphs - and it could crash iOS; the “effective power” emoji has had the same issue.

Emoji are legal in domain names, complete with emoji to punycode mappings. You can use left-to-right and right-to-left mappings along with emoji glyphs to create convincing-looking domain names.
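The emoji-to-punycode mapping is easy to see with Python's built-in punycode codec; for example, the pile-of-poo emoji encodes to the well-known xn--ls8h label:

```python
# Emoji domains work via punycode: the Unicode label is ASCII-encoded
# and given the "xn--" prefix so legacy DNS can carry it.
label = "💩"
encoded = "xn--" + label.encode("punycode").decode("ascii")
```

The same machinery, combined with look-alike glyphs, is what makes convincing spoofed domains possible.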

So the ingredients are there. But Katie has a specific example for us. Are we, she notes, familiar with WordPress?

Hands are raised.

Have we heard of MySQL?

Hands are raised.

If we run MySQL, do we make sure we have STRICT_ALL_TABLES set?

Why? Because if you use utf8 and drop an emoji into a comment in front of, for example, an HTML tag - well, this is legal HTML, and it would parse correctly. Unfortunately, versions of WordPress prior to 4.2 using MySQL with STRICT_ALL_TABLES not set will behave poorly. Specifically, everything past the emoji will be silently dropped when stored, giving malformed input that will allow you to enter arbitrary XSS attacks (for example) from the comment fields; this is documented in CVE-2015-3438.
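The truncation behaviour can be simulated in a few lines: MySQL's legacy utf8 charset (really utf8mb3) stores at most three bytes per character, and a non-strict server silently drops everything from the first four-byte character onward. A sketch; the comment text is invented:

```python
# Simulate the storage behaviour behind CVE-2015-3438: truncate at the
# first character that needs four bytes in UTF-8 (most emoji), the way a
# non-strict MySQL server does when storing into a utf8mb3 column.
def utf8mb3_truncate(text):
    for i, ch in enumerate(text):
        if len(ch.encode("utf-8")) > 3:  # beyond what utf8mb3 can store
            return text[:i]
    return text

comment = 'nice post 😀<script>alert(1)</script>'
stored = utf8mb3_truncate(comment)  # everything after the emoji is lost
```

When the truncated fragment leaves markup unbalanced, the sanitiser's assumptions no longer hold - hence the XSS.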

So yes: getting Unicode wrong can lead to hacks.

John Grant

Works for Tekron, making GPS time sources for the electrical grid. It’s pretty critical infrastructure. They have GPS units with 10 year warranties - which means they need upgrading for things like leap seconds during their lifecycle.

This causes problems where people lose their admin passwords (because they were installed a long time ago and the devices are set-and-forget) but need to do updates. Traditionally these were sent back to New Zealand to be reflashed, which is not ideal - for example, in one case an entire station had to be powered down while waiting for the time source to be reflashed.

Additional complications: many of these devices are airgapped. Clients won’t take instructions over a phone, because they’re worried about social engineering. So how do we let people reset the admin password without sending the device back, or having Tekron fly out?

The solution: an arming device, which power cycles the device and does a factory reset. Since a reset of the device triggers alarm bells, and requires physical presence, it’s deemed safe enough.

The challenge for the future is trying to make all this work in a world where people want to connect it to the Internet.

Jen Zajac @jenofdoom

People who know these things use password safes and enable MFA. Most people are unaware of these sorts of practices - and blaming them won’t help. In fact, it’s also our fault! When we see a list of garbage passwords, we should think about the fact that this is also bad validation.

We often give bad advice, focusing on capitals and numbers instead of length, and we disallow many characters, limiting complexity. We don’t update the minimum password length, and have poor maximum lengths.

zxcvbn is a JavaScript (CoffeeScript) password tool. It provides real-time feedback about the strength of a password, expressed in terms of how long it would take to break a password, along with plain English warnings and suggestions about how to improve poor passwords: that something is a common password, for example. It also returns a score out of 4, to let you make a strength bar UI element easily.

It’s pretty easy to use, and Jen only heard about it this year. She would recommend it to anyone building front-end registration pages, and she would strongly recommend using something equivalent on the back end, because it’s bad when different parts of an application behave differently to the same input.
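Not zxcvbn itself, but the shape of its feedback (a 0-4 score plus a plain-English warning) can be sketched with a toy estimator; the common-password list and the scoring thresholds here are invented:

```python
import math

# Toy password-strength estimator, mimicking the shape of zxcvbn's
# output: (score 0-4, warning string). Not the real algorithm.
COMMON = {"password", "123456", "qwerty", "letmein"}

def strength(password):
    if password.lower() in COMMON:
        return 0, "This is a very common password."
    # Very rough entropy guess: length times bits per distinct character.
    bits = len(password) * math.log2(len(set(password)) or 1)
    score = min(4, int(bits // 20))
    warning = "Add more words or characters." if score < 3 else ""
    return score, warning

score, warning = strength("password")
```

The real library does far more (dictionary, keyboard-pattern, and date matching), which is exactly why Jen recommends it over home-grown rules.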

## Māori Cultural Ethics in Digital Security

Karaitiana Taiuru @ktaiuru

Why consider Māori ethics and cultural issues?

• Te Tiriti/The Treaty; this is New Zealand’s foundational constitutional document.
• The United Nations Declaration on the Rights of Indigenous Peoples. New Zealand is a signatory.
• Arguably the Bill of Rights and Human Rights Act.
• We’re also well ahead, as a nation, of most places on this issue, and it would be great to stay this way. We should be proud of it, but not get lazy.

Māori have many traditions around recording images and stories via carving, weaving, and other arts and crafts. These form the basis of thinking about how Māori cultural information and language ought to be used and respected in the digital realm.

While older generations have limited engagement in IT, in large part due to under-representation, younger Māori will, as they come into the workforce, expect and demand more culturally appropriate use of technology: generations are coming through Kōhanga Reo, for example, and both technology and Māori culture are second nature to them.

Moreover, New Zealand society is changing: Polynesian and Māori are a larger and larger proportion of the population, alongside a swelling Asian population. The bar is going to be raised on what is considered acceptable.

### Seeking Contracts with Iwi and Māori Organisations

• Pronounce Māori words correctly; if you blunder your way through names, particularly of people, iwi, or hapu, you are unlikely to win contracts. Do your homework.
• Typically look for relationships based on trust. Māori organisations are generally looking for long-term relationships, not necessarily the cheapest or something that looks cool today.
• If everyone refers to someone as an Aunty or Uncle, you should too. It might feel weird if you’re not used to it, but it’s important.
• Iwi money is for intergenerational member development. Iwi are generally very cautious about how they spend their money.

### Māori Society Overview

• Mana Whenua - the iwi and hapu traditionally of an area.
• Iwi, Hapū, Whānau, individuals.
• Kaumātua/Kuia - an elder, accorded particular respect; but note that not every old person is an elder.

### Stereotypes of Māori

• Not all Māori have been raised in Māori culture; over the years a history of assimilation has caused a die-off in understanding of language and culture.
• Learning the language does not imply being well-versed in the culture. Anyone can learn a language; immersing in the culture is a different thing.
• This is particularly important: just because someone has a brown face doesn’t mean they will give you the right answers.

### Indigenous Data Sovereignty

• Iwi data should be treated like other sensitive types of government data: it should be managed carefully, consensually, and with appropriate controls and access.
• Digitising images and books should be handled carefully; in many cases data has been provided to ethnographers on the understanding it has specific, narrow uses. Transforming the data and derived works may not be considered acceptable. Ask, don’t assume.

### Authentication

Much of the advice around authentication systems breaks when considering a Māori demographic.

• Mobile phones are typically shared on prepay plans. Assuming 2FA can push to a mobile device as “something you own” is a bad assumption.
• Computer ownership is low.
• High use of free WiFi.
• Marae and rural areas have poor internet access.
• Secret questions based on ancestry are silly: everyone on a marae will know things like “mother’s maiden name” for everyone else.

### Naming Conventions

• If using a Māori name, you should consult to make sure you’re using it correctly. Don’t just use it because it’s cool.
• Use of personal names, god names, and place names can be sensitive. Simply because something is popular doesn’t make it acceptable for use (consider the haka Ka Mate, which is considered offensive in many regions due to Te Rauparaha’s actions against them).
• Keep a consistent theme to avoid conflicts.
• Be careful about shortening names. You can easily create an offensive term from an inoffensive one.

### Staff Photos

• Photos of the living must be stored separately from photos of the dead or images of landmarks.
• Photos on the web can be sensitive - Māori feel it can result in loss of mana. Be sensitive to these concerns.

### DNA

• Tradition holds that DNA is of the iwi, and is considered to belong to the iwi, not the individual.
• The eyes, likewise, are linked to the ancestors and considered sensitive. Retina-based ID will be considered offensive.
• Voice has no particular connotations, so voice recognition is OK.

### Facial recognition

• Tā Moko and Moko Kauae are personal stories of genealogy and are extremely sensitive.
• Copying and storing Tā Moko, Moko Kauae, or other Māori tattoo is offensive, and can be considered grossly offensive, particularly when used out of context.

### Licensing

• Consider licensing options; in general, licenses which preserve ideas of open access are important.
• Be sure to inform organisations of the ramifications of proprietary licenses, including the possibility of licenses which may copy, transmit, or encrypt and modify sensitive information.

### Passwords

• Diceware can create very strong passwords, but at the risk of inappropriate words.
• Be careful when shortening names - common cases like InitialLastname can, for example, create deeply offensive words.

### Te reo Māori Phishing and Scams

• Targeted attacks are now becoming more common.
• Happily, Google Translate does such a bad job at the moment that scams are easily spotted.
• However, if scammers improve, Māori may be at risk of convincing-seeming scams.

### Social Issues/Future Risk

• There are no culturally specific Net Safe resources.
• Mistrust of authorities means that people will speak amongst whānau, not to, say, CERT when attacked.
• Cyber activism is taking traditional protest online.
• The IoT could present significant risks: more detailed tracking, particularly in the home, will seem very disturbing to Māori who have often been on the painful end of government intervention.
• We already see Māori web sites and digital infrastructure being attacked when there are high-profile stories around, for example, Waitangi Tribunal claims.

## Design for Security

Serena Chen

“Or you could say… white hat social engineering.”

This is challenging the idea that usability and security are on opposite ends of a spectrum: in fact, the opposite is true. You cannot have good security or good usability without the other. In fact, Serena argues that good security is a subset of good usability. “By working together we elevate both of our fields.”

“If you’re here, you probably care about security … you get security, you know why it’s important. But if you stop someone on the street, they probably don’t. And they shouldn’t have to. And they shouldn’t need to in order to deserve security.”

We need to stop everyone needing to become security experts - literally no-one cares about security, unless they’re watching Mr Robot at the time. Users care about the task at hand. “No-one wants to be assigned extra homework to message a friend.”

When we present security as places of friction, extra actions, what do people do? What happens when you pop up an “Untrusted connection dialogue”? We click “proceed anyway.” We will try really hard to install malware on our own computer. It’s easy to look at all these situations where people bypass warnings and say “users are stupid”. Why are they doing this? “It’s really frustrating to see good people fail.”

“Blaming everyone for being foolish is unhelpful. In fact, it’s downright lazy.”

It’s our job to care.

• We need to empower through education.
• We need to push for small, long lasting changes that people will actually use.
• And the policies need to be useful. Password rotation just leads to a list of rotated passwords stuck to a monitor.

The perfect is the enemy of the good. And it doesn’t help that security is generally the last thing considered by product teams.

The best solution is the one where you don’t write any code: “Security features should be invisible.”

So what does that mean? In security, Serena notes, we put up walls. You want to connect to another computer? Too bad, you can’t, maybe don’t do your job.

Don’t build walls: carve rivers. Make the path of least resistance be the right thing. But how?

### Finding Intent

“Security opposes the desire to make things easy.” But it’s not the designer’s job to make things easy, or the security team’s job to make things hard. It’s their job to make a specific task, done by the right person, easy. Everything else can be locked down.

Get specific about, and understand, the user’s intent. Much of this can be inferred: what is the time of day? Where are they? What is their mood? What is their behaviour?

DON’T FORCE PROMPTS.

You create false alarms. It trains people to ignore alarms. This is not an opinion. This is science. This has been studied extensively - literally with MRI machines. People’s brains adapt to prompts, and reduce processing after a few repeated false positives.

We can use our wealth of data to infer who and what is going on, and draw people down the right path - which is both easy and secure. We keep our security goals aligned with the user’s intent.

Note the zeroth order path: do nothing. The state of doing nothing should be secure by default, like a blender that won’t turn on without the lid being attached.

The first order path: user behaviour is guided by affordances. Can you easily find and interact with user settings for security? Does the same apply in the workplace - is it easy to interact with the security team?

Also: if people don’t understand the consequences of an action, don’t let them do it. At all. Not in advanced settings - people know how to Internet and find out. If you need to let people have the option of a thing, you need to communicate it…

### (Mis)communication

Whenever there is a miscommunication, there is a vulnerability. Take, for example, the green lock in Chrome. What does it mean? It means there’s TLS with a certificate that matches the domain name.

But users think it means “secure”.

And it doesn’t. It just means a DNS entry and cert align.

So what is their mental model of what is going on, compared to yours? Patch your communications to remove ambiguity. Because it’s the user’s mental model that defines how they interact, and defines how secure a system really is.

System security is a property of the user’s mental model and the system being in alignment. So we need to understand the user’s mental model. This doesn’t mean we need to read their mind. You could, for example, try asking nicely. Or watch a user go through your applications or processes. Sit in on your designers’ user sessions; talk to your designers!

• Consider customisation: letting users assign their own identifiers, for example, means they understand what they’re doing better.
• Communicate what’s actually happening clearly and simply. Communicate the security context they are currently in - is this area secure or not?
• When we make things, we teach. When people interact with things we made, they change their behaviour. The path of least resistance becomes how they will interact with things. You are teaching people what to do.
• For example, iOS and OS X devices continually prompt for your iTunes password, seemingly at random. This just trains the user to enter their password whenever they see a password prompt, which sets them up as easy prey for phishing attacks.
• Technical problems in security are hard enough, which makes it easy to forget about the human element. Communicating security through the thin layer of design is hard.

### Takeaways

• Collaboration between design and security is rare. It shouldn’t be! Let’s be friends!
• Our job is about security outcomes.
• Align our goals and our user’s goals.
• Make the path of least resistance be the thing we want people to do.

Great talk. See the video when it’s released!

## When Bugs Bite - why neglecting your edge cases can kill

attacus

“I love bugs.” Not the living kind. The code kind. “I love that bug hunting lets me be a professional cynic.”

“Every time I start a new piece of work … I try to work out what we don’t want it to do. What if?”

Lily attended Monash University; in front of the Civil Engineering building, there is a garden with huge, twisted pieces of metal. They come from the West Gate Bridge, which collapsed during construction in 1970, killing 35 people. Monash University acquired the metal to remind the civil engineers of the consequences of their mistakes. “It’s one of the few things I still remember from Uni.”

This is great for civil engineers. But it’s hard to achieve a similar monument in the field of software. It’s tricky to describe software disasters. But we can try, looking at failures of safety critical software, like nuclear control or medical systems - for example the Therac-25, infamous for dosing patients with an order of magnitude more radiation than intended. Another example is the London Ambulance disaster of 1992, where a buggy release of dispatch software caused ambulances to simply fail to show up where they should, with at least tens of people dying as a result.

Unfortunately, these stories don’t come up in our world, so we don’t really think about them. We tend to think that because our work doesn’t run medical equipment or summon ambulances, these lessons must be irrelevant to our problems, right?

Well, consider this unicode bug: a Turkish couple in the middle of a bitter divorce were exchanging SMS messages. Because their phones didn’t support the Turkish character ‘ı’ - an ‘i’ without a dot - it displayed an ‘i’ instead. This changed the meaning of a word from an innocuous one to a vile insult. Amin - the wife - and her family ambushed and attacked her ex-husband. Fighting to protect himself, he killed his ex-wife - and then himself.
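The mechanics of the dotless-i problem show up in any Unicode-aware language, not just on old phones. A small Python sketch (my illustration, not from the talk) of why naive case mapping mangles Turkish text:

```python
# The Turkish dotless ı (U+0131) and dotted İ (U+0130) don't round-trip
# through the default, locale-unaware case mappings.
dotless = "ı"           # U+0131 LATIN SMALL LETTER DOTLESS I
print(dotless.upper())  # "I" - indistinguishable from an ASCII capital I

# Going the other way, İ lowercases to *two* code points in Python:
# "i" plus U+0307 COMBINING DOT ABOVE.
dotted_cap = "İ"        # U+0130
print(len(dotted_cap.lower()))  # 2

# So a naive upper/lower round trip silently turns "sık" into "SIK"
# and back into "sik" - a different word entirely. A device that lacks
# the 'ı' glyph makes the same substitution when rendering.
print("sık".upper().lower())  # "sik"
```

Correct handling needs locale-aware casing (in Turkish, ‘i’ uppercases to ‘İ’ and ‘I’ lowercases to ‘ı’), which most default string APIs do not provide.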

A GPS error: using a GPS with a decades out-of-date map, the Sanchez family got lost in Death Valley. By the time they were found, the 11-year-old son had died of dehydration.

Tesla’s so-called autopilot feature mis-recognised a block of colour, and the car drove under a truck, killing the driver.

This is not because any of these technologies are inherently bad. SMS and GPS have probably saved far more lives than they’ve cost. And no-one was trying to ship software that would cause fatalities.

But bugs can kill.

And people trust us. Really. And sometimes we fail them. Software is not magic - we have front row seats. We know this. But most people don’t, and our marketers try to convince them that it “just works”.

Everything can be a safety-critical system. If you can remotely control your house, order car rides from strangers via the Internet - these are things that can end in disaster.

Think “what if?” What if a burglar takes control of your house? What if your glasses from the Internet are faulty and you can’t see properly? What if your car ride allows unregistered dangerous drivers to pick you up?

Perhaps you think this is melodramatic. Well, perhaps. You can’t live like that all the time. But you need to think about “what if?” enough to build fault tolerant systems. Work like an astronaut: when astronauts make mistakes it can end badly, so they study the mistakes of the past. They develop responses to every scenario, and practise responding to problems until disasters and emergencies are boring - what Chris Hadfield calls “the power of negative thinking.”

How do you become a land astronaut?

1. Put on your black hat. Not the hacker’s black hat, but De Bono’s black hat: discernment. Logic applied to reasons for being conservative, for managing risk. If you’re an optimist, you need to work up to a black hat. If you’re a pessimist you probably need to work down.
2. Never underestimate a bug. It is important not to dismiss the value of a negative mindset so long as it is realistic, usable, and practical. Set aside time to fix things. Take the time out to cater for the edge cases, to clear away the problems “everyone knows” about but no-one fixes.
3. Remember that users are people. Put faces to these outcomes. Don’t shame our users. Humanise. People cause problems because they trust us. They think we can perform miracles. We need to be worthy of their trust. Our brains aren’t really built to care about large numbers of people we don’t really know about - so we need to work at it. Switch up the words you use: don’t talk about “users”, talk about “people”.

You can turn this into a Friday afternoon game. Start with the one mistake you make that could kill someone; then scale back from there.

But don’t forget: this isn’t (just) a game. People can actually die from bugs. Astronaut thinking is a tool in Lily’s toolbox; it can be a tool in yours, too.

## Metlstorm’s Empiricism Emporium

So, because I suck at organising, my laptop ran out of battery right now. Which is a shame, because metl’s talk was a tour-de-force, and frankly I’m not convinced I can or should try to re-create an hour-and-a-half exegesis of the security industry by someone who is a great presenter.

What’s that? You want to hold my beer? Are you sure? Well, if you insist.

Metl kicks off by talking about breaking things[3], the essence of hacking. How many people have hacked a thing? How many people have hacked a chain of things? How many people have escalated up a chain of things to own a better thing? And so on and so forth. Metl reminds us that, given the number of hands in the air, this makes us a very unusual group[4]. Most people have never done this, nor have any idea how to start, much less what it looks like. And most of the people making decisions about what to do about the risks of being hacked fall into that much broader group, which, metl posits, is a problem.

Adam is careful to note that this is not about to be a launchpad for an exercise in contempt culture: he doesn’t want to demean or belittle people who don’t have this fairly obscure and specialist knowledge. But he also doesn’t want to understate the scope of the problem - the social problem - that he’s going to describe.

Systems are complex, and the opportunities to break into them are manifold: organisations often secure the wrong things, focus like a laser on one problem, while ignoring easily exploited vectors. Organisations under-estimate the risk of being breached, chucking things on the risk register (which, metl notes, is a place pipes loves to make a beeline for: a nice, easy-to-read description of literally every security hole in an organisation, all gathered together in one usually poorly secured place!).

Not only is it disturbingly easy to break things, it’s disturbingly easy to stay in there once you’ve broken in: to steal a Kerberos golden ticket that will let you wander back in for 10 years, for example, knowing, as metl does, that most organisations’ response plan doesn’t involve steps like “rebuild Active Directory from scratch”. People don’t just underestimate the weaknesses of their organisation, they underestimate the time and cost of recovering from a breach.

On top of that, they underestimate the complexity of their environment, and how vulnerable they are to supply chain attacks on the things they depend on; metl gave three specific examples:

1. Datacom manage the domain names for most, if not all, government departments in New Zealand. They list an administrative contact and a technical contact for all those records.
• Of the administrative contact, metl asks rhetorically: do you think they warned her she is now a target? What special training do you think they gave her, given she’s now a point of compromise for criminals and hostile intelligence agencies looking to breach New Zealand’s government infrastructure?
• metl noted she doesn’t show up as working at Datacom any more. So who knows?
• The technical contact worked at Datacom for more than thirty years, which is awesome long service, but he’s definitely left. Before he left, metl asks again, what extra aid and assistance did he get to reflect the fact he’s a key point of compromise for anyone attacking the NZ government[5]?
2. Domainz is still the registrar for many pieces of critical NZ infrastructure, public and private. Except they’re now owned by MelbourneIT. Domainz are now a web marketing firm - but they still show up as being responsible for some of our most critical infrastructure. Do you think the people who work in web marketing are aware what targets they are? Do you think they’ve been warned? Maybe their parent company told them, yeah? Except, Adam notes, MelbourneIT or their subsidiaries have been involved in numerous high-profile international breaches. One a mere two days before the start of the conference.
3. Deloittes: yeah, let’s talk about Deloittes, whose entire back end network was owned, completely. metl is confident that you should treat anything you’ve given them - your documents, your strategy conversations, the lot - as now being in the hands of attackers. Not only that: if people from Deloittes are on your site? Their laptops, their phones, the passwords they use for their email and re-use on your site? They’re all compromised[6]. But they’re giving businesses and governments security advice and running audits and telling them how to run their processes and build code and operate infrastructure anyway[7].

Furthermore, metl notes, the more you think about it, the worse it gets. Where are you hosting your stuff? On servers built from components all fabbed in Shenzhen? In a cloud, built with the same components, but where you aren’t allowed to see what everything is built on or how it’s operated? When you roll IBM managed services - IBM, who are worth more than NZ - what do you think happens if you ask for an audit of what they do or how they do it? They laugh. You can’t even properly test your risks any more! You don’t understand them, so you sure as hell can’t evaluate them.

This is not a pretty picture. And metl doesn’t have any glib answers, much though he’d like to. There are specific things you can do to help; for example, if you are going to outsource or use a managed service, make them allow your red teams to include them in the scope of their testing. If you are going to use an MSP, and they’re too big to care about you, use a smaller and more local one.

And, because metl wanted to end on a high note, he pointed out: most people aren’t actually dicks. Civilisation hasn’t fallen apart. People (mostly) don’t go around breaking shit, even though they could. So maybe it’s not as bad as all that.

1. This sounds a lot like the OSI model. Is there something about having 7 layers that does this?
2. If you’re already dealing with data privacy legislation, for example, you shouldn’t be balking at this.
3. Well, really I suppose he starts with some reflection on the experience of being at a con, rather than running one. But let’s skip to the delicious meat of the talk.
4. I think that’s in the ‘select’ sense rather than the ‘odd’ sense, but you know, little from column a, little from column b.
5. This is not a theoretical list; in the wake of the Snowden leaks, one example of US intelligence activity was the “I hunt sysadmins” briefing, which explained that a great way of breaking into places is via the home life of sysadmins: by compromising home networks and personal devices, there’s a vector into their workplaces.
6. This rings true to me. I’ve been given feedback from Deloittes reps that I shouldn’t worry about the breach because it’s, like, a totally different part of the company.
7. But that’s OK, you signed a contract with them! I mean, your customers will totally forgive you if hackers destroy their stuff because you had a contract. That will make it all OK.