purplecon 2019

Purplecon started last year with a relatively small audience that I was too slow at ticket-buying to be a part of, so I’m delighted to have snagged a ticket this year; oriented as it is toward protection more than breaking, it’s very relevant to my interests. This year purplecon has trialled splitting tickets into a general tranche and an under-represented tranche, and it certainly seems to have done a bang-up job of broadening the audience demographic, which is dead cool. It’s easily the least blokey tech conf I’ve ever been to.

Also dead cool is the hand-knitted purple star on my lanyard (“made by my mum”); and the 9:20 start time. Which is an odd time, but is much more relaxed than scrambling for registration for an 8:30 - 9:00 start.

At the opening the team outline how they’re aiming to be different, beyond the broader audience and bright colour scheme: talks must be positive, defensive, and actionable. Aiming to be “the friendliest conference in existence” they have a robust code of conduct and multiple reporting channels (it’s 2019 and this shouldn’t even be noteworthy any more, but it is).

“There are varying levels of knowledge and it’s OK not to know everything.”

In an episode of coolness, they’ve co-ordinated a long lunch break with kawaiicon’s ticket pick-up, and a long afternoon tea break to facilitate meeting people and hanging out together. There are stickers for people to signal which topics they’re most interested in. The pacman circle[^pacman] gets a mention!

[^pacman]: Standing in a circle with a gap, much like, well, a pac-man; this makes it easier and more comfortable for people to join in.

As with last year, there are make-up, selfie, and sticker options available.

how to human in groups


“When I was a child I dashed my head on the stones of Armageddon” - literally. It’s a place in Israel. Another way of telling the same story is to say “when I was 5 I was playing with a friend and fell over”; there are many ways of telling the truth, and the truth is the story we tell. In her day job Maia does agile consulting; this is a form of workplace therapy; in a previous life Maia studied mobile phone safety, a topic which has been researched since around the time of WW II, when we started using radio. Mobile safety is well-understood, and like climate science and vaccination, the science is pretty well-settled and has been for a long time, but the public doesn’t necessarily agree. Disagreeing with evidence isn’t particularly novel, but these are cases where the individual who refuses to go along will harm the group as a whole; vaccination is the most obvious example here, but it’s equally applicable to security: one person’s poor practise can create a breach that affects the rest of the organisation.

Public opinion within the group - and the group can be a culture defined at many levels, whether a group of friends, a company, a city, a country - can change over time; Maia mentions the Overton Window, the idea that there is a window of acceptable ideas one can hold, not merely to get elected, but in social groups generally. There are ways to shift the Overton Window; this is how we get social change, for better or worse. One way we change that window is repetition: human brains learn by repetition, so simply restating an idea can bring it into normal discourse.

“Normal” is contextual, though: Maia gives an example of a vaccination campaign fronted by a cloth-nappy using, wholefood eating dad: within the in-group he belongs to, this caused a vaccination uptick, because in-groups trust people like them; within that group, a dad who wears a baby, eats organic, and eschews disposable nappies is normal. That said, there will be people who are upset by this kind of campaign: the majority may move toward the idea, but a minority will dig in even harder against it.

In fact, “a fact that is never spoken barely exists”: if you want concerns about mobile phone safety to fade, stop talking about it! Because it turns out that if you mention safety, even approvingly, even citing robust sources, people will react against it. Because we are driven by feelings as much as, or even more than, facts. And if you’re trying to change, for example, a security policy, you’re creating conflict. Because feelings.

Security is all about telling people what to do, and what not to do. You’re pushing at people’s boundaries and enforcing boundaries - and people rarely like that! We’re imposing change, which leads to hostility and resistance. “Trying to change people’s minds makes me an enemy” - and it becomes even harder if you’re not part of the in-group.

We know some things definitely don’t work: words like “should”, “no”, “you” - anything that’s accusatory or overbearing. Instead, spend time with the group, let people get to know you, become part of the trusted in-group. Create positive associations! You can also look for advocates: they’re already part of the in-group, which creates a short-cut to trust for your ideas.

Small group conversations tend to work best. And when you’re trying to convince people of a thing, pick at least two of true, kind, and useful; moreover, accept that there are degrees of being right; if you demand that people accept everything you think is right, right now, you’ll only ensure they never want to talk to you.

Accept that people are local experts: they are doing things for a reason, solving the problems that they have, not the problems we have. It needs to be their problem! Because at the end of the day, security problems are human problems. PGP is a hard technical problem, sure, but it’s nowhere near as hard as getting people to use it. In security, we have the problem of looking after other people’s problems. And you need to convince people that you are them, and that your problems are their problems.

That valley in Israel is still fought over: Armageddon has never yet been solved with technology.

iam confused: a deep dive into how permissions work


Here’s how the real world works: you’re trying to get a thing done. You want to deploy a thing. And it’s hard and confusing and stopping you and you eventually give up and give away way too many permissions. This talk is AWS focused:

You have a principal: who can do a thing. That might be a user, group, or role.

An action is a thing you can do: listing buckets, reading objects, writing objects (for example).

Finally, there’s a resource - the object you’re trying to operate on.

Put it all together, and you get a policy. The problem is that one policy per person and per resource doesn’t really scale.
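A minimal identity-based policy putting action and resource together might look like this (the bucket name is hypothetical; the principal is implied by whoever the policy is attached to):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Note that bucket-level actions (like listing) and object-level actions (like reading) target different ARNs, which is a common source of confusion.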

You do have wildcards to avoid repeating the same thing over and over. Which are easy to mis-use, sadly.

There are several ways you can do this: one is user-centric policies, where each user has an exhaustive list of things they can do; but you can also attach policies to resources, such as an S3 bucket. A bucket policy can allow everyone in a group to do things. Note that an explicit deny over-rides any allow, which makes deny rules a useful safety net - you should use them. You can also apply extra conditions (for example, time), and apply permission boundaries, which cap the permissions people have been given: even if someone has been granted a permission in error, the boundary will over-ride it.
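A resource policy using an explicit deny with a condition might look like this (a sketch only; the bucket name and cut-off time are invented for the example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAfterCutoff",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "DateGreaterThan": { "aws:CurrentTime": "2019-12-31T23:59:59Z" }
      }
    }
  ]
}
```

Because the deny is evaluated before any allow, this shuts off access after the cut-off regardless of what other policies grant.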

PassRole is another neat trick, where you can delegate authority to others, but be a little careful, because one of the things you delegate is the ability to create new roles.

Unfortunately the documentation is… not easy. Fortunately there are AWS Managed Policies, which give you some easy recipes to get you started. Except, as sera notes, some of them have over-broad permissions: for example, the Lambda example allows lambdas global access to your S3 farm and your DynamoDB, which is… probably not what you meant to do.

(For specific examples, sera notes, purplecon will be publishing the details in the purplecon archives.)

embracing empathy

Shahn Harris/fobski

In a nice touch, Shahn kicks off with a pepeha.

This talk is about Shahn’s journey in infosec: he started working in tech back in 2002, supporting the help desk for POS software for third parties. It’s a tough job: people who have had software foisted on them by head office, and it’s not working. His first human connection in a hard job was someone who found out he was from Taumaranui. People in the branches decided he was “one of us” and would ask for “that kid from Taumaranui”, and it became a good job. A network ops job was next, and then, because he wanted a job in Wellington, a move into security in 2010.

Shahn naively thought that he’d swing in and be able to help people. Unfortunately, he was wrong. There was an established team, established ways of doing things - and the way things worked created a challenge of conscience. The company’s way of doing things was not helping customers - it was “fuck you, pay me”. On top of that, Shahn wanted to solve all the security problems in a field that was saturated with technology and tools focus. At this point Shahn started to doubt himself - imposter syndrome entered his life.

The first change was meeting someone who told him to be pragmatic: stop trying to do all the things; rather, help find the right level of solution for the particular org or group. One key to that was to learn to listen: let people speak their passion. Don’t shut them down, don’t tell them that they can’t do things, and don’t reach for theoretical risks - talk to real ones. Help people do what they need to do in the best way possible.

You have to stay on top of what people are trying to do: if they don’t hear from you, they’re probably under pressure to crack on with things. And if you aren’t having person to person contact, if you’re hidden behind ticketing or chatbots, you’ll never be in the conversations that matter. Moreover you should be helping people navigate the complexity of your technology environment; keeping people in touch with the bigger technology picture, so that changes of direction aren’t an unpleasant surprise, will make your job easier, and it will lead to trust and happiness.

This was great! Shahn was achieving great results for his business, but felt like he didn’t belong, because whenever he went to a conference, he’d hear about cool attacks and breaking things. He’d hear people saying you needed a comp sci degree and a CISSP and framework qualifications. It was a very bleak period - until he found “light in an unlikely place” at a new job in 2017: he went to a global conference doing a management exercise. It doesn’t sound like fun, but he was sorted into an empathic group of three others (the majority of the people in the exercise were authoritarian). It was the first time he’d encountered the term, and the first time he’d found a peer group in the infosec world. Which was great preparation for the next thing that happened: because he was working for Equifax, and their breach hit.

Shahn notes, by the way, that if you think that a breach is a great opportunity to get shit done, you’re wrong. A breach is the worst way to do security transformation.

Shahn’s approach was not to start hammering people internally: it was to work out how to help people achieve the new global security goals. Their plans had been thrown into disarray as everything ground to a halt, but they felt Shahn was in their corner, trying to apply the new security standards and policies and processes in a way that would salvage their customer relationships and deliveries.

state machines state machines


Programming is hard! Not just making your code run (although that’s hard enough), but there’s a nightmare of concerns around compatibility, reliability, security, and so on.

Robustness is a key concern: the ability to work reliably, often under adverse conditions. An example of robust software might be a microwave: these are all over the world, and work reliably. He decided to implement his own microwave in software. Rust is the obvious choice, because rust eliminates mistakes, right?

It didn’t go well.

The first mistake was not checking the door was closed; the next was never stopping once started. The first would nuke you. The second would set the food on fire.

So William took a step back and tried another approach: describe the states the microwave can be in. There should never be a state where, for example, the door is open and the microwave is cooking. This leads to a table of states, which then leads to a set of transitions that move you from one state to another.

One way you could implement this in rust is via an enumeration: all the states go into an enum, and then a rust match expression identifies and acts on valid state transitions, defaulting to a no-op when no transition applies. You can also implement this in C, although he cautions this is really, really hard to get right, not least because a C switch statement gives you none of the exhaustiveness checking of a rust match.
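A minimal sketch of that enum-plus-match approach (the names and transitions here are my own, not the speaker’s code):

```rust
// States of the microwave, as an enum.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    DoorOpen,
    Idle, // door closed, not cooking
    Cooking,
}

// Things that can happen to the microwave.
#[derive(Debug, Clone, Copy)]
enum Event {
    OpenDoor,
    CloseDoor,
    Start,
    Stop,
}

// Only the listed transitions do anything; everything else is a no-op.
fn step(state: State, event: Event) -> State {
    use Event::*;
    use State::*;
    match (state, event) {
        (Idle, OpenDoor) => DoorOpen,
        (DoorOpen, CloseDoor) => Idle,
        (Idle, Start) => Cooking,        // cooking requires a closed door
        (Cooking, Stop) => Idle,
        (Cooking, OpenDoor) => DoorOpen, // opening the door stops the magnetron
        (s, _) => s,                     // invalid transitions are ignored
    }
}

fn main() {
    // Start is ignored while the door is open:
    assert_eq!(step(State::DoorOpen, Event::Start), State::DoorOpen);
    // Close the door, then start:
    let s = step(State::DoorOpen, Event::CloseDoor);
    assert_eq!(step(s, Event::Start), State::Cooking);
}
```

The `(s, _) => s` arm is what makes invalid transitions harmless by construction, rather than relying on scattered `if` checks.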

When you act, you return a new microwave value in the new state, because each state is represented by its own empty structure. You can’t take actions that don’t exist for a state, so code that tries to put the microwave into a bad state is caught at compile time.
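This compile-time variant is often called the typestate pattern; a hedged sketch with hypothetical names:

```rust
// Each state is its own (empty) type; an action only exists on the
// states where it's legal, and each action consumes the old state.
#[derive(Debug, PartialEq)]
struct DoorOpen;
#[derive(Debug, PartialEq)]
struct Idle;
#[derive(Debug, PartialEq)]
struct Cooking;

impl DoorOpen {
    fn close(self) -> Idle { Idle }
}
impl Idle {
    fn open(self) -> DoorOpen { DoorOpen }
    fn start(self) -> Cooking { Cooking }
}
impl Cooking {
    fn stop(self) -> Idle { Idle }
    fn open(self) -> DoorOpen { DoorOpen } // opening the door stops cooking
}

fn main() {
    // The happy path type-checks: close the door, start, stop.
    let oven = DoorOpen.close().start();
    assert_eq!(oven.stop(), Idle);
    // `DoorOpen.start()` simply doesn't exist, so "cooking with the
    // door open" is a compile error rather than a runtime bug.
}
```

Because each method takes `self` by value, the old state is gone once you transition, so you can’t accidentally keep acting on a stale state either.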

This passed all the tests first time. There are no unexpected outcomes.

This is great for microwaves, but it has a more general set of applications; it’s great for databases, for example, where you can constrain behaviour based on running through the correct sequences of states.

State machines come in two flavours: deterministic and non-deterministic. The microwave is an example of the former; in the latter, we only map out valid states, not all states. We might apply a rule that says certain input is never valid, and error immediately; this allows us to map out only the states we expect, rather than trying to manage all possible states.

They have written up a tutorial to help you get your head around this, because guided doing is a great way of learning; it’s supplemented with information about how to map out tables and diagrams to help you model and define your program before it gets into the hands of those wicked pen testers.

against lies, h*cking lies

Anton Black/noqry

This is a talk about using statistics to make better security decisions. Anton kicks off with a dodgy survey to demonstrate the flaws in bad statistical modelling.

There are three kinds of lies:

  • lies,
  • hacking lies,
  • and statistical lies.

Statistical lies are the worst lies, because they can steer you into doing the wrong thing based on superficially convincing but flawed “science”. Like rotating passwords every 30 days. Our brains are powerful connection machines - honed to make connections between events. Unfortunately we can draw the wrong conclusion: Anton gives an example from Steven Universe, where the crystal gems erroneously decide that because the full team turns out for terrible threats to the earth, it must be the case that “more defenders” leads to “worse incidents”, when in fact the causality is the other way around.

Sometimes the real cause is a third factor: the “confounding variable”. For example, we might observe that companies with expensive firewalls have fewer security incidents. Is this because of the firewalls? Or is it because they spend more money on security generally? Likewise with password rotation: companies that rotate passwords also follow lots of good security advice, alongside bad advice like password rotation.

Unfortunately security studies rarely follow anything resembling the scientific method. We are not performing an interventional study:

  1. Observe data.
  2. Form a hypothesis.
  3. Test the hypothesis.
  4. Examine results.
  5. Make a conclusion.

Rather, we’re just doing:

  1. Observe data.
  2. Form a hypothesis.

Of course, even interventional studies aren’t perfect. An Achilles heel is whether you have a good representative sample to start with: does a vendor purporting to solve your problems have the same mix of operating systems? Do the teams have different mixes of skills and experience? Are they defending the same targets in the same context? The less an experiment has in common with your situation, the more likely it is that the study isn’t valid for you.

If you find that you can explain any result equally well, you’re rationalising, not understanding the material.

In summary:

  • Be suspicious of observational studies!
  • Security studies may not transfer to your situation.
  • Try and keep other variables consistent.

Choose your own adventure: password reset


A typical scenario: we forget our credentials for a site. No-one, he notes, ever chooses to go on a password reset adventure. These are provided so that you neither give up and stop using the site, nor, worse yet, try to talk to an actual human on a help-desk.

So how do these work? Well, the key thing here is evidence: how do you demonstrate you should be able to access the account - that you’re the right person? Your email address is probably the most reliable mechanism for achieving this; so you click on the link and ignore whatever other instructions came with the reset email.

Typically you get an email which will warn you that if you didn’t make the request, you can ignore it. Which is not necessarily true if you start seeing a lot of them. Then there’s a link - we spend a lot of time trying to convince people not to click on links, but this is really your only option.

There are no heroics in a password reset journey. It should just work, and people do it because they want to. Sadly it’s not a rewarding thing to work on, either: if it works, no-one notices, and if it doesn’t, you’re in trouble.

We typically model password reset from a person perspective: a legitimate person trying to get access to their own account, and an evil hacker. The problem is that while motivations differ, behaviours can be very similar - for example, if you use a few different email addresses, you might try several of them in a row - which is exactly what an attacker would do when trying an enumeration attack.

Moss thinks we should model instead from the perspective of the reset token: for example, we should only send a token to an email address that is valid and associated with a known account. We should distinguish between invalid tokens (which we simply ignore) and valid, unexpired tokens (which map to a real account). We should be clear about whether the password was successfully reset (which cleans up the token), and our recovery path for everything else should be “start the process again”. This also means it’s OK to allow shorter expiry times, and, finally, that you can invalidate the token on a successful reset.

Moss thinks you should re-use unexpired tokens - if someone keeps requesting a token in the 30 minute window before it expires, just re-send the same token, so people don’t get confused by multiple requests which may have invalidated one another.
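That token-centric model might be sketched like this (a hypothetical illustration, not Moss’s actual design; the names and the 30-minute window are assumptions drawn from the talk):

```rust
use std::time::{Duration, SystemTime};

// A reset token as stored server-side.
struct ResetToken {
    account_id: u64,
    expires_at: SystemTime,
    used: bool,
}

// The token's state decides what happens next - not a guess about
// whether the requester is legitimate or an attacker.
enum TokenCheck {
    Invalid,    // unknown token: silently ignore it
    Expired,    // known but stale or already used: start the process again
    Valid(u64), // maps to a live account: allow the reset, then clean up
}

fn check(token: Option<&ResetToken>, now: SystemTime) -> TokenCheck {
    match token {
        None => TokenCheck::Invalid,
        Some(t) if t.used || now > t.expires_at => TokenCheck::Expired,
        Some(t) => TokenCheck::Valid(t.account_id),
    }
}

fn main() {
    let now = SystemTime::now();
    let token = ResetToken {
        account_id: 42,
        expires_at: now + Duration::from_secs(30 * 60), // 30-minute window
        used: false,
    };
    assert!(matches!(check(Some(&token), now), TokenCheck::Valid(42)));
    assert!(matches!(check(None, now), TokenCheck::Invalid));
}
```

Re-requesting within the window would simply return the same unexpired token, and a successful reset would flip `used`, invalidating it.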

Story time: the company that bought anti-phishing software that visited every link in incoming email. Which expired the password reset tokens. Which meant you could never get back into the site.

Unsafe Defaults: Deploying Kubernetes Safer(ish)


James notes that Qantas managed to lose his luggage, and begs forgiveness for any eau-de-air travel.

So what is this k8s thing, anyway? Please understand that this is a bit like “how to build a race car in 7 minutes”. k8s provides an API, which takes configuration files to apply the policies you’ve defined. k8s is in fact a state machine - it orchestrates moving the system from state to state, taking all the necessary steps to get from state to state. From a security perspective, the API is “the good stuff” that needs to be protected.

A k8s system has a master node and worker nodes; you make a request for an application, and the master instructs the worker nodes to spin up one or more pods, which are a collection of containers. As you scale, you can add worker nodes; scaling beyond human scale is the raison d’être of k8s, so this is a common theme.

There are a lot of recommendations around installing k8s; James is a little suspicious of most of them. James notes that curling scripts and the like from random Medium or github pages are… not great. “As a community we are setting ourselves up for failure.” Please follow the official documentation; even if it’s quite overwhelming to start with, it’s the way to get a handle on what’s going on.

Some Quick Wins

By default the API binds to localhost and the first non-localhost interface. By default the localhost listener on the master node has no security and no authentication. Any shell on the master node will, by default, have full control over your cluster. This is why it’s unwise to mix worker and master functions on the same node.

Furthermore, k8s by default allows anonymous authentication; you can disable that with --anonymous-auth=false.

As well as passwords, k8s also uses tokens for service accounts; these are rotated every 24 hours by k8s. There used to be non-expiring tokens, and if you have an old install these may still be hanging around, which may not be a great idea. Everyone gets a token by default, and the token allows pods & containers to interact with the k8s API. This is very useful for containers that need to, for example, discover the k8s topology for a load balancer. Moreover, the token is stored in a directory which is chmod 777 in the container. You probably want to disable those by setting automountServiceAccountToken: false for each namespace. Because if you don’t, anyone who knows the token for a namespace can run amok in the namespace.
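Disabling the automount as described might look like this on a ServiceAccount (the namespace name here is hypothetical; the field can also be set per pod spec):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: example-namespace   # hypothetical namespace
automountServiceAccountToken: false
```

Pods that genuinely need API access can then opt back in individually, rather than every container getting a token by default.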

A new thing is the PodSecurityPolicy, which allows you to apply more default controls to containers: preventing containers from running as root, applying SELinux controls, file permissions, and so on. This is only in beta at the moment, and relies on strong RBAC being set up. You really need to be watching whether people are testing these controls - because if they are, you’ve probably got someone who has breached your cluster.
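A PodSecurityPolicy along the lines described might look like this (a sketch; the policy name is invented, and PSP was beta at the time of the talk):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example   # hypothetical name
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot   # no root inside containers
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
```

The policy only takes effect once RBAC grants pods’ service accounts `use` on it, which is why strong RBAC is a prerequisite.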

etcd
etcd provides a highly available key-value store (but only if you have three or more nodes). It’s very latency sensitive, which means it’s easy to DOS a k8s cluster by upsetting the environment it relies on. It contains all your secrets, encrypted on disk with… base64. It also listens on the network by default. By default it’s insecure and doesn’t require authentication.

Given that it is typically required only by the k8s API server, you should probably lock it down with TLS, a firewall, and authentication.

a novice red teamer’s guide to self help


Jo used to suffer horribly when being asked to speak; it was cripplingly difficult for her. Last year she decided to try to overcome this by entering the DEFCON social engineering CTF. And coming second.

Hacking Yourself

Anxiety is very much like excitement: it causes many of the same physiological responses. Jo recommends anxiety reappraisal: by saying “I am excited” you move from feeling anxious to feeling excited, from a negative to a positive experience. It doesn’t remove anxiety, but it makes it feel better emotionally.

Mental practise: the rehearsal of a physical skill without movement. Rehearsing in your mind has plenty of evidence to support the idea that it improves real performance.

Enclothed cognition: What we wear can enhance our psychological states, improve our performance, and boost self-esteem. Understanding the symbolic meaning you associate with particular types of garment allows you to dress for success, literally.

Hacking Others

From the Social Engineering Code of Ethics: Leave others feeling better for having met you. Aim to influence people’s neurochemistry by tapping into emotional responses that leverage hormones such as:

  • Oxytocin: High empathy pretexts - create trust, emotional bonding, and generosity. Jo offers the example of presenting as a stressed out mother or over-tasked intern.
  • Endorphins: Social laughter. Comedy is good!
  • Dopamine: Let people talk about themselves, feel like they’re right.

Cunningham’s law: The best way to get the right answer on the internet is not to ask a question; it’s to post the wrong answer.

Ego suspension: You need to lower your own wants, needs, and motivations, to place priority on the other person. You need to do this, because otherwise the above influences can work against you, rather than for you.

Defence Against Dark Arts

Social engineering is a huge attack vector - by far the most lucrative. Unfortunately people dismiss defending against it as “too hard”, as “telling people to be mean”, or as “pointless because people are stupid”.

  • Audit your publicly available personal information periodically to understand what’s available to be used against you; e.g. make sure your security questions don’t have answers that can be Googled.
  • Use multiple forms of communication - if people make an important request on the phone, for example, request a verification on another channel, such as Slack, IRC, SMS, Skype, etc.
  • Authenticate your IRL communications: Make sure a person is who they claim they are! Ask questions and be inquisitive.

to identity and beyond

Ben Dechrai/@auth0

“Social media is a fantastic tool… for social engineering as well as other things” - social media companies need lots of users to be effective. So all of them tend to encourage you to give e.g. Facebook your email login so they can dig through your email and find identities. Having finished scraping your account, Facebook would spam your friends with a “join Facebook!” message.

This practise - handing your login and password to another service - was bad in a number of ways, one of which was technical. OAuth was born out of the desire to avoid you handing off your login and password to a third party; instead OAuth lets you delegate certain rights, such as the ability to read a list of contacts, by logging in directly with your email provider. OAuth also facilitates use cases such as Tweetdeck posting to Twitter, or LinkedIn reading your email.

This was then extended with the UserInfo endpoint, which allows authorisation between services, not simply information extraction; this is what enables “login with Facebook” or “login with Google”. At this point there was no standard for managing this, however; that standard evolved into OpenID Connect.

Unfortunately there’s no quick summary for OpenID Connect. The shortest official summary is 7 paragraphs; Ben tries to distil that down:

  • A simple identity layer.
  • Built on OAuth 2.0.
  • Verify identity: make sure it’s the right identity from the right provider.
  • Obtain basic profile information - name, email, etc.
  • Interoperable and RESTy.
  • JSON as a data format.

OpenID uses JSON Web Tokens to pass authentication around between the user, client (consumer) application, and provider. JSON web tokens look opaque at a glance; there is a header, a payload, and a signature. The first two blocks are Base64-encoded JSON. The signature provides the assurance of information integrity. These tokens then become a great common standard for passing around identity in complex applications that would, in the past, have required handling identity in a bespoke fashion and many complex handoffs. Ben’s argument is that identity, in this context, is a solved problem and you shouldn’t be rolling your own solutions to this, when you can just take advantage of OpenID.
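For illustration, the three dot-separated segments of a token decode roughly like this (the claim values are invented for the example):

```
xxxxx.yyyyy.zzzzz            <- header.payload.signature

header    = base64url({ "alg": "RS256", "typ": "JWT" })
payload   = base64url({ "iss": "https://issuer.example.com/",
                        "sub": "user-1234",
                        "name": "Ada Lovelace",
                        "exp": 1572800000 })
signature = sign(header + "." + payload, key)
```

Anyone can read the header and payload (they’re only encoded, not encrypted); the signature is what lets the consumer verify the claims came from the right provider unmodified.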

risk management without slowing down


Risk management really isn’t that scary! Mikala started out as a pen tester, but became bored with that, because she spent her time dealing with symptoms, not causes. “Really good risk management is like a great bra: it supports and uplifts.”

Risk = badness × likelihood; inherent risk is the risk if we do nothing at all to limit it. Our risk appetite is whether we want to take a risk. Jumping off a bridge is a risk; a bungee cord is a control, which reduces the residual risk. This is something we do every day - we have a team of risk managers in our gut, consulting their spreadsheets and yelling “no, that’s too risky” when we’re about to do something that’s a bad idea.

The problem is that this gut instinct doesn’t scale. We need to be able to work together and share risk. So:

  • Please write things down.
  • If it’s worth doing, it’s worth halfassing. Something is better than nothing, and everything reduces residual risk.

A risk register:

  1. Listen to the little voice in your gut.
  2. Write it down.
  3. Work out what you want to do about it.
  4. Look at it later.

Here’s an example of a risk register:

What might suck? | How much will it suck? | How hard is it to eliminate risk? | How can we reduce the risk? | What should we do?

You need to re-visit and re-evaluate your risks over time as your context changes; you can rarely eliminate all risk, so you probably need to focus on controls.

Risk assessments: if your security person has to be across every single decision made, they will become a blocker, and no-one will bring decisions to them at all. Written risk assessments can help filter what needs to be referred.

A risk assessment can be pretty simple: leverage what questions you already use when evaluating risk. The key question is “Does this feel high risk?” as the last question after all the others. We should probably pay attention to that feeling.

incident response drills: how to play games and get good

Kirk Nicholls/@goatindex

Kirk has spent 10 years as a DM, including such gems as a Carly Rae Jepsen themed heist game; he’s also a former Air Force comms nerd who now has a job organising games for a living to help people learn about how to respond to incidents.

So what is an incident response exercise? Well, an incident is something undesirable that happens: in the air force that might be a pilot downed behind enemy lines; in security it might be a breach. A fire drill is a great example of an incident response exercise: it’s about building preparedness for a real fire. These can be anything from a standard fire drill to one where you simulate smoke and darkness as part of the exercise.

What do we need to practise? Well, we might be good at our jobs, we might have playbooks, but it’s hard to do things when under pressure, both as individuals and as groups. We practise so that we can stay calm and do the right thing under pressure. That’s because stress harms our ability to retrieve information from memory; studies suggest that repeated practise under stress can make it easier to recall under pressure.

Personal experience: relying on individual expertise is the hardest possible way to do things - it’s rife with bias, and it’s very uneven across groups of people.

Other people’s experience? Still has equity and bias problems, but learning from the knowledge of others allows you to understand and prepare for situations you have never personally experienced. It’s crucial to expand your ability to respond beyond your personal past. “War stories” can be useful, but only if the focus is providing education, not self-glorification.

Mentoring and coaching can be another way of sharing information in a more structured way, but it still requires a lot of knowledge and work for mentor and mentee.

Synthetic experiences: these require investment, but they are the most controlled, and the most scientific. You’re providing fine-grained and very directed ways to learn. These can be, for example, escape rooms, tabletop scenarios, VR, or other tools. Incident response drills are a particular version of synthetic experiences.

So how do we do it well?

  • Define what a good outcome looks like, and how you’re going to measure it. Measurement is hard. And your program needs to be cyclical, because you won’t get it right first time.
  • You need to work out the right level of abstraction for the audience. The exec probably don’t need to see bytecode.
    • Selling things is a tool: the CDC got their biggest-ever uptake of bug-out bags with a “Zombie Outbreak” bag.
  • Know who will benefit.
  • Set aside time to actually run exercises regularly. This will most likely require exec support.
  • Feedback is critical.
    • Follow up with people who don’t give feedback to understand what’s not working and why.
  • Scaling difficulty: the easiest way to do this is to play with time. If people are relaxed, give them less time; if they’re too stressed, give them more.

Some recommended books:

  • Crafting the InfoSec Playbook.
  • Intelligence Driven Incident Response.
  • How to Measure Anything.
  • How to Measure Anything in Cybersecurity Risk.
  • The Dungeon Master’s Guide.

an introduction to ghidra


It’s the 25th anniversary of Comic Sans!

Ghidra is a reverse engineering tool - that is, a tool for pulling apart programs you don’t have the source code for in order to understand them - released by the NSA in the last year; they promise there are no backdoors. You can get it on GitHub, since it was released as open source. It’s a Java app that runs on Windows, Linux, and OS X.

The interface is straightforward, but it’s a tool aimed at programmers who can work with assembler or other fairly low-level programming languages; there are a number of useful plug-ins and tools, including a function call graph that helps you get an overview of the program you’re trying to understand. Plug-ins can be written in Java and Jython. There are powerful search tools, batch programming tools as well as the GUI, and good collaboration tools. And undo.

Compared to professional tools like IDA, which typically cost thousands of dollars per license, it’s a bargain, while still being under active development; there’s excellent learning material - training manuals and exercises that teach you how to reverse engineer.
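As a taste of the Jython scripting mentioned above, here’s a minimal sketch of a Ghidra script. Note the hedge: this only runs inside Ghidra’s Script Manager, where `currentProgram` is predefined for you - it won’t run as standalone Python.

```
# Minimal Ghidra Jython script sketch: list every function Ghidra has
# identified in the currently open program. Runs only inside Ghidra's
# Script Manager, where `currentProgram` is provided by the environment.
fm = currentProgram.getFunctionManager()
for func in fm.getFunctions(True):  # True = iterate in address order
    print("%s @ %s" % (func.getName(), func.getEntryPoint()))
```

Even a three-line script like this hints at why the batch tooling is useful: anything you can click through in the GUI, you can automate over a whole directory of binaries.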

protecting people from social media harassment

Tom Eastman/@tveastman

Tom is talking about [Secateur](https://secateur.app/) today.

  • Twitter’s abuse protections are bad on purpose, because they make money from engagement - and dogpiles and abuse are super engaging.
  • Secateur is designed to help vulnerable people manage attackers by blocking them.
  • One of the challenges writing Secateur is that Tom doesn’t want to make things worse for people, so he needs to consider how attackers could misuse the application.
  • Twitter’s permission model is garbage, unfortunately.
  • The first task was to compartmentalise Tom’s personal Twitter life and credentials from the Secateur app credentials.
  • The second is to consider how to limit access to the credentials provided by users.
    • Since the application only blocks for short periods - 6 weeks at a time - the creds don’t need to outlive a block.
    • So creds are thrown away after 6 weeks.
  • Moreover it’s open source; that allows it to be audited and run by individuals themselves.
    • A big part of that is taking “trusting Tom” out of the equation.
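The throw-away-credentials idea is simple enough to sketch. To be clear, this is not Secateur’s actual code - just a hypothetical Python illustration of the principle described above: store a user’s token alongside a timestamp, and once the six-week block window has passed, refuse to return the token and delete it.

```python
import time

SIX_WEEKS = 6 * 7 * 24 * 60 * 60  # block duration, in seconds

class CredentialStore:
    """Holds user tokens only as long as an active block needs them."""

    def __init__(self):
        self._tokens = {}  # user_id -> (token, stored_at)

    def store(self, user_id, token, now=None):
        stored_at = time.time() if now is None else now
        self._tokens[user_id] = (token, stored_at)

    def get(self, user_id, now=None):
        now = time.time() if now is None else now
        entry = self._tokens.get(user_id)
        if entry is None:
            return None
        token, stored_at = entry
        if now - stored_at >= SIX_WEEKS:
            # Block has expired, so the credential is no longer needed:
            # discard it rather than keep it around as a liability.
            del self._tokens[user_id]
            return None
        return token
```

The design point is that expiry is a security feature, not a limitation: credentials that no longer exist can’t be stolen.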

face your fearful foes to dodge a dark and dreary phishy fate

Brendan Shaklovitz/@nyanshark

  • The most important tool hackers have is time to do recon.
  • Social media is a great way to work out what people have done and what they use - since your LinkedIn profile shows your tech stack, well, that’s useful.
  • Likewise workplace pictures might show that everyone e.g. uses Macs and Slack.
  • This lets an attacker tailor to your interests - for example, offering you a special pre-release purple Slack.

So how do we defend against attackers, who have a lot of time to do recon and design attacks?

  • MFA - especially from a token or similar OTP - makes it much harder to steal useful credentials.
  • Build a strong security culture where every employee feels like they’re a part of the security team.
    • It’s really important to avoid a punishment culture.
    • Don’t think poorly of social engineering targets.
    • Privilege rewards over blame.
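On the MFA point above: token-based OTPs are typically TOTP (RFC 6238), which is just HOTP (RFC 4226) driven by a counter derived from a 30-second time window. A minimal sketch using only the Python standard library - illustrative, not a vetted implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: last nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238 time-based OTP: HOTP over a 30-second time counter."""
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t) // step, digits)
```

Because the code is derived from a shared secret plus the current time, a phished code goes stale within seconds - which is why OTP tokens blunt the credential-stealing attacks described above (though hardware keys like U2F are stronger still, since they also verify the site).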

security confessions of a small country

Laura Bell/@ladynerd

“I’m excited to be here. I haven’t given a security talk to a security audience for years.”

The talk is about how we do security in our place. “Warning: homework ahead”. “The security world you live in is here. We are tiny, nestled in between Ireland and the Central African Republic size-wise”. The challenge, Laura notes, is that we listen to advice from big countries - places like the United States. And it’s dangerous when we take advice from those countries - because they’re built to scale up. NIST does not scale down. We have more people working as rafting guides than security folks.

Because we’re an export-driven country, we often have to grapple with these standards - and we are too quick to take advice from these big countries without thinking about how we solve these problems at our scale. And we don’t have “polite offices”, which, as Laura points out, makes that advice a complete mismatch with where people actually work. In fact, only 20-30% of New Zealanders work in a traditional white-collar office. Worrying about Active Directory is not necessarily a useful concern for an apple farmer or a trucking operation.

We’re not, alas, very good at security out of the box. Even our fauna aren’t! We have birds that don’t fly! Frogs that don’t have a tadpole stage! But we somehow keep those fauna alive anyway.

So what are our vulnerabilities? Well, we’re trusting! It’s awesome. But it’s also open to exploitation by people who aren’t so nice. We’re slower to adapt to changing threats.

But on the upside, we improvise. We’re not paralyzed by perfectionism. And we’re prepared to take risks. Those have counterpoints, though: we leave things unfinished. We underestimate complexity. But rather than fighting this, we need to take advantage of these things. Support standards that measure quality; talk openly about complexity; hold ourselves accountable for completion.

We do need to be more selective about who we trust; be prepared for a defensive response; and, most importantly of all, develop threat models for our context rather than blindly importing them. Take advantage of our small size to be responsive, and work well together.

Laura has launched https://opensecurity.nz, a volunteer group to create a non-commercial space for sharing documents and tools to provide a reference library to help one another. To use our superpowers to protect dumb birds and frogs that shouldn’t exist.

This was a fantastic talk and my head has been full of Thoughts because of it.


I had to leave before the formal closing, unfortunately (if you were wondering which dick was walking through the live stream camera while they were setting up, that was me. I hope you love my fabulous shirt.)