LCA 2016 Day 3

Day three kicked off with a great keynote, of the “unexpected pleasure” variety, nearly as good as my introduction to Biella Coleman back in 2010. From “who is this person” to “OMG HOW DID I NOT KNOW ABOUT THIS PERSON” in an hour.

Keynote - Life is Better with Open Source

Catarina Mota

“Linux paved the way for the community of open source hardware and developers.”

Catarina started out as a filmmaker, working on 16mm film: a completely transparent medium, working hands-on with reels of film, splicing and editing. It drove a direct relationship with the film, and was the first time she understood the way that the relationship with the machine influences the process.

While applying for the NYU film school Catarina literally took a wrong turn and ended up applying for the Interactive Telecommunications Program. She was astonished by what was being built, and most astonished that it was being built by ordinary students, not researchers from PARC. They were building devices that seemed like black magic.

“The fascinating and important thing was that we were mostly not geniuses” - we were encouraged to take risks and try things. Failing was acceptable and encouraged, creating a safe environment for learning. Sharing of knowledge was strongly encouraged, sharing circuit designs, code, ideas.

“When I graduated I thought that was how the world worked.” She was in for a terrible shock.

Fast forward to 2009: the European hacker spaces of the 80s were inspiring the same in the United States. Catarina began building hacker spaces in Portugal as well, and January saw her working on a homebuilt 3D printer. The tools were becoming available beyond select institutions.

What is happening?

The DIY culture of software was spilling into hardware: Arduino, Raspberry Pi, etc. People began to build things ranging from the serious to the whimsical. At the same time, personal digital fabrication tools became available, of which the RepRap is a key example.

DIY Hardware includes data collection, scientific equipment, satellites, even housing designs built with open source machinery. Catarina’s house, for example, is an earth brick design, designed on-line, created from modular components, and built barn-raising style. The goal is to end up with a library of modular house design components that can be used to improve the accessibility and cost of housing.

Why? Because we can!

There is a fundamental tension emerging in the modern world between passive, consumer-oriented culture (“Watching TV has become a part-time job”) and spending time editing Wikipedia, writing code, and so on. Traditionally, the practical barriers to copying physical things have been so high that discussing the right to copy hardware has, pace Richard Stallman, been pointless.

Catarina asked us to watch Full Printed by Nueve Ojos:

3D printers aren’t the Star Trek replicators we want them to become, but they are playing a critical role in encouraging the public to work on these problems.

Why Does It Matter?

Technologies are not neutral. There is a mindset that says since technology is merely a drive to efficiency, it is neutral, but this is not true; technologies are designed and developed based on the desires, resources, and so on, of their developers.

Catarina compares a smartphone ad (which emphasises the passive consumption model) to the pitch for Phonebloks, which emphasises the ability to tinker.

A distributed approach is needed, where users are co-creators, and where devices are meant to be repaired, transformed, adapted, and appropriated.


Questions

  • Compliance for buildings? Catarina lives somewhere with no building codes. There is work in progress to certify particular modules against various building codes so they can be used more widely.
  • What incentives do manufacturers have to get on board with this movement? Just as businesses have been able to build business models around free software, the same will happen with open hardware; we need to establish examples.

Using Linux Features to Make a Hacker’s Life Harder

Kayne Naughton

Why Bother?

  • Most people are trying to stop attacks outright, but that’s too hard.
  • “I want to make it difficult” so the attacker goes elsewhere, just like house and car security.

Threat actors aren’t a force of nature - they’re just people like you and me. If it gets too hard they’ll go away. (It’s worth noting the same notion has come up at a previous Kiwicon.)

Cyber Kill Chain

Lockheed Martin’s “Cyber Kill Chain” is a way of breaking attacks down. Think of it as an OSI model for security.

(Kayne’s slide with humorous cybercybercyber shows Laika BOSS - Open source file-centric malware intrusion detection).


Reconnaissance

  • Do not overlook the influence of things like LinkedIn or other social media. You can bypass the LinkedIn privacy controls by pretending to be a recruiter or by using Google; both get a free pass.

  • Consider using fake people on LinkedIn and see who tries to contact them, for example.

  • Keep an eye on e.g. StackExchange or other forums to make sure people aren’t leaking information about your internals.

  • scapy: you can forge your ICMP responses (or not).

  • Shodan: Shodan is a port-scanning search engine, and a go-to tool for researching targets. If you see traffic coming from it, feel free to spoof responses by closing and opening ports based on who’s looking.

  • iptables --recent can be misused horribly, though; it’s easy to block things you don’t mean to.


  • Python interpreter that lies about indentation errors. Frustration for anyone trying to deploy their script! This replaces the standard Python interpreter on your system and always throws whitespace-related errors, leaving your attacker endlessly editing a script until they figure it out.
  • You can play similar stupid tricks with other interpreters or compilers you leave lying around on a system.
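The lying-interpreter trick can be sketched in a few lines. This is my own illustration of the idea, not Kayne's implementation: a wrapper that usually fails with a fabricated whitespace error, which you'd install in place of the real interpreter on a honeypot host (the function name and error rate are invented for the example).

```python
import random

def run_with_fake_errors(source, filename="<script>", error_rate=0.7,
                         rng=random):
    """Run Python source, but usually fail with a bogus whitespace error.

    Drop this logic into a wrapper installed as the system python on a
    honeypot host, and anyone who lands there spends their time chasing
    indentation errors that don't exist.
    """
    lines = source.splitlines() or [""]
    if rng.random() < error_rate:
        lineno = rng.randint(1, len(lines))
        # Fabricate an IndentationError pointing at a random line.
        raise IndentationError("unexpected indent",
                               (filename, lineno, 1, lines[lineno - 1]))
    # Otherwise behave like a normal interpreter, just often enough
    # to keep the victim guessing.
    exec(compile(source, filename, "exec"), {"__name__": "__main__"})
```

Behaving correctly some of the time is the cruel part: the attacker can never be sure whether their edit fixed anything.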


  • Rate limiting. This is the most annoying thing in the world for an attacker trying to exfiltrate data.
  • iptables -A INPUT -m statistic --mode random --probability 0.04 -j DROP is a wonderful way of driving attackers up the wall.
  • slowtalk.
  • If it upsets pen testers, it will upset problem children.
  • It’s petty, but it’s cool.
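The "slowtalk" idea amounts to a tarpit: answer, but agonisingly slowly. A minimal sketch (the banner text and delay values are my own illustration, not from the talk):

```python
import socketserver
import time

# Fragments of a plausible-looking SSH banner, dripped out one piece at
# a time so a scanner ties up a connection slot far longer than usual.
BANNER_FRAGMENTS = [b"SSH-2.0-", b"Open", b"SSH_", b"8.9", b"\r\n"]

class TarpitHandler(socketserver.BaseRequestHandler):
    delay = 2.0  # seconds between fragments; crank it up in anger

    def handle(self):
        try:
            for fragment in BANNER_FRAGMENTS:
                self.request.sendall(fragment)
                time.sleep(self.delay)
        except OSError:
            pass  # the client gave up waiting, which is the point

# Usage sketch: park it on a port you don't use and let scanners stew.
# socketserver.ThreadingTCPServer(("", 2222), TarpitHandler).serve_forever()
```

If it upsets pen testers, as above, it will upset problem children.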


This isn’t necessarily a technical problem - it’s much easier to convince people to break their own system by clicking on a link than to run a more sophisticated attack.

  • SQL injection is still a favourite, so don’t fall victim to it.
  • But hey, if you notice it, why squash it? Why not send them entertainingly wrong data or to a different server. Hours wasted on junk is hours not trying to find actual weaknesses.
  • Meterpreter (port 4444) for Metasploit. Intercept this and fuck with it. Seeing it on your network is a Bad Sign, but again, if you tinker with the output you can cause an attacker to waste time or give up.


  • Messing with mounts and permissions. Use unionfs and inotify to overlay a directory; the attacker writes their exploit to the filesystem, then inotify swaps the overlay directory away, and your attacker is horribly confused. This is the flip side of the sort of techniques an attacker might use to hide their tools.
  • Containers and cgroups can drop people into frustrating oubliettes. Idempotent containers should be read-only, so attackers can’t install anything.
  • Similarly, a blue-green deployment model will churn installed attack tools out of the environment within hours, leaving attackers having wasted their time.

Command and Control

If they’ve got their tools installed you’re well screwed at this point. If you’re trying to block this it’s really hard.

  • VNC is still horribly exposed all over the world, often with common credentials.
  • SSH - if you see it entering or leaving in places you don’t expect, that’s suspicious. You could MITM an attacker’s SSH streams on their first connect to a new system and have fun.

Action on Objectives

Note that what the other party wants to achieve doesn’t line up with what you, as a defender, think is important. Think about other people’s motivations.

  • Looking for geolocation data in the pictures used by 419 spam can be an entertaining way of tracking the scammers.
  • Consider fake people in, e.g., your procurement team. Route suspicious requests to the fake payroll person and interact with the scammers: “Oh sure, we’ll pay that bill, send me your bank details.”
  • Don’t reveal mechanisms. Give them no feedback as to why things have or haven’t moved. Don’t say “your card is stolen, no camera for you”; send them a fake receipt. Again, you’re wasting their time and effort.
  • SQLMAP has a wizard! It’s great for pulling data out of DBs. But its default user agent contains “sqlmap”, so send fake results if you see it making requests.
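The deception tricks above boil down to branching on the client before you touch real data. A toy sketch of the sqlmap case (the fake-row shape and the lookup hook are my own invention):

```python
import json
import random

def fake_rows(n=25, rng=random):
    """Plausible-looking junk for an attacker to pore over."""
    return [{"id": i,
             "user": "user%04d" % rng.randint(0, 9999),
             "password_hash": "%032x" % rng.getrandbits(128)}
            for i in range(n)]

def handle_query(user_agent, real_lookup):
    """Serve real data to real clients, endless junk to sqlmap.

    sqlmap's default User-Agent contains the string 'sqlmap', so a
    naive scan is easy to spot; a smarter attacker will change it,
    which is why this is a tripwire, not a defence.
    """
    if "sqlmap" in (user_agent or "").lower():
        return json.dumps(fake_rows())
    return real_lookup()
```

Hours spent dumping fabricated credential tables are hours not spent finding actual weaknesses.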

Six Ds

The principal techniques for countering attackers: Detect, Deny, Disrupt, Degrade, Deceive, Destroy.


Detect

e.g. SIEM. This is great because the attacker doesn’t know you’ve found them. Entirely passive at this point.


Deny

Checkpoints - a popular mechanism, because it’s where the money can be made. It’s binary, and often easy to game or bypass.

Don’t disclose your rules, make attackers work to understand how to bypass them.


Disrupt

e.g. stopping outbound traffic containing sensitive information to interrupt exfiltration attempts. The slowdowns and packet drops above, for example, are about making this hard or impossible.


Degrade

How do you make information less valuable to the attacker?

  • Don’t log all the things. Don’t leave information lying around if you don’t actually need it. This ties in with Bruce Schneier’s idea that data is a toxic asset.
  • Segment your data where you can.
  • Tune your password algorithms to the production hardware - use as many rounds as you can get away with.
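Tuning rounds to the production hardware can be automated with a quick benchmark. A sketch using stdlib PBKDF2 (the target time and starting count are arbitrary illustrative values):

```python
import hashlib
import os
import time

def tune_pbkdf2_rounds(target_seconds=0.1, start=100_000):
    """Double the iteration count until one hash takes ~target_seconds.

    Run this on the production hardware - the answer on a developer
    laptop is meaningless. Store the chosen count alongside each hash
    so it can be raised later without breaking old passwords.
    """
    salt = os.urandom(16)
    rounds = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, rounds)
        if time.perf_counter() - t0 >= target_seconds:
            return rounds
        rounds *= 2
```

The point is to make offline cracking of a leaked hash as expensive as your login latency budget allows.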


Deceive

The guts of the talk above: creating fake problems that produce fake data will make you a target that isn’t worth bothering with.


Destroy

We don’t really have a ballistic capability in most IT security teams, but it’s probably just as well we don’t.

What do you gain by shutting down an opponent via the police, for example? If you spend two weeks knocking down someone’s botnet and it takes them a day to set up a new one, is that a win?

From my perspective: my experience is that attackers looking for economic gain will actually steer clear of targets who are more likely to try and get them arrested.

Action Matrix

This is a grid mapping the Phase to the Six Ds.

Counter Intelligence

  • Track e.g. SSH keys and trace them back to GitHub. You then know who is fucking with you. Call their boss!
  • Send them cookies!


Questions

  • How effective are WAFs? Great against 80-90% of the people out there. Pretty cool, unless you sell oil by the barrel.
  • How do you handle support problems that result from denying users information about what went wrong? Catch the error message sent back to the user, and keep the real message private.
  • What’s your feeling on counter-attack? If you assume the person at the other end isn’t another victim, you are probably making a very grave mistake. You can end up in court. Just don’t.
  • Is there anyone who maintains IP blocklists that are sane and narrow? Not really.

Open Source Formal Methods: Are We There Yet?

  • Why formal methods? “Because I care about bugs.” When working on an RTOS used in medical or transport applications, bugs can be pretty serious.
  • Even mainstream commercial projects may end up becoming dependencies for critical code.

So what can be done? Software analysis!

Software Reliability

  • Standardization and certification: ISO/IEC certs, regulatory bodies, and so on.
  • Fundamentally, many of these standards encode best practice.
  • Unfortunately many of them are also manual, and subject to human error.
  • Others require automation.
  • The three techniques in this talk sit on an “iron triangle” of tradeoffs.

Static Analysis

Fundamentally it’s an automated code reviewer. It doesn’t compile or execute your code, but the analyzer needs to perform many of the same jobs the compiler does (parsing into an AST, for example).

There are quite a large number of FOSS tools in this space: BLAST, cppcheck, Eclipse, Frama-C, the LLVM/Clang plugins, Sparse (kernel oriented, with kernel-specific code patterns).

eChronos uses Splint, since it’s a good fit for their needs.

  • Checks the naming conventions.
  • Language model and rule matching: type checking, function interfaces, data handling.
  • Similar to compiler setup.
  • You want all warnings showing as errors…
  • …excluding known platform work-arounds like inline assembler.

The Good:

  • Improved auto-generated code and readability.
  • Found errors (e.g. incorrect character conversion, wrong types returned).

The Bad:

  • Code that isn’t used in the OS layer itself, but that applications need, gets flagged as unused. This was a common problem, but can be worked around with annotations.
  • Would complain about type checking that, if changed, would arguably result in less readable code. Do you change idiomatic code to keep your analysis tool happy?

The Ugly:

  • Splint was failing to notice properly defined macros and declared variables.
  • Unmaintained.


Questions

  • Valgrind? It finds memory problems of a kind that don’t exist in eChronos.
  • Was it a good use of time? Yes, absolutely.
  • It’s been 40 years since Ada tried to do this; should we be trying to make C/C++ work better, or should we be moving to new languages or extending existing ones? That would be ideal, but with embedded hardware we often can’t use better toolchains.

Model Checking

  • Model: Your application (in the modelling language).
  • Specification: Your requirements (in modelling language).
  • Model checker: creates a logical expression which represents all possible model states, evaluates for truth.

For a long time, Model Checking 1.0 involved e.g. discovering that you’re supposed to convert everything into LOTOS in an error-free manner and then use it, which is “not my cup of tea.”

Research has come a long way since, though: consider the C Bounded Model Checker (CBMC).

  • It understands C natively and parses it; no separate model is required.
  • Supports standard libc/OS primitives, so it can tell if your code is calling problematic libraries.
  • Understands concurrency such as pthreads.

Integration for eChronos was straightforward:

  • Runs against source.
  • Reuse existing assertions.
  • Stub out platform-specific assembler.
  • Cope with the different understanding of the concurrency models: eChronos task switches on a uniprocessor, but CBMC’s model of concurrency is to work as though it’s on two truly parallel processors with no sync/locking problems.
  • This required teaching CBMC a new task-switching model that mimicked the eChronos execution schedule.
  • The authors fixed one bug they uncovered in a day.

The results:

  • The scalability is not ideal - 300 lines of code and a trivial application take 3 minutes to run 50 iterations of the running application.
  • There’s more work needed: you probably can’t feed in your whole application and get a good result.
  • You’ll probably need to slice out the critical sections you care about.

Are We There Yet?

  • Static analysis is; there’s no good reason not to use it.
  • Model checking has made impressive progress, but there’s no out-of-the-box experience.

The Future Belongs to Unikernels

Andrew Stuart

Starting with the title: it was a click-baity title and summary (“Linux will soon no longer be used in Internet facing production systems”) which is a bit of a stretch.

Andrew’s goal is to convince us that unikernels are the future of publishing applications to the Internet. Andrew has been working with the rumpkernel project since last year. It’s still a work in progress and can be a bit hairy.

A “specialised, single address space machine image” - kinda like a DOS application. A key concept is you pull in only the OS components/API bits you need, rather than a full libc/userland/filesystem/etc etc. There’s nothing that can be logged into.

They’ve become interesting and significant in part because Xen and KVM make it possible to run guests with a very minimal set of device drivers, making it easier to develop new kernels; they’re small and simple (a few megabytes) and fast to boot (milliseconds).

Many of the unikernels are language-specific, which can constrain your development/deployment options.

  • MirageOS
  • Rump
  • HalVM
  • Ling
  • and friends.

Flavours of Unikernel


MirageOS

  • One of the most mature, and with a high profile.
  • Developed in Europe, written in OCaml.
  • Applications need to be written in OCaml.
  • Docker have bought the company behind MirageOS.


HaLVM

  • Haskell.
  • Backed by Galois.

Microsoft Drawbridge

  • “Picoprocesses” and a lightened Windows.
  • Still a research project.
  • Working towards a unikernel.


Ling

  • Erlang on Xen.
  • Cloudozer systems.

Peeve: at this point there was a diversion into slagging off “obscure languages”. If you don’t know that Erlang runs large chunks of the world’s telephony systems, you’re just embarrassing yourself.


runtime.js

  • A JavaScript unikernel deployed on V8.


OSv

  • Stripped down version of FreeBSD.
  • No users.
  • Allows JVM or POSIX applications.
  • Cloudius Systems.

rump kernels

  • Portable, ready-to-integrate, kernel-quality OS drivers running on a documented interface.
  • There are examples of rumpkernel implementations of e.g. nginx, mysql, and so on.
  • You can often run code with little to no modification.
  • You can’t fork() under a unikernel, so you need to use threading for concurrency.
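The fork() restriction means a pre-forking server pattern has to become a thread pattern. A minimal illustration of the shape of that change (the doubling "work" function stands in for per-connection handling):

```python
import queue
import threading

def run_workers(items, handler, workers=4):
    """Process items on a thread pool: the unikernel-friendly shape of
    a job that would traditionally fork a child per task."""
    jobs = queue.Queue()
    results = queue.Queue()

    def worker():
        while True:
            item = jobs.get()
            if item is None:
                return  # poison pill: this worker is done
            results.put(handler(item))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for item in items:
        jobs.put(item)
    for _ in threads:
        jobs.put(None)  # one poison pill per worker
    for t in threads:
        t.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)
```

Everything shares one address space, so the isolation you'd have got from fork() has to come from discipline (or from running more unikernel instances).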


For the most part, Linux machines have user logins available; that is, there’s a functioning userland. Andrew thinks we will eventually get to the point where logging in from the Internet will just go away.

“There’s no door in a unikernel.”

This moves even further along the route of immutable infrastructure - stateless machines that can’t be modified.

At this point the talk was derailed by people profoundly beardhurt at the idea that there might be something better for a given job than a full Linux/POSIX stack, taking potshots at the presenter.

We layer a lot of security around trying to prevent access to the user layer, which becomes unnecessary in the unikernel context1. It significantly reduces the attack surface.

MirageOS have a Bitcoin Piñata on the Interwebs: a unikernel which, if you crack it, gives you the key to a payload of bitcoins. It’s gone a year without being owned.


Questions

  • Observation: many applications aren’t remotely manageable without a login. My observation: those applications are bad and the people who write them need to get with the program.
  • Does it improve density? Not hugely - perhaps 10% up, 10% down.

Challenges When Scaling: Adventures in Swift’s Sharding

Matthew Oliver

This is the Swift from OpenStack, not the Swift from Apple.

Swift overview

  • Object storage system.
  • Eventually consistent.
  • Hugely scalable.
  • High concurrency.
  • Can use commodity hardware.
  • Flexible.
  • Multi-region.
  • Customisable - plug-ins allow you to examine and modify in-flight data, by e.g. adding metadata.
  • Storage policies.

Swift has 3 key resources: the account, container, and object. An account can have many containers and a container can have many objects. Hashes are derived from these parameters with the intent of allowing sharding, but only at the account/container level. When people move from Amazon S3, though, they tend to keep a structure of a handful of containers with boatloads of objects in them.
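The hashing scheme works roughly like this. A simplified sketch, not Swift's actual ring code - real Swift adds a per-cluster hash suffix and a much larger partition power:

```python
import hashlib

PART_POWER = 8  # 2**8 = 256 partitions; a toy value

def partition_for(account, container=None, obj=None):
    """Map an account/container/object path to a partition number.

    Note the shape of the problem: an object's partition depends on
    the full path, so objects spread evenly across the cluster, but
    the container listing itself lives in a single SQLite DB whose
    partition comes only from /account/container - and that DB is
    what grows without bound when one container holds everything.
    """
    path = "/" + "/".join(p for p in (account, container, obj) if p)
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)
```

A million objects hash to a million well-spread locations, but they all still register in the one container DB.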

The first effort at solving this is to treat accounts and containers as objects. Under the hood, these are implemented as SQLite databases, and scattered throughout the cluster; this can cause problems once they become large: replication and consistency merges eventually cripple performance.

And that’s the problem to solve.

  • Partition the accounts and containers - i.e. the SQLite DBs.
  • Using rings to map partitions to replicas to devices - this is handy for forcing large partitions onto SSDs (for example).

POC 1: Object Hashing

Uses the Swift sharding algorithm to shard the container, and places the metadata for the shards into a hidden account. The account only ever sees a single container, but under the hood it’s broken into a set of containers.

  • Code already exists.
  • The sharding proxy can determine where the container shards need to go, so there are no additional requests to find what goes where.
  • A back end daemon manages this.

Unfortunately, this approach can result in the shards being reassembled in such a way that the ordering returned (from reassembling the shards) changes as the sharding changes (unsharded to sharded, or the number of shards).

In a worst case scenario the actual object data can be sent back out of order, and as such it appears the data is corrupt.

POC 2: Distributed Prefix Trees

Using prefix trees allows for infinite levels of sharding; it is adaptive to usage patterns, even as they change. We don’t need to shard the SQLite DB, just another table inside the DB.

This is managed by the container sharder, which lives on every container node, and manages the sharding process.

Unfortunately, large containers blow out the search times to find where things are, large clusters become overwhelmed by the increase in request numbers, and the cost of HEAD requests blows out.

POC 3: Pivot points (binary tree)

Very similar to 2, but a simplified binary tree layout. Much, much faster and smaller.
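Pivot-point lookup reduces to a search over sorted split points; over a sorted list it's just a bisect. This is my own illustration of the idea, not the Swift POC code (the shard-index convention is invented for the example):

```python
import bisect

def shard_for(obj_name, pivots):
    """Return the index of the shard holding obj_name.

    pivots is the sorted list of object names at which the container
    was split; n pivots give n + 1 shards. A binary tree of pivot
    points, as in POC 3, answers the same question in O(log n), and
    splitting a hot shard just inserts one more pivot.
    """
    return bisect.bisect_right(pivots, obj_name)
```

Compared with a general prefix tree, the state is tiny - a sorted list of names rather than a trie - which is where the "much, much faster and smaller" comes from.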

There are some TODOs: shrinking doesn’t work, for example.


Lessons

  • Keep it simple.
  • Make use of the community - there are smart people with great test cases.
  • Stay relevant.

  1. I disagree a bit here, mostly because I think the focus on winning the userland is actually a pretty ridiculous part of the infosec world; if you win the app it can be complete success from the point of view of the attacker.