This year we have a not-Kiwicon; many of the backroom Crüe are Kiwicon-ers, but a fresh injection of folks gives us all a change of pace. Same awesome content, same friendly feel, but cuter! (And still a fine dose of metal grinding out of the Michael Fowler sound system as we get seated.)
Nevertheless we have metl opening the proceedings, only to be chased off by bogan and replaced by electronic music, sparklers, and sput doing the talkin’ and welcoming. This year we don’t have a traditional CTF, but there is a hack-and-learn experience, and a physical security exercise. Tonight’s movie is The Matrix.
Hey Look Ma, We Made It
Runa Sandvik/@runasand
“Everyone has a different path to being here today”: just because you didn’t start by writing exploits doesn’t mean you aren’t a part of the community. Until she was 15 Runa wanted to be a lawyer - and then her mother brought home a computer, and she decided it was a lot more fun to work out how things worked, to pull things apart, and to do things you ought not to. She sees a lot of overlap between this and investigative reporting: it’s about finding sources, about understanding what’s really going on, and often working against adversaries who don’t want you to learn those things.
One of the things Runa finds fascinating is the different paths people have taken to be here; she initially fell in with a group who were very aggressive, offputting, and caustic towards women who came into their orbit - and while those attitudes are less tolerated today, they were more normal at the time. But it’s also an interesting comparison with the kind of paths her friends followed to end up in medicine or teaching or law - which were far more deliberate. Many of us are here by accident.
Because of that accidental path, Runa got used to hearing her parents lament that she needed to “get a proper job” instead of hanging out at hacker and gaming cons, or spending “too much time” with computers. It almost convinced Runa - she started thinking about getting the degree, the kids, the house, the picket fence. But she’s mostly over that - although it sometimes comes back to haunt her. Following the interesting path, though, has led to a better life.
“Find out what the one thing you want to do is, and make it work,” Runa says. She started out working on a Google Summer of Code project, and hung around - and eventually turned it into a job: she began a project to educate journalists in how to use the Tor browser properly, and that quickly escalated, because there’s a whole world of infosec that goes along with this that journalists were mostly uneducated in. So Runa pitched to the industry that this was a problem that needed to be fixed, and was able to turn it into a living. There was a need, and Runa filled it. Another example is car hacking: we’ve gone from demos of breaking into a Jeep to a mainstream sub-specialisation in automotive security.
Runa ended up talking to the CISO of the New York Times about the challenges of working with the newsroom, which ultimately turned into a job. Sometimes that meant training; sometimes that meant phishing - which was how she introduced herself to the journos. She would send out monthly phishes and report on the results, including a competition to see which desk (team of reporters) was doing best and worst at the phishes. And that created far more positive engagement between the newsroom and the security team, where there had been none before.
Runa encourages us to do the things we care about, and not to feel alienated from “security” because we don’t do a particular sub-set of the field. She then moves on to talking about how the newsroom is secured:
- Different desks and bureaus will have different threats and concerns.
- For example, the building may not be under the NYT control.
- In China all assistants have to be Chinese citizens.
- Customers often need to click on links and open attachments.
- Traditional advice, like “don’t click on links” is worthless.
- Education on what’s good and bad becomes critical.
- Sometimes the only solution is an extreme measure like an air-gapped system.
- The newsroom is deadline-driven and product-focused.
- You can’t stop page 1 hitting the presses.
- You can’t stop people getting the next Snowden dump.
- The solutions we create have to be usable and reliable.
- If your solution doesn’t work for the journalists, they won’t use it.
- If a 20-second task now takes 20 minutes, it won’t be used.
- Need to raise awareness, build relationships, build trust.
- The journalists’ ongoing trust in infosec, and the sources’ ongoing trust in journalists, is critical.
- There’s a tension between journalists shielding sources - even from their own infosec team - and the infosec team needing to know enough about the source to model the threats the journalist and source will be facing.
So how do you manage confidential news tips? And how do you make that secure and repeatable? Traditionally this was one-on-one conversations between journalists and sources; that doesn’t scale, and it doesn’t work if you don’t already know a journalist. This led to the NYT building a system which allows potential sources to communicate with the Times via Signal, WhatsApp, SecureDrop, and so on.
“Journalists are mission-driven and it’s easy for us to put the work first. I urge everyone to remember they only have one life.” - @mjenkins
Runa disagrees with the idea of work-life balance. “Do what you want and do as much of it as you can.” One person clapped at that.
In an industry where I personally know one person who has committed suicide in the past year, and multiple people who have needed to quit or take months out of work for stress-related illness, this is a shitty and irresponsible message to be running on stage, and it disappoints me to hear it here. If infosec has a problem with people not working themselves hard enough, it’s tiny compared to the vastly greater number who need to find a better and healthier way to lead their life.
Automotive Control Systems Security: Where am I going and why am I in this handbasket?
Peter Gutmann
Automotive security has a lot of alarming headlines that boil down to WE’RE ALL GOING TO DIE. The problem is that our cars are now full of computers and microcontrollers, all too many of which are connected to an entertainment unit - which runs high-power Bluetooth that can be hacked from five cars away. But it gets worse! We’ve added MirrorLink. And remote control via the phone! And most of these protocols are “security holes with a wire format”.
The head unit hooking into the CAN bus is typically the attack vector. Many of them run dodgy OSes like WinCE, Android, or similar.
AUTOSAR is the architecture the car makers got together to create, to try to make car control systems reliable and secure. The main goal is dependability - when you’re driving down the road at speed, you want something you can rely on. One key aspect of a “dependable” system is that it needs to cope with many faults without failing catastrophically. Faults create errors, which are supposed to be mitigated, and there are layers of mitigation: sanity-checking sensors to validate them, having multiple redundant sources, signal quality metrics, watchdogs to kill hung software, heartbeats to check that critical processes are running, quorums that let systems check on themselves, and execution sequence monitoring.
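To make that layering concrete, here’s a minimal sketch of two of those mitigations - redundant-sensor voting with plausibility checks, and a heartbeat watchdog. (Illustrative Python only; real AUTOSAR code is C running on lockstep microcontrollers, and the thresholds here are invented.)

```python
# Illustrative only - real AUTOSAR code is C on lockstep microcontrollers,
# and these thresholds are invented.
import time
from statistics import median

def plausible(reading, lo=-40.0, hi=215.0):
    """Sanity check: reject readings outside the physically possible range."""
    return lo <= reading <= hi

def fuse(readings):
    """Redundant sources: drop implausible values and vote with the median."""
    good = [r for r in readings if plausible(r)]
    if len(good) < 2:
        raise RuntimeError("too few plausible sensors - enter degraded mode")
    return median(good)

class Watchdog:
    """Flag a task that stops emitting heartbeats so it can be restarted."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self):
        self.last_beat = time.monotonic()

    def expired(self):
        return time.monotonic() - self.last_beat > self.timeout_s

# e.g. fuse([88.1, 87.9, 300.0]) tolerates one faulty sensor -> 88.0
```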
Reliability trumps everything else: fail-safe systems like “limp-home mode” will disable more and more capabilities of the car while still allowing it to run safely.
- Systems must be fault tolerant.
- Accept that sometimes the worst thing you can do on a fault is to halt. Sometimes you need to keep going, even in degraded mode.
- Contrast that with crypto, where there’s zero fault-tolerance: crypto world fails on any fault.
- Which means automotive security and crypto are running at cross purposes.
AUTOSAR has a lot of constraints: for example, all code runs in NVRAM, not RAM. You can literally reboot going around a racetrack (Peter gives the example of a ‘96 Toyota being rebooted at 180 km/h doing just that, with no ill effect to the driving experience). Everything is done at build time, with no dynamism in the system. And everything is analysed and formally verified. Moreover, in Europe there’s WVA (Whole Vehicle Approval). In the US, there’s self-certification, which is why Tesla just flick out self-certified OTA updates.
Many of the processors used for these purposes are older, slower processors you may not have heard of, like the Bosch EDC17 ECU, which has a 200 MHz clock and a tiny amount of memory by modern standards - but it has multiple lockstep cores checking each other’s results and integrity. These limitations aren’t really a problem, because the AUTOSAR environment is tight and tiny: some routines have been tuned down to needing no RAM at all and only a few bytes of NVRAM. The OS is minuscule by modern standards. And the parts need to still be around in 10-20 years for replacements.
How does all this affect security?
- SoCs may have AES, so that’s pretty much what you’re relying on.
- You have lots of random sources you’re plugged into, so that’s good.
- Unfortunately a lot of the SoC interfaces are not great, and the vendor APIs are pretty poor.
- Wouldn’t it be great if there was a better standard?
- PKCS#11 isn’t great.
- So the AUTOSAR Crypto standard exists.
- Peter is scathing: it’s huge, too complex, not particularly secure, and won’t even fit in a typical AUTOSAR processor.
So how did they end up with something completely unsuitable? Well, they were smart enough to acknowledge they didn’t know crypto well, so they asked experts in crypto. Unfortunately, the experts were academics who weren’t any good at thinking about a practical solution based on the actual risks. Peter notes that the most problematic part of a car is the head unit: everything else requires physical access to the car. The head unit is the only part of a modern car that allows remote access.
The head unit only really needs to display read-only data from the core systems, plus a few functions like reversing cameras or mirror adjustments. So a one-way data diode between the CAN bus and the head unit pretty much cuts off any remote attack other than compromising the head unit itself. Do better threat modelling! Don’t let crypto geeks play without adult supervision!
As is Peter’s tradition, this talk is delivered lightning-fast and with more slides than the average textbook has pages; he promises to publish the slides later.
Endpoint Protection: Don’t Believe Everything You Read
Lydia/@pink_tangent
True story: on her first day on the job at Honda America, they were hit by ransomware - and then a few months later they were hit again. The second time around it took five days to fix: five days when many people were unable to work, and an external company was pulled in to help with recovery. This is not a trivial problem: hospitals have been hit, for example. So why does this keep happening? Honda was taking sensible precautions like deploying endpoint protection.
Lydia has worked as a dev and sysadmin, and then moved into security; she wants to talk today about endpoint protection. Some vendors purport to offer 100% protection - so they clearly can’t be trusted. Another vendor claimed that you only needed protection on your fileserver, not on the PCs. Patient zero is still a problem. Lydia’s boss, though, had listened to the hype, because it’s so pervasive. So Lydia started researching endpoint protection to understand how well it really works.
She started with 9 vendors and quick-culled down to 6. That’s because some refused to participate in the PoC Lydia was building. This was new ground when she started this 2 years ago, so she had to start with fundamental scope and requirements. What she found was that people over-focused on the cool tech, and under-focused on the problem you’re trying to solve: are you trying to detect? Prevent? Even today, Lydia is disappointed by the analysis around this area.
She broke the work down into several months of requirements writing, followed by months of functional testing, and months more of testing after that. Her work is all on GitHub for the community to take advantage of.
Some vendors got pissy about the methodology, but none of them had any evidence to back up their unhappiness.
If you want to take advantage of Lydia’s research, the best approach is to tailor the test environment to your own OS builds, applications, and so on. One important note from her experience: malware can behave differently on different virtualisation platforms - there can be significantly different results between infected guests on AWS and VirtualBox. And you will need to use virtual images, because you’ll be trashing them again and again.
You need to be rigorous about how you compare different endpoint tools: making sure you’re comparing scenarios like-for-like. You also need to make sure that your tests reflect the reality of different user profiles and so on.
Test Case 1: Static/Pre-Execution
Here you’re testing dormant files, ones which aren’t actually executed. This isn’t relevant to everyone, but you’re trying to see if an un-activated file is noticed. You should also try different copy paths - some endpoints could only do static analysis on downloads, but not on files being copied from USB mass storage devices. The key outcome is whether the agents can quarantine malware, and how much of it. Results varied: some agents hit a high-water mark of 75-80%, while McAfee could only see 50%. Rates dropped even further with no Internet access, since some agents clearly need to phone home to be effective.
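The mechanics of this kind of test are simple to sketch: copy dormant samples in via different paths and count what the agent quarantines. The paths, timings, and layout below are hypothetical; Lydia’s real harness (on GitHub) is far more thorough.

```python
# Hypothetical harness sketch - paths, timing, and layout are invented.
import shutil
import time
from pathlib import Path

SAMPLES = Path("samples")                 # corpus of dormant malware files
TARGETS = {
    "download": Path("C:/Users/test/Downloads"),
    "usb_copy": Path("E:/"),              # some agents skipped USB copies!
}

def static_detection_rates(wait_s=60):
    rates = {}
    for label, target in TARGETS.items():
        names = [s.name for s in SAMPLES.iterdir()]
        for sample in SAMPLES.iterdir():
            shutil.copy2(sample, target / sample.name)
        time.sleep(wait_s)                # give the agent time to react
        survivors = sum((target / n).exists() for n in names)
        rates[label] = 1 - survivors / len(names)   # gone == quarantined
    return rates                # e.g. {"download": 0.78, "usb_copy": 0.50}
```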
Test Case 2: Dynamic/Execution
Here Lydia is checking whether running the exe actually triggers the rules; for the first round she notes it’s important to disable the static checks, so that pre-filtering doesn’t skew the results. Lydia noted that one endpoint didn’t notice malware launched from PowerShell at all - although she notes that SentinelOne, the vendor in question, fixed the problem immediately, and have consistently been the best vendor she’s dealt with.
For the second round she enabled everything, including the static analysis, to let the vendors “strut their stuff”.
The bitter reality is that across Lydia’s samples - tens of thousands of them - no vendor was able to catch everything, whatever they claim. Vendors also claim that machine learning means results shift quickly: what works today won’t in a few weeks. There was no evidence of this; in fact, re-running the tests over a period of months led to the same outcomes.
But this testing does have its limitations: running big batches of malware makes it hard to understand which malware actually succeeded.
Test Case 3: Mitre ATT&CK
This test disables the prevention features, to understand how well the tools can detect and report even in the absence of a rule to prevent execution. Lydia loves Mitre, by the way: the framework has been a huge help in structuring her testing efforts. For most of these attacks, the tools failed - her attacks “succeeded”, in the sense that she was able to break in and run malware without any detection.
She notes this stuff is really hard - she used the Red Canary scripts as part of her toolbox, but this testing is very bespoke.
Test Case 4: Business Compatibility
How well does the tool work with your day-to-day business processes? If, for example, you deploy your own packages, does your endpoint protection mis-identify them as malware? Does it support SSO? MFA? What’s the cost of log storage and retention? Does it have good backward compatibility for your legacy stacks? Are there decent APIs? These all have a huge impact on the real-world efficacy of the tooling, and on whether your users will end up cursing the hell out of it (and you).
Summary
- There is no silver bullet, no matter what vendors tell you.
- You need good hygiene.
- Don’t be bullied by your vendors. She even got a C&D letter and legal threats.
- No test is too basic - make it good for your org.
- Assume nothing.
- Trust but verify.
This was a fantastic talk, and my highlight of the day. Lydia’s work is fantastic and re-usable; it’s a happy coincidence that I listened to a talk at purplecon yesterday which lamented the lack of good science in understanding the effectiveness of infosec tooling, and here’s a fantastic example of someone doing it the right way.
The Mechanics of Being Good to Each Other
Courtney Eckhardt/@hashoctothorpe
Courtney notes that the talk is based on American English, which means it’s not localised to a New Zealand context.
Who doesn’t like a good disaster (where no-one died)? Courtney starts out discussing the 1990 sinking of the Lacey V. Murrow floating bridge in Seattle. While contractors were water-blasting the bridge to clean it, they stored the contaminated water in the hollow pontoon structure of the bridge until it could be properly disposed of. There was good science behind this! They carefully worked out how much the pontoons could safely hold, and for how long, before the water needed to be disposed of.
Unfortunately, there was a storm. So it sank. $69 million of damage (“Nice”).
“Things are more likely to break when you fuck with them.”
Courtney showed a video describing the official explanation for the sinking - and immediately explained why it wasn’t accurate. And then she explained that the paper which gave the more accurate version was itself insufficient, because it didn’t ask people key questions about why they fell behind in pumping out the water on schedule (which contributed to the failure).
“No-one does things they think will blow up the world (mostly).”
People do things for good reasons. So why do we still get failures? Why do good intentions turn into bad outcomes?
Facilitating Post-Mortems: A Crash Course
As a facilitator, you need to keep things blame-free and focused. (“But I’m not here to tell you not to swear.” Don’t call people on your team names. Don’t call the things we build nasty names, either.)
Miller’s Law: in order to understand what another person is saying, you must assume it is true and try to imagine what it could be true of. Courtney explains that this is how you try to understand when someone says something that sounds completely bizarre and untrue - exactly the tool you need when you’re running an incident review.
Avoid “you”: it sounds accusatory and puts people on the defensive. Avoid “why”: why strongly implies blame.
Other words not to use: always, never, every time, should, just, only. e.g. “why didn’t you just fix it the last time this happened?”
Better things to say: how, what, what if, could we, what do you think about, what would you have wanted to know? These questions can be difficult because we often crave certain, precise answers. But we need to deal with uncertainty. And we live in a complex world, often with multiple causes and no root cause.
HUMAN ERROR IS NOT A ROOT CAUSE. It’s the start of the investigation. If a person made a mistake - why? What led to that mistake?
“Try harder” is not a remediation. The human you have today is the human you have to plan to have tomorrow.
Enjoy Your Meeting
Stay on time and on topic. Stick to your agenda. Stay on track - before you share, ask yourself whether this needs to be discussed by the group right now. Practise interrupting, but only for a purpose, not “just because”. If you have trouble with that, visualise how much it annoys you when you’re interrupted for no good reason.
Who’s talking? Who isn’t? How do you draw silent people into the conversation when they have something to say? Create spaces for people to speak: ask if anyone wants to contribute. Wait. Wait longer!
Let’s talk about humour. Or maybe we could not. “You may have heard that comedy is tragedy plus time. A retrospective is not enough time.” Anything that makes people uncomfortable or feel bad is going to inhibit the success of the meeting. You could make positive jokes. “Without jokes, how do I lighten the mood?” Try being kind and positive.
If you mess up: apologise and move on. Wallowing is self-blame, bad for you, and uncomfortable for everyone else. If other people mess up, stop them, and move on. Remind people we’re avoiding blame, help rephrase, and move on. Bad jokes? “Please don’t make jokes like that here.” And move on. It’s not a teachable moment.
A Personal Note
In operations - and security - it’s easy to end up feeling like a lone hero, like you’re the only person who cares, like it’s you against the world. It can be hard to reach out and make friends and allies outside that in-group, but it’s incredibly valuable and important. Courtney encourages us to do so!
Hunting for Gargoyles
Aliz Hammond/@AlizTheHax0r
This was a bit of a last-minute fill-in for another speaker who couldn’t make it.
So what is Gargoyle? That requires a bit of a walk down memory lane. Back in the day (2000), malware was pretty predictable: it lived on disk, and was straightforward for AV scanners to pick up. From 2010 on, malware started moving into memory, leaving no disk footprint. Tools had to follow - although to avoid crippling performance, scanners optimise: they might skip memory regions based on their executable and writeable status.
Gargoyle is the latest evolution: malware hiding in non-executable memory. This bypasses many scanners by exploiting exactly those optimisations. It’s quite a feat, because it has to get around the processor’s NX protection. It does this by hiding in NX memory for extended periods, then popping up in executable memory for a short window - say 100 ms every 5 seconds - to run. Since a scanner is unlikely to catch such a small window, it effectively evades detection.
It does this by abusing the Windows system timer. Of course you can’t call NX memory directly, so there’s a ROP chain that simply changes the memory permissions, runs the payload, then changes it back to NX.
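Here’s a Windows-only sketch of that core trick, with plain VirtualProtect calls standing in for Gargoyle’s actual ROP chain, and a single harmless `ret` instruction standing in for the payload: the code sleeps in no-access memory and is only made executable for the brief window in which it runs.

```python
# Windows-only, illustrative: API calls stand in for the ROP chain.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.VirtualAlloc.restype = wintypes.LPVOID
kernel32.VirtualAlloc.argtypes = [wintypes.LPVOID, ctypes.c_size_t,
                                  wintypes.DWORD, wintypes.DWORD]
kernel32.VirtualProtect.argtypes = [wintypes.LPVOID, ctypes.c_size_t,
                                    wintypes.DWORD,
                                    ctypes.POINTER(wintypes.DWORD)]

PAGE_NOACCESS, PAGE_EXECUTE_READWRITE = 0x01, 0x40
MEM_COMMIT_RESERVE = 0x1000 | 0x2000

payload = kernel32.VirtualAlloc(None, 4096, MEM_COMMIT_RESERVE,
                                PAGE_EXECUTE_READWRITE)
ctypes.memmove(payload, b"\xc3", 1)       # `ret` stands in for real code
old = wintypes.DWORD()
kernel32.VirtualProtect(payload, 4096, PAGE_NOACCESS, ctypes.byref(old))

def on_timer():
    # What the ROP chain does on each wake-up: unhide, run briefly, re-hide.
    kernel32.VirtualProtect(payload, 4096, PAGE_EXECUTE_READWRITE,
                            ctypes.byref(old))
    ctypes.CFUNCTYPE(None)(payload)()     # the ~100 ms executable window
    kernel32.VirtualProtect(payload, 4096, PAGE_NOACCESS, ctypes.byref(old))
```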
So how do you detect this? Well, the first sad trombone is that there’s no stable, documented way to enumerate all Windows timers. It turns out, though, that Volatility has done a lot of the work for you. Once you dump the timers, you can start inspecting them for suspicious characteristics: you would, for example, expect a legit timer to point at ordinary code, rather than a typical ROP pattern.
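A small post-processing sketch of that hunt, assuming the column layout of Volatility 2’s `timers` plugin output: timers whose routine doesn’t resolve to any loaded module deserve a human look.

```python
# Assumes Volatility 2's `timers` output columns:
# Offset, DueTime, Period, Signaled, Routine, Module.
import subprocess

def suspicious_timers(image, profile):
    out = subprocess.run(
        ["vol.py", "-f", image, "--profile", profile, "timers"],
        capture_output=True, text=True, check=True).stdout
    hits = []
    for line in out.splitlines():
        cols = line.split()
        if len(cols) >= 6 and cols[0].startswith("0x") and cols[-1] == "UNKNOWN":
            hits.append({"timer": cols[0], "routine": cols[-2]})
    return hits   # candidates for manual triage
```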
Aliz got that far, and then decided that the solution at this point was too manual; so he wanted to add enough automation to make the hunt easier, but not so much that it becomes a misleading “magic box” that claims to be “stopping 358 attacks in the last 5 minutes”. He wanted the automation to be pretty objective: rather than claiming to have “found a hack”, he wants to be able to identify suspicious behaviour automatically, and involve a human once it’s serious enough.
He used the Unicorn Engine, a CPU emulation framework that lets you point at a block of memory and see what it will do. Aliz loves it a lot. So when a timer looks odd, the memory block it points to can be run through Unicorn to see if it does the things you’d expect a Gargoyle attack to do - and if so, pull a human in.
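A sketch of that triage idea using Unicorn’s Python bindings (the base address and the “left the block” heuristic are invented for illustration): emulate the block a suspicious timer points at and see whether it behaves like ordinary code.

```python
# Triage sketch; heuristic and addresses are illustrative only.
from unicorn import Uc, UcError, UC_ARCH_X86, UC_MODE_64, UC_HOOK_CODE

BASE = 0x400000

def triage(blob: bytes, max_insns=200):
    mu = Uc(UC_ARCH_X86, UC_MODE_64)
    mu.mem_map(BASE, 0x10000)             # map just the suspect block
    mu.mem_write(BASE, blob)
    trace = []

    def on_insn(uc, addr, size, _user):
        trace.append(addr)
        if len(trace) >= max_insns:
            uc.emu_stop()

    mu.hook_add(UC_HOOK_CODE, on_insn)
    try:
        mu.emu_start(BASE, BASE + len(blob))
    except UcError:
        pass               # faults are expected: only one block is mapped
    # Legit timer routines tend to execute linearly from their entry point;
    # Gargoyle-style stubs immediately pivot somewhere else.
    return {"instructions": len(trace),
            "left_block": any(not BASE <= a < BASE + len(blob) for a in trace)}
```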
Aliz has submitted this as a Volatility plugin, and written it up.
Black Swans: How to Prepare for the Unexpected
Wade Winright/@vashta_nerdrada
Threat modelling:
- Challenge Assumptions: every security decision can become an attack vector if it’s not quite right.
- Attackers have budgets and bosses: most hackers are no longer “curious pokers”; they’re criminals, nation-state actors, people who want to be famous, what have you. They have the same constraints of time, money, and rules that you do.
- Expect the Unexpected (Black Swans).
Black swans are, as Australians and New Zealanders know, rather common. But nevertheless, let’s run with Taleb’s definition.
- An outlier.
- An extreme effect.
- Only explainable after the fact.
Snowden: A black swan is only a black swan to those without knowledge of its existence.
There have been a lot of talks, Wade says, that boil down to “work out what your Black Swans are and work backwards to mitigate them.” But if a black swan is truly only understood in retrospect, this is impossible. So how do we threat model for black swans? Wade suggests starting from the assumption that as you add complexity, you add chances for things to go wrong. That may not find you a black swan, but it’s a good working assumption.
So invest in your safe bets - spend say 85% of your effort on the basics: MFA, patching, asset and user discovery, useful instrumentation, paying down tech debt (which is also security debt). The other 15% can go into crazy town: red teams, pen testing, supply chain validation, targeted instrumentation, resilience engineering. But be aware that this is a continuous process; don’t become lulled into a false sense of security because these jobs are never “done”.
Meaningful instrumentation doesn’t mean burying people in logs; it means alerting on anomalous behaviour: priv esc, security testing in the pipeline, secret detection in source control, direct feedback to engineers.
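As a flavour of how cheap some of this is, here’s a toy secret-detection sketch for a pre-commit hook - the patterns are examples only, and real deployments use purpose-built tooling.

```python
# Toy pre-commit secret scan: flag staged changes containing known key shapes.
import re
import subprocess

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_staged():
    diff = subprocess.run(["git", "diff", "--cached"],
                          capture_output=True, text=True).stdout
    return [(name, match.group(0)[:12] + "...")  # report a prefix, never the secret
            for name, pattern in PATTERNS.items()
            for match in pattern.finditer(diff)]

if __name__ == "__main__":
    for name, prefix in scan_staged():
        print(f"possible {name}: {prefix}")
```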
Linux Capabilities:
- A fine-grained permission checking model.
- Per-thread permissions can give unprivileged processes access to selected privileged functions.
- All containers at Heroku run unprivileged.
The practical effect of this is that many high-rated CVEs have been nothing-burgers at Heroku, because running code can’t take advantage of them.
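A minimal sketch of the underlying primitive (not Heroku’s actual runtime): drop every capability from the bounding set before exec’ing the workload, so even code running as root in the child can’t reach the privileged functions those CVEs depend on.

```python
# Illustrative only - not Heroku's runtime. Drop the capability bounding set
# via prctl(PR_CAPBSET_DROP), then exec the workload; the child (and anything
# it execs, even setuid-root binaries) can't regain the dropped caps.
# Doing the drop itself requires CAP_SETPCAP, so run as root.
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_CAPBSET_DROP = 24
CAP_LAST_CAP = 40   # high enough for current kernels; extra drops just EINVAL

def exec_without_caps(argv):
    for cap in range(CAP_LAST_CAP + 1):
        libc.prctl(PR_CAPBSET_DROP, cap, 0, 0, 0)
    os.execvp(argv[0], argv)

# Usage sketch: exec_without_caps(["/usr/bin/id"])
```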
APIC Fail
Oliver Matula
So who is this APIC? Well, Cisco’s SDN is called ACI (Application Centric Infrastructure), and the APIC is the controller for it, plugging into a leaf switch. Oliver’s company has been doing research on this tooling; not a lot has been done to date. The APIC itself is a Xeon box running a Cisco bake of CentOS 7, with about 500 processes, most of which (about 480) run as root. The switching infrastructure runs Wind River Linux on Xeon processors, with 300 processes, all but 2 as root.
So what vulnerabilities did they find?
Fail #1:
The attacker needs to be plugged into a Cisco port.
- The Nexus switches have 2 SSH daemons running: one listening on port 22, and one listening on port 1026, but only on an IPv6 address. The latter authenticates with public keys - and the keys are the same on every single Cisco switch, baked in at the factory. Rather than giving you a shell, however, it gives you the Cisco runcmd wrapper, which limits the commands which can be executed - including vsh, a Cisco legacy shell. While that’s limited, it will execute commands from a file, and since you can write to the files which vsh will execute, you can write arbitrary files as the local user. That’s the first CVE.
- From here, there’s a root cron job which looks for a file in /tmp and, if it exists, runs the contents of that file. That’s the second CVE.
- Chaining the vulns together, you can upload a root reverse shell to the switch. That’s the third CVE, with a score of 9.8.
Fail #2:
The normal setup might be an infrastructure VLAN for management, plus VLANs and VXLANs for isolation. It would be neat to get onto the management VLAN, right? Well, by analysing the traffic that occurs when an APIC is plugged into the fabric, Oliver determined that the network will trust you if you spoof that traffic: there’s no access control beyond knowing the LLDP packets to imitate - which the network broadcasts to everyone plugged in. That was the next CVE. Not only do you become a fake APIC, you are attached to the management network with access to the management endpoints.
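A conceptual sketch of the spoofing primitive with scapy. The TLV values below are placeholders - the attack depends on replaying the APIC-specific LLDP TLVs you capture off the wire, which aren’t reproduced here.

```python
# Conceptual only: craft and loop LLDP frames the way an APIC announces
# itself. Real values come from capturing a genuine APIC's broadcasts;
# the MAC, port, and interface below are placeholders.
from scapy.all import Ether, sendp, load_contrib
load_contrib("lldp")
from scapy.contrib.lldp import (LLDPDUChassisID, LLDPDUPortID,
                                LLDPDUTimeToLive, LLDPDUEndOfLLDPDU)

frame = (Ether(dst="01:80:c2:00:00:0e", type=0x88cc) /   # LLDP multicast
         LLDPDUChassisID(subtype=0x04, id=b"\x02\x00\x00\x00\x00\x01") /
         LLDPDUPortID(subtype=0x05, id=b"eth1/1") /
         LLDPDUTimeToLive(ttl=120) /
         LLDPDUEndOfLLDPDU())

sendp(frame, iface="eth0", loop=1, inter=5)   # keep announcing, like a real APIC
```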
Fail #3:
This one was found by Oliver’s colleague Frank Block. He wanted to see if he could find memory-related vulnerabilities, so he dived into LLDP, a layer 2 protocol with defined types and subtypes. It turns out that LLDP parsing runs as root on all Nexus switches - and it has a buffer overflow. NX and PIE are enabled on these systems, though, which makes the overflow a lot harder to exploit: creating a ROP chain would be a challenge. It’s further complicated by the watchdog mechanism, which reboots the whole switch if you crash the LLDP process while trying to brute-force it - a great way of DoSing the infrastructure, but painful for exploitation. With persistence, though, Frank was able to brute-force things, using a 7-bit overflow to trigger a ROP chain.
So what can you do?
- Update.
- Think about how you use your fabric.
- Control access to the management interfaces.
- Disable LLDP if you can.
Sophia Frenz
I missed the start of this because I’m an idiot, and swanned in at the point where Sophia was explaining that claims you can get your DNA sequenced anonymously are, well, a lie: there are papers demonstrating the degree to which it is possible to de-anonymise people via their genomes. Which also means getting your genome sequenced implicitly reveals information about everyone related to you - parents, children, and so on. So we have a dilemma: do we adopt a traditional medical model, where we do closed science with rigorous controls around who can access your genomic data? That has a lot of weaknesses compared to open science: with open science we get broader sample sizes and better results, because we’re not limiting researchers to a closed system with a small, often non-representative sample of data.
So closed science can not only deliver poor results, it also takes power away from the individual: you may not even end up with control over your own genome! But there are terrible pitfalls for open science as well: there’s no real privacy or security! The accuracy of information to end users is questionable. And most of all, what does consent look like? Because if your genome is in the open, and hence available to keep running new research on, have you consented to that new research? Maybe not. But it’s too late. It’s also a tremendous enabler of discrimination.
So does your sequence being in the wild really matter? Well, a scientist would say not: after all, the gene doesn’t really tell us much about the expression of the gene. Insurance? Well, there are some legal structures that are supposed to stop insurance discrimination, so maybe not. But socially? Well, what about uncovering familial relationships that are unexpected? What sort of social outcomes can appear? And what about broader medical privacy?
Here Sophia pauses for the slide that reminds her to breathe.
So let’s talk about electronic medical records versus physical medical records. On balance, Sophia likes electronic medical records: while there are problems, physical records also have security breaches. When she got her records mailed to Australia, they arrived along with her brother’s - and a complete stranger’s.
But for day-to-day medical use, electronic records can be very helpful. There are definitely problems - for example, the Australian health record system is grossly insecure - but the problem is not the technology specifically: it’s an outcome of the terrible handling of patient confidentiality in the healthcare system: physical records left lying around, people able to overhear patient conversations in the A&E, and so on. So there needs to be a culture change around infosec - not “infosec with computers” but “infosec for health information generally”.
So what can we do:
- Make responsible individual decisions. Opt out of bad systems wherever possible. Create pressure to change systems for the better.
- Educate non-sector friends, colleagues, and clients on needs, requirements, and values. People can be interested if we tell good stories about the problems. Moreover, doctors and other healthcare workers may not understand something as simple as how easy it is to hijack or fake a phone number.
- Be the squeaky wheel! Change comes from people making the right noise to the right people outside conferences.
“Responsible decisions” are about thinking about the benefits and balancing them against the risks, both to yourself and to others who can be affected. Maybe you’re OK with 23andme sharing your genome with Facebook, but is it OK to share your genomic overlap with your kids and parents with Facebook? What benefit are you getting?
Physical Access Control on Sesame Street
Matthew Daley
Matthew’s slides went west recently, so he’s had to re-build them at the last moment. Moreover a lot of the detail isn’t really presentation-friendly (pages of hex codes), so there will be more information later.
A modern electronic lock system is a mix of controllers, readers, and locks. A user’s card is scanned; the scan is sent to an IO breakout board, which sends the data to the controller, which in turn tells the breakout board whether to unlock the door.
In Gallagher land, the Controller 6000 is the state of the art for access control. (Cardax was a brand before Gallagher bought them out; the lineage is direct, so Matthew uses the names interchangeably.) It’s based on an actual 80386 processor, and runs an embedded OS that was almost impossible to find the books for.
There are a number of generations, of varying age, pedigree, and feature sets. Gallagher use muppet names for their products: elmo is the controller PC, for example. Weirdly enough, some of the newer readers are running Qt and WebSockets!
From a card hacking perspective, there are LF (low frequency) and HF (high frequency) cards; you can’t tell which is which by looking at a card. Some use the Wiegand effect - a form of induction - to provide power to the card and for the card to return data to the reader; there are a variety of encoding schemes, depending on which specific card is involved. The card has a 24-bit card number, a 16-bit facility code, a 4-bit region code, and a 4-bit issue level; together these are unique across the cards. There are also padding and CRC bits. Stripping those out allowed Matthew to work out the meaning of the remaining data - which turns out to have a simple substitution cipher applied to it. Matthew’s demo video shows that with an LF card and an understanding of the encoding, you can simply enumerate card numbers until you find one that unlocks the door you want.
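The attack loop itself is tiny. A sketch of its shape follows - the encode() stand-in packs the documented fields, but the real padding, CRC, and substitution cipher from Matthew’s research are deliberately not reproduced.

```python
# Shape of the enumeration attack only: the real Cardax bit layout, padding,
# CRC, and substitution cipher are not reproduced here.
def encode(card, facility, region, issue):
    """Hypothetical stand-in: pack the documented fields (24-bit card number,
    16-bit facility code, 4-bit region, 4-bit issue level) into 6 bytes."""
    word = (card << 24) | (facility << 8) | (region << 4) | issue
    return word.to_bytes(6, "big")

def enumerate_cards(emit, facility, region, issue=0):
    # Site fields stay fixed, so only the 24-bit card-number space needs
    # walking - small enough to exhaust with a card emulator at the reader.
    for card in range(2 ** 24):
        emit(encode(card, facility, region, issue))
```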
HF cards are more complicated. They still use the Wiegand effect for power and data, but data is bi-directional, and there are controls over which blocks can be read and written. Moreover, there’s shared-secret security via mutual auth, using nonces to prevent replay attacks. Unfortunately, MIFARE Classic keys are broken: if you know one key, you can brute-force the rest - and cards almost always have at least one well-known key. This makes it trivial to break locked sectors and ultimately perform similar enumeration attacks. There are a number of later generations that attempt to fix this, but sadly they’re all broken in different ways.
The latest MIFARE DESFire cards are file-based, rather than block-based; this is intended to allow multiple uses for a single card. In Gallagher’s scheme this enables “key diversification”, where each site can have a unique key, combined with the card keys to make the encryption harder to break. Unfortunately there’s also an option to use the MIFARE default key - and if you know that key (which Matthew is not going to share), you can once again brute-force the cards.
But wait, there’s more!
The protocol between the card reader and the controller has 3 variants. The oldest, the Cardax protocol, is a simple 5V encoding which can be MITMed by tapping into the two-wire protocol with a fake reader based on an Arduino. You can emulate a reader, interact with a controller, and enumerate the entire card space in four hours. GBUS is a somewhat newer protocol which gives more sophisticated control options, such as switching the locks on and off, or activating and de-activating window sensors. Matthew has a decoder for this as well.
HBUS uses the RS-485 serial protocol, which is a great deal faster (up to 1 Mb/s) and more sophisticated (it can validate connected devices, unlike GBUS and Cardax), and supports public keys over the wire, with each device signed by Gallagher. There are no known vulnerabilities in HBUS - so one finding is simply that upgrading to the newest protocol, and not supporting the legacy versions, will make your building a great deal more secure.
Unfortunately, given that DESFire with a non-default key is the only safe and secure option, the field results are grim: out of a sample of 95 readers, a third are still using LF and are therefore insecure. 85% are using HF, and only one uses a non-default key. One! And it’s a government department. Turns out if you scan their readers you get your laptop confiscated.
Source will be available on Github at some point. Matthew notes that Gallagher have been very supportive of the research.
I appreciated the talk, but when you know you’re quarter of an hour over time, stop talking. It’s incredibly rude to the following presenter, and the audience waiting to hear from them.
Securing People Who Don’t Look Like You… Yet
Laura Bell/@lady_nerd
Laura notes that she feels a bit different; she hasn’t popped shells for a long time, and neither has she been “hacking people” for a while. This is a story about how, over a hard period of about 6 months, Laura discovered that what she thought she knew was garbage - and a talk about what she learned in that time, shared with us.
“About three years ago I turned away from doing security; I was doing other things. This time last year I was literally having a baby!” As she spent more time with people who were not in our space, she came to realise that her security voice was judgey and unhelpful: although it was easy to see the shortcomings in people’s security, her understanding of how to share that information, or how to solve security problems, was sorely lacking. This came via her relationship with Rose (name changed), a fiercely independent 78 year old friend - a very different person who has lived a very different life.
It’s easy to assume that our life and experiences transfer to other people, but what works for us doesn’t work for them. Why is that, Laura wondered. When she looked into research about how we think, one thing she discovered is that we see the future version of ourselves as a different person. Laura’s brain doesn’t even know how to protect future Laura from present Laura - so how can present Laura protect another, older person? It’s hard!
And what makes it even harder is that aging doesn’t affect us all the same way: circumstances, genetics, illness, attitudes, diet, culture, community, and family all change it. The complexity of possible personae explodes over time. So if it’s hard to manage the security controls for a small company, what does it look like for a huge range of people?
Mistakes were made: the first question Laura asked was “should we be digitally active if we can’t be independent?” It was, in retrospect, a terrible question! But it’s worth thinking about. Because digital is everywhere! Smartphones, smart TVs, smart everything is ubiquitous.
Laura reached out to Age Concern - and it turns out their recipe for a healthy life and healthy aging ticks off a list of items - friendship, curiosity, and a horde of others - that many of us satisfy online! So how could we not support people being online safely, when that is increasingly our whole world? Moreover, it turns out there’s a framework for maintaining independence, and then supporting and assisting people as their independence declines. This involves measuring day-to-day independence via instruments such as the Katz index and the Lawton index. But those don’t cover digital living, so Laura started building an Instrumental Activities of Digital Living index:
- Shopping.
- Financial management.
- Finding information.
- Choosing and adopting new technology.
- Recovering from unexpected events.
- Communicating.
- Ability to operate devices.
- Entertainment.
From there we can score how independent we are, from fully independent to fully dependent.
It turns out there are some real problems on the legal side of this: for example, under current law, operating Facebook on someone’s behalf can be considered fraud, even if it’s help they want! More broadly, we need to think about how we support people in detail. Consider password management: a fully independent person might have password safes and MFA; a less independent person might be better off with a notebook; but a fully dependent person might need someone else to work their bank account - so how do we allow that kind of delegated authority? How do we audit and track it? We need accountability - we need to be able to show that people are doing things to help, not to harm.
We have to protect one another. We have to get better at collaborating to protect people, for our common good.
I loved this talk, and will be pestering people in my workplace to think about what we can do on this front.