Tracking the Watchers: Practical Tooling
We are going to build a map of radio systems in your city - this is a work in progress, so if you want to use this you’ll need to do some work yourself as well. The systems Paul is working on are P25 radios, which are trunking systems with centrally controlled channels and frequencies.
Intercept nodes, scattered around the city, are:
- bladeRF 2.0 micro; good dynamic range and quite sensitive.
- Precision frequency reference; phase accurate with one another, so they ensure everything is properly synchronised.
- u-blox M8F GPS. Again, this is for precision across many units in one city.
- Antenna; standard wideband antenna that doesn’t require a groundplane; filters: CBP-804F and ZX75BP-770-S+.
- Lenovo M910q: small, low power, decent USB3 and a quad i7 with enough grunt to process the samples.
- Only use one GPS system at a time.
- Experiment until you get the lowest noise GPS source.
- You may need to experiment with different GPS disciplined Oscillators.
- This one doesn’t require constant tweaking.
- 20 ns jitter.
- PPS to Radio trigger ~1 sample (GPSDO helps).
- The GPS PPS triggers on the falling edge, which was undocumented - and capture stops silently if you pull the trigger high by mistake.
- “Linux is terrible sometimes when we’re trying to do something important.”
- The USB driver needed some aggressive settings to avoid packet loss.
- Different USB3 chips will have different rates of loss.
- Not all SSDs are created equal; you may find yours has inadequate performance.
- Radio Config
- Internal gain at 100%
- No external amp.
- Signal processing
- Capture at 61.44 MHz.
- Filter down to 799 - 805 MHz.
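A minimal numpy sketch of that capture-then-filter step, assuming a tuner centre frequency of 800 MHz (not stated in the talk) and using a crude FFT brick-wall filter as a stand-in for whatever the real pipeline uses:

```python
import numpy as np

FS = 61.44e6           # capture sample rate (from the talk)
F_CENTER = 800.0e6     # assumed tuner centre frequency (not stated)
BAND = (799e6, 805e6)  # band of interest (from the talk)

def channelise(iq: np.ndarray, fs: float = FS, f_center: float = F_CENTER,
               band: tuple = BAND) -> np.ndarray:
    """Shift the band of interest to baseband and low-pass it with an FFT mask."""
    band_mid = (band[0] + band[1]) / 2
    half_bw = (band[1] - band[0]) / 2
    n = iq.size
    # 1. mix the band centre down to 0 Hz
    t = np.arange(n) / fs
    shifted = iq * np.exp(-2j * np.pi * (band_mid - f_center) * t)
    # 2. brick-wall low-pass in the frequency domain
    spec = np.fft.fft(shifted)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    spec[np.abs(freqs) > half_bw] = 0
    return np.fft.ifft(spec)

# demo: run a block of complex white noise through the channeliser
iq = np.random.randn(4096) + 1j * np.random.randn(4096)
narrow = channelise(iq)
```

A real pipeline would use a proper FIR filter and decimate afterwards; the brick-wall mask is just the shortest way to show the idea.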
Look, I’m going to have to stop pretending to understand this well enough to transcribe it at this point. We’ve gone into FFT processing of radio signals to identify whether we’ve got the right radio protocol - which, when I say it like that, sounds pretty straightforward, but the slides are full of chains of FFTs and talk of demodulation. I should note that this is a function of my ignorance, not the presenter’s ability!
A good presentation, but I’d need to know a lot more about RF to make sense of it.
Arbitrary Code Execution, I Choose You!
Sarah Young @sarahyo
A (very) brief history of interesting home console security fails.
- Nintendo Switch Tegra vulnerability
- Affects Tegra X1 chips prior to the T186/X2
- You can force the Switch into recovery mode by shorting some pins that the Joycon uses on the right-hand side.
- The chip enters recovery mode, which allows small bits of signed code to be loaded into RAM.
- The exploit is a command and payload which is loaded into RAM… and then the signature check runs.
- This race condition yields a window to own the Switch.
- You can brick the Switch. Try at your own risk.
- Nintendo, in order to stop people subverting the mobo, destroyed all the connectors that allow you to flash the ROMs to fix the problem!
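The race described above - the payload sits in RAM before the signature check runs - is a classic check-then-use window. A toy Python sketch of that generic pattern (not the actual Tegra boot ROM flaw, and the payloads and timings are invented):

```python
import hashlib
import threading
import time

TRUSTED = b"signed payload"
TRUSTED_HASH = hashlib.sha256(TRUSTED).hexdigest()

ram = bytearray(TRUSTED)   # stands in for the shared RAM buffer
executed = []              # records what the "boot ROM" ends up running

def boot_rom():
    # 1. check the signature of what is currently in RAM
    ok = hashlib.sha256(bytes(ram)).hexdigest() == TRUSTED_HASH
    time.sleep(0.05)       # the exploitable window between check and use
    # 2. "execute" whatever is in RAM *now* - which may have changed
    if ok:
        executed.append(bytes(ram))

def attacker():
    time.sleep(0.02)       # strike inside the window
    ram[:] = b"evil payload!!"

t1 = threading.Thread(target=boot_rom)
t2 = threading.Thread(target=attacker)
t1.start(); t2.start()
t1.join(); t2.join()
```

The check passes on the signed bytes, but the swapped-in payload is what runs - the same shape of bug, minus all the USB details.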
- Back to the 80s!
- The NES anti-piracy chip/lockout chip.
- The console would go into a reboot loop if the cartridge check failed.
- Atari and NES had a falling out.
- Atari decided to break Nintendo’s cartridge monopoly.
- Because the anti-piracy system was patented, Atari were able to obtain the blueprints for the anti-piracy chip.
- Sadly, Nintendo were able to block use of the carts in court.
- Pokémon #000 (aka Missingno)
- Wild Missingno appeared!
- A glitch Pokémon used for testing and as filler in the original games’ data structures.
- With the right sequence of in-game actions, the glitch Pokémon could be coerced into appearing.
- This was a function of the programmers forgetting to include the correct Pokemon.
Are there any takeaways from this?
- Tell people stories they can relate to so they can understand what we’re worried about.
- Hardware vulns are really hard to fix/patch.
- Your opponents may go to great lengths to achieve their goals.
Mūrere me te haumarutanga
Chris Cormack @ranginui and Ian Cormack @kiwitoa
A Māori introduction; this was planned as a joint talk with his father, but he had a clashing engagement. In New Zealand we have Māori as an official language, and we have schools which teach in Māori; Chris learned Māori as his first language, but is slowly losing it because he doesn’t use it enough on a day to day basis; that’s exacerbated by his job because in our industry we don’t have enough Māori words to use, so even when people want to they end up falling back on English terms.
Chris’s dad has been trying to fix this; he develops new terms and phrases from existing Māori terms:
- Mūrere - be clever, ingenious. To hack, hacking.
- Haumarutanga - haumaru - safe, risk free. Security. “Adding tanga converts a verb to a noun”.
- Wheinga - adversary, opponent.
- Whakahōtuhi whakawhiti pae - to script + across + site. XSS.
- Whakaeke waengarahi - attack + intervening space = man-in-the-middle attack.
- Whakaeke pureirei engaenga - attack (a pā) + = buffer overflow
- Whakakore ratonga - denial of service attack.
- Pūkahatanga pāpori - social engineering.
- Hītinihanga - Phishing attack.
- Whakamātautau ngoto - pen tester.
- Rēhita tūraru - risk register “Māori have some experience that you write something down nothing changes and everyone ignores it.”
- Hinu nakahi - oil + snake - virus scanner.
- Whakahaere tapi - patch management.
- Whakaeke rā ōrite - zero day attack.
This was a really neat talk, and hinu nakahi got a huge laugh from the audience. You can see [Chris' slides](https://t.co/Xd3jFuM0va) now.
Red Cell - Mimicking Threat Actors for Realistic Responses
Background: Google faces a broad range of attacks - particularly APTs such as nation states. Red Cells are a program based on an idea drawn from the US Navy SEALs; a red cell is a planned set of attacks, much like a red team, but unlike a red team the red cell tries to mimic the behaviour of a specific adversary, using only their techniques and tactics. Furthermore, unlike a red team, where exercises are scheduled and the blue team will have at least some idea things will be happening, the red cell is completely unknown to the blue team.
- Building the Red Cell:
- Threat actors have TTPs: tactics, techniques, and procedures.
- Different actors have different TTPs.
- The big difference between a Red Cell and a Red Team is that a red team will pick its own goals and attacks; Red Cells mimic known opponents.
- The red cell will even try to leave the same artifacts as the attacker being mimicked, such as binaries left on disk.
- The Google Threat Analysis Group helps build the TTPs for the red cell.
- Reaction and experience.
- Because Red Cells are not known to be operating.
- This means the responses of the blue teams are unknown - they are responding as they genuinely would.
- Referees oversee the exercise, providing guidance to the red cell and ensuring the teams stick to the rules.
- Detection and Response teams confirm it feels very real.
- Pressure; accurately portrays how people react under pressure and in the face of hostile action.
- “Purple Teaming” - having blue and red folks can bring a fresh and different perspective.
- Hiding that you’re on the Red Cell can be difficult.
- Remaining impartial is difficult.
- Matching the office hours of the hypothetical attacker is difficult, since it affects the red cell members.
- Most Red Cell members are still doing their day jobs. So it’s a lot to ask them to work evenings or early mornings and weekends to track the time zone of the TTP.
- Very satisfying when attacks are attributed to the attacker who you are mimicking.
- It can be hard to mimic perfectly:
- Taking holidays at normal times.
- Sometimes teams slip up by using tools written in Golang.
- Overlapping Red Cells can create some interesting outcomes.
- If they have the same TTP it is a hint that something is going on, rather than a real attack.
- A postmortem is critical. It must be blameless, and there should be one joint postmortem covering both the red and blue groups.
- You need to be clear on the attacker you want to emulate.
- You want to research the TTP of that attacker and follow it closely.
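The "only their techniques and tactics" constraint above is essentially set membership. A toy sketch - the actor name and technique IDs here are invented, loosely in MITRE ATT&CK style, purely to illustrate checking a plan against a researched TTP profile:

```python
# Hypothetical TTP profile for the actor a red cell wants to mimic.
ACTOR_TTPS = {
    "APT-EXAMPLE": {
        "T1566.001",  # spearphishing attachment
        "T1059.001",  # PowerShell
        "T1027",      # obfuscated files
    },
}

def validate_plan(actor: str, planned_techniques: set) -> set:
    """Return any planned techniques the mimicked actor does not use."""
    return planned_techniques - ACTOR_TTPS[actor]

# A plan that sneaks in an off-profile technique - like the Golang
# slip-ups the talk mentioned - shows up in the difference.
off_profile = validate_plan(
    "APT-EXAMPLE", {"T1566.001", "T1059.001", "T1105"}
)
```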
Set Theory for Hackers
“I have 45 slides in 15 minutes. Please don’t try to take notes. Also the title of this talk was clickbait.”
Also this uses mathematical notation. I will indeed accept that I should wait for the slides later.
Hacking and the law: The year is actually still 1998
He works as a barrister, and represented Nicky Hager against the police.
“Unfortunately the law is still stuck in 1998.”
In the 90s there were no computer security laws; in 1999 a Royal Commission produced a report explaining this and recommending that it be changed. The Commission recommended that these crimes be treated as a unique category, quite apart from the Crimes Act, and not conflated with fraud or theft.
So in 2003 it was… tacked onto the property crimes in the Crimes Act. The direct opposite of the recommendation of the Commission. And many of the terms are incredibly over-broad; in law, for example, “access” is so broadly defined that a ping could count as access, and hence fall afoul of the law. And that has been how the courts have tended to approach the law.
Section 249 is used in almost every case. Similarly Section 250 covers any data modification, and Section 251 would cover more or less any software.
Essentially it is so overbroad that almost anything can be construed as a computer crime. About half of the computer crime charges laid in New Zealand are “lazy charging and lazy interpretation”, where the charges are being tacked onto other crimes: for example, people conducting regular fraud while using a computer in their day job. This is not a computer crime!
Ordinarily judges are very good at whittling over-broad laws down to sensible applications; unfortunately in the case of computer crime this is not happening, since judges don’t seem to have the knowledge to whittle it down. Although mysteriously Cameron Slater, Jason Ede, and Aaron Bhatnagar were not charged when they accessed a Labour party system without authorisation and disseminated its contents. The police claimed that there was no hacking, so no crime - except similar offences are routinely prosecuted.
Now consider the case of a security guard who was convicted of a crime for publishing digital footage from a security system. Here the footage was treated as property.
And now we come to Kim Dotcom. If you mass-produce CDs and sell them in a market, you’re committing a crime. But copyright infringement is, other than that, a civil matter. The Supreme Court, on the other hand, created a new precedent: if you do something that would normally be a civil case with a computer, it’s now a crime. This flies against the express decision of Parliament not to do so a number of years earlier. And this could expand to anything. Cheating in an online game? According to this precedent, you’d be in breach of Section 249, which would render you liable for up to 7 years in prison because you breached the terms and conditions of the game.
Digital Identity: decentralised and self-sovereign
A solid mihi, which is a great framing for a talk about identity. “Identity is a context-specific set of attributes about you.” Martin notes identity changes over time - who we are and our attributes.
Current models do not support this. We can do better: SSI - self-sovereign identity. At its heart, a decentralised identity is a public key on a decentralised server. This provides continuity across key changes, options for key recovery, and discovery.
We need a mechanism to establish trust with Verifiable Credentials: your legal name, your address, attributes such as a degree.
A correspondent interrogates you, requesting the VCs it needs. You can then accumulate items from third parties (for example, a bank KYC check) that help attest your identity. This is managed via the KeyP protocol, which is designed to let you manage trust and convey only the elements you wish to the third party.
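A minimal sketch of that issue-and-verify flow. Real SSI stacks use public-key signatures and selective-disclosure schemes, and KeyP itself isn't modelled here; the HMAC and the issuer key are stand-ins so the example stays self-contained:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"bank-kyc-issuer-secret"   # hypothetical issuer secret

def issue(attributes: dict) -> dict:
    """Issuer signs a set of attributes, producing a credential."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "signature": sig}

def verify(credential: dict) -> bool:
    """Verifier checks the attributes against the issuer's signature."""
    payload = json.dumps(credential["attributes"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

vc = issue({"name": "A. Holder", "kyc_checked": True})
```

The point of the shape: the verifier trusts the issuer's key, not the holder's claims, which is why the question of where the trust root lives matters so much.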
But who do you trust, and why? Where is the trust root? (It’s not blockchain.) So what can we use? What about a web of trust? Well, it doesn’t have a great track record. How do you bootstrap trust? It needs to be transparent, easy, intuitive, and democratisable.
So there should be a marketplace of identity providers, where you can establish trust in the context of a particular platform - and there can be multiple platforms which you choose. That root is the missing part of the puzzle. Back to the web of trust: we can make every wallet a provider.
A good talk, and at this point I needed a wee rest; a blessing upon the Kiwicon organisers, who arranged a quiet room where one can come and enjoy a lack of audio input for a little.
Dennis is a consultant who likes to fix broken garbage in his spare time, so that’s what this talk is about.
Disclaimer: Please don’t die.
Act I - The Security of the Ducatis on Stage
The 916, the 999 (the first bike with a CANbus onboard), and the 1098. Security in this case focuses on two questions: can you steal it, and can you cause harm to the rider?
The 916 has a lock and key; the 999 has a lock and key, an immobilizer, and mutual auth between the dash and the ECU; on the 1098 the immobiliser is in the dash.
Demo time! On stage Dennis is disassembling the 1098 dash so he can disable the immobiliser. He takes the rider’s seat off (it’s got no defences), taps into a connector with a board, and fires up the Ducati, having overridden the immobiliser. The hack is simply sending the “immobiliser off” message over the CANbus over and over. It’s only a one-bit change - worryingly, flipping the bit back can also kill a running engine.
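The whole attack is one repeated frame. A sketch of packing such a frame in Linux SocketCAN's classic `can_frame` layout - the arbitration ID and the "disarmed" payload bit are placeholders, since the talk didn't give the real values:

```python
import struct

CAN_ID = 0x120                   # hypothetical immobiliser status ID
IMMOBILISER_OFF = bytes([0x01])  # hypothetical: bit 0 set = "disarmed"

def socketcan_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic SocketCAN can_frame: u32 id, u8 dlc, 3 pad bytes,
    8 data bytes (little-endian, 16 bytes total)."""
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

frame = socketcan_frame(CAN_ID, IMMOBILISER_OFF)
# on a real bus you would open an AF_CAN raw socket, bind it to the
# interface, and send `frame` in a tight loop
```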
Act II - Firmware Extraction
This has a ST10F269 controller, and Dennis wants to extract the firmware without breaking open the ECU case (which is apparently full of goo). This has a diag port which uses the k-line physical link and the KWP2000 protocol. It will take 20 minutes to fully read out the firmware. In order to pull the firmware off this interface we need to pass an ECU security challenge to enter a privileged mode.
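That challenge is presumably KWP2000's securityAccess service (0x27): request a seed, answer with the matching key. A sketch with an invented seed-to-key transform - the real one is secret and ECU-specific, and working it out is part of the challenge:

```python
# securityAccess request-seed message: service 0x27, sub-function 0x01
REQUEST_SEED = bytes([0x27, 0x01])

def key_from_seed(seed: bytes) -> bytes:
    """Placeholder seed-to-key transform (the real ECU algorithm differs)."""
    value = int.from_bytes(seed, "big") ^ 0xC0FFEE
    return value.to_bytes(3, "big")

def send_key_frame(seed: bytes) -> bytes:
    """Build the sendKey message: service 0x27, sub-function 0x02, key."""
    return bytes([0x27, 0x02]) + key_from_seed(seed)

# demo: answer a (made-up) 3-byte seed from the ECU
frame = send_key_frame(b"\x12\x34\x56")
```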
Writing new firmware to the ECU is much the same thing in reverse, with a very lightweight encryption required to write the new firmware image. The image is a memory map of the CPU; at the bottom (top of the address space) are the bits Dennis is interested in: the fuel maps and other information that we can tweak to get MORE POWER.
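Once you have the dump, pulling a fuel map out is just fixed-size integer reads at the right offset. The offset and table shape below are invented - finding the real ones is the actual reverse-engineering work - but the mechanics look like this:

```python
import struct

MAP_OFFSET = 0x3F000   # hypothetical location near the top of flash
ROWS, COLS = 16, 16    # hypothetical axes, e.g. RPM x throttle position

def read_fuel_map(dump: bytes):
    """Return the table as ROWS lists of COLS unsigned 16-bit values."""
    cells = struct.unpack_from(f"<{ROWS * COLS}H", dump, MAP_OFFSET)
    return [list(cells[r * COLS:(r + 1) * COLS]) for r in range(ROWS)]

# demo on a synthetic 256 KB dump where every map cell holds 0x1234
dump = bytearray(0x40000)
struct.pack_into(f"<{ROWS * COLS}H", dump, MAP_OFFSET,
                 *([0x1234] * (ROWS * COLS)))
fuel_map = read_fuel_map(bytes(dump))
```

Tweaking for MORE POWER is the reverse: `struct.pack_into` new values, fix up whatever checksum the ECU expects, and reflash.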
Act III: Tuning Time
“If you want to make sure your children never have money for drugs, get them into motorsport.” Dennis wants to trade time to avoid spending money, unlocking the last 10% of the performance - the safety margins. The new firmware disables the O2 sensor and the exhaust butterfly valve, and gains 10% more power.
Act IV: Infosec Bullshit
The features are becoming more abundant and more complex, with active suspension, active braking, and all manner of other doo-dads, but even Ducati mechanics are getting less and less information. Diagnostic tools are horrifyingly expensive, and the design is terrible: all the devices live on a common bus, which is now being wired up via Bluetooth so you can app all the things. It’s great that you can see SMSes on the dash of your Ducati1, it’s not so great that other people can own the Bluetooth and kill the engine remotely.
Contrast this with the Audi equivalent: dedicated CAN gateways that prevent inappropriate information flow: the entertainment system can’t send orders to the gearbox, but the engine management system can send (e.g.) the RPM to the dash.
“You shouldn’t have to have a fucking background in reverse engineering and using a logic analyser to turn off a fucking engine light.”
Server Room Selfies: When physical security goes wrong
Security is hard, and physical security is even harder:
- People are friendly and helpful, which is great until you’re letting people into the building who shouldn’t be there.
- HID (NFC door lock) security is terrible and easy to breach.
- Buttons to exit secure doors from the inside? Sounds great until you place them where people on the outside can reach around the door frame and press them.
- You should probably have an alarm system if you care about your stuff; but if you aren’t going to activate it, just save money by not having it at all.
- If you’re sneaking into a building late at night, you should totally switch the lights on, because then you can fake being an employee. The only people in the server room after midnight with the lights out are crooks.
Closing Ceremony & Prizegiving
Attacus wins the best speaker, very deservedly; an 8 year old who presented at Kuracon took out the lockpicking competition (what am I even doing with my life?). The MUD-themed CTF had a noteworthy moment as one team managed to claim a buggy flag that didn’t work, leaving the organisers more than a little puzzled. The Crüe heaped love on the volunteers, the backstage team, the PurpleCon organisers, the speakers, and us the humble audience. A great time was had by all.
The after party was at Leroy’s Bar, where the Christmas-themed staff supplied fantastic popcorn chicken and other treats, and one waitress kindly offered to card anyone who was feeling old.
Kiwicon 11 was another great year; I walked out full of things to think about. My only regret was not hauling the kids along for Kuracon, which I suspect they would have enjoyed the hell out of.
- It is not actually great. ↩