Kawaiicon Day 2

While Ada and Rosa wait to get into Kuracon, the spin-off day-long con for small people, I sit in the main hall looking down on a fenced-off Tesla coil which promises to play music at some point. It’s been a morning of two breakfasts, one at home and one with friends at the wonderful Leeds Street Bakery, so here’s hoping this doesn’t end up with me nodding off in my seat; something which is unfortunately more likely given that insomnia kicked me out of bed in the wee small hours.

sput appears wearing the Wellington hipster uniform of a dressing gown worn in public; meanwhile it emerges there’s been some confusion as to when Kuracon starts, so the kids have been ushered in to take a squizz at the musical Tesla coil, which seems like a sound decision (no pun intended, but I am leaving it in anyway).

8 bit control of high voltage for musical purposes

Josh Bailey

“I’m going to attempt to entertain you with this sensibly-sized Tesla coil and a C64 SID synthesiser”. After a safety notice - don’t lick the Tesla coil - we get a performance of the Tesla coil (which is great), and then Josh explains how it all came to be: he did some work in the Nevada Lightning Lab (which is based in San Francisco), where they use Tesla coils to investigate the different characteristics of lightning strikes. This has scaled up as high as 50 foot sparks drawing hundreds of thousands of volts. Josh explains the basis of Tesla coils: they’re a kind of transformer. Capacitors are charged from three-phase power, and then discharged into the coil to deliver a substantial voltage (tens of thousands of volts). At this point, a discharge will generate a “satisfying bass note” and can cripple EMP-sensitive electronics in the vicinity.

Josh discusses the country’s largest Tesla coil: an art installation on Alan Gibbs’ sculpture farm, which has its own dedicated 135 kW feed from Transpower. Apparently if you’re brave enough, you can stick a broom handle into the running field and redirect it to throw lightning bolts to the ground, entertaining your Zeus fantasies. The installation can tune the note it produces somewhat by manipulating the spark gap motor, but it’s rather imprecise.

For the first coil Josh built in New Zealand he used an IGBT (insulated-gate bipolar transistor), which is capable of switching tens of thousands of amps in as little as a millisecond, and can do it repeatedly. This, by the way, is an export-restricted part that requires a lot of paperwork to get into the country. This allowed more sophisticated modulation of the note being produced by the coil; the first version of the controller took MIDI or PCM as an input to modulate the spark gap, and hence the note produced by the coil.
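
Whatever the real controller looks like, the core conversion is simple enough to sketch: a MIDI note number maps to a frequency, and the interrupter then fires the IGBT at that rate, so the spark train is heard as the corresponding pitch. A minimal illustration (not Josh’s actual code):

```python
def midi_note_to_hz(note: int) -> float:
    """Equal temperament: A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def pulse_period_us(note: int) -> int:
    """Microseconds between IGBT firings for a given note."""
    return round(1_000_000 / midi_note_to_hz(note))

# Middle C (MIDI 60): ~261.6 Hz, so fire the coil roughly every 3822 us.
print(midi_note_to_hz(60), pulse_period_us(60))
```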

After proving the theory with the university School of Design, the next step was to make something that was more musician-friendly; after that, they moved on to a band consisting of a robotic drummer, a robot bass player, and three Tesla coils operating at different frequencies. This mostly worked well, apart from the bit where the Tesla interference affected the drummer’s actuators, causing it to catch fire and need to be “thrown out the window to avoid triggering the fire suppression.”

Josh, however, grew up with C64s and SID trackers. How to meld his childhood interests with his Tesla coils? He met up with GOTO80 and Anna Edgington; GOTO80 is a chiptunes artist, while Anna provided vocals. Over time, he’s worked with a variety of musicians to work out how to make this practical for performers: as far as Josh knows it’s the only coil that can perform effects such as pitch bending.

Finding a poisonous seed

Negar & Noushin Shabab/@NegarShbb & @NoushinShbb

The talk is focused on attacks aimed at developers in order to compromise software via development and build environments. From a hacker’s perspective, if you can compromise software at its source you can compromise thousands or millions of people with no further effort on your part, rather than trying to compromise everyone who runs the software one at a time: a supply chain attack.

Negar begins by walking us through a history of thinking about who we trust and why we trust them - starting with trusting applications: do we trust applications to be safe from malicious behaviour? Of course not. There are different approaches to this problem: perhaps you only obtain applications from trusted sources; perhaps you go a step further and inspect the source code. That doesn’t ensure the binaries you download are the same, so perhaps you build it yourself… which leads to the Trusting Trust paper that Ken Thompson delivered when he and Dennis Ritchie won the ACM Turing Award in 1983. Noushin outlined this paper for the audience, and then took us back even further: in 1974 the US Air Force performed a threat analysis of MULTICS, which it was using, and published the findings, noting that a “trapdoor compiler” could tamper with the operating system build invisibly.

INDUC

INDUC was a real-world example of this, found in 2009; it targeted the Delphi IDE to compromise every program compiled with it, and is believed to be the first real-world example of this risk. Initially it infected Delphi IDE versions 4 through 7. A subsequent version, found in 2011, targeted the SysInit.pas file used as part of builds, rather than the IDE proper. It would pull down other payloads from a remote site and infect the as-built binaries, whereas the first two versions didn’t have the ability to pull down malicious payloads (I assume then that the first two were attempts to prove the value of the technique by understanding how widely the trojan could spread while doing nothing, but Negar and Noushin don’t address this).

Notably this was not theoretical; it ended up infecting real-world software, such as the AIMP media player, with infected builds hosted by the authors of that software and downloaded by legitimate users.

Version A worked by backing up a legitimate SysConst.pas file and replacing it with a malicious version on the developer machine. From there, it is compiled to a SysConst.dcu, which is linked into the software the developer is building, producing an infected binary. Version C also included the ability to pull malicious payloads from a remote network.

XcodeGhost

This is the next example: it was found that attackers had distributed a compromised version of the Xcode build environment, which is of course the standard tool for building OS X and iPhone software. It resulted in many successful compromises, particularly in Chinese-language applications. It worked by adding an extra path to Ld.xcspec to load malicious versions of the CoreServices libraries, so that every app built with the tainted toolchain shipped with the compromised payload included.

ShadowPad

Found by Kaspersky in 2017, ShadowPad is a sophisticated attack attributed, by Microsoft, to a non-APT actor. It was uncovered by Kaspersky in conjunction with NetSarang Computer Inc, the publishers of the Xmanager server management tools; Kaspersky found suspicious behaviour in NetSarang’s software, and the latter co-operated in uncovering the cause of the problem. As Kaspersky dug into the cause, it became clear the NSSOCK2.DLL shipped with the application had been compromised with a shellcode addition, which would then download further malware to the server.

So how did this happen? The developer workstations had a malicious mscoree.dll installed on them, loaded via deprecated functions; alongside this, the Visual Studio link.exe linker had been compromised so that it would embed the shellcode in any application being built. The initial shellcode is tiny and encrypted, and acts as a bootstrap to pull down a larger payload.

CCleaner Incident

CCleaner is produced by Avast (author note: and is terrible software you don’t want anyway). In 2017 Avast disclosed that CCleaner had been compromised by cyber criminals, with tampered builds being distributed as the official binaries downloaded from the Avast web site. When further investigation was performed, it turned out to carry a ShadowPad payload, and presumably came from the same threat actor.

ShadowHammer Operation

Announced earlier in 2019, this targeted ASUS users. The update tool provided by ASUS, and officially signed by them, had been compromised; as Kaspersky investigated further, they found that many gaming companies had also been compromised, with game binaries being contaminated.

When they investigated the developer machines it turned out to be a very different attack: the link.exe file differed from the legitimate one by only a few bytes. The linker called msodb.dll, which hadn’t previously existed; the library had only a single, empty function. Very curious! It relied on a side-effect of how Windows loads libraries to introduce malicious code: it looks for references to msvcrt.lib, a legitimate library, and tweaks the name to msvcrtm.lib, the malicious library; the result is that every binary built by Visual Studio will now be compromised.

Summary

This has moved from theoretical attacks to practical attacks of rapidly increasing sophistication quite quickly. Attackers have had a lot of success with them, since the compromised software is built and signed as legitimate binaries. It is increasingly clear that developer workstations need to be better protected against threats.

Seeing the Invisible: Finding Fingerprints on Encrypted Traffic

Adel/0x43d1

Most of your network traffic is already encrypted, so how do you tell if it’s legit traffic, or the bad stuff? By bad stuff we’re talking exfiltration, C2, and so on. As more and more protocols default to encryption - including DNS-over-TLS and DNS-over-HTTPS - and the standards become more and more resistant to being broken - such as TLS 1.3’s anti-middlebox changes - can we use network metadata and fingerprinting to uncover what’s on our network?

The key to all this is that cryptographic protocols need some plaintext negotiation before they can start exchanging encrypted streams. In TLS, for example, the ClientHello structure contains some useful information: protocol versions, cipher suites, the extensions supported, and so on; from there we can begin to fingerprint. John Althouse and a team at Salesforce have a robust fingerprinting system that uses this TLS metadata to understand what the client is up to, and what the client may be. Adel explains that this can give pleasingly accurate information on the OS and applications being used.
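
The Salesforce system being described is JA3; a minimal sketch of how such a fingerprint is computed from ClientHello fields, with illustrative values rather than a real capture:

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3 joins the decimal field values with '-' inside a field and
    ',' between fields, then takes the MD5 of the resulting string."""
    ja3_string = ",".join([
        str(version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ])
    return hashlib.md5(ja3_string.encode()).hexdigest()

# A hypothetical client: TLS 1.2 (771) offering two ciphers, three
# extensions, two curves, and one point format.
print(ja3_fingerprint(771, [49195, 49199], [0, 10, 11], [29, 23], [0]))
```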

Based on this, Adel wanted to see what he could do with RDP; it’s an interesting protocol because it features in plenty of breaches and is commonly used for lateral movement after an initial compromise. Being able to see it used in ways you don’t expect - between clients and servers that shouldn’t be communicating, or by a version of Windows that shouldn’t be on your network, for example - could be really useful for defensive teams.

In one RDP security mode, the protocol runs over TLS, so it’s quite feasible to fingerprint and identify the client in a lot of detail. In another mode, it doesn’t use TLS, but a protocol-specific encryption with a Basic Exchange Settings set of fields, which contain interesting data: keyboard settings, versions, encryption methods, and the like. From this, Adel has extracted groups of fields to generate a fingerprint that he can then map back to specific versions of the RDP client. This might be Windows 10 - or it might be bluescan, a tool that looks for the BlueKeep vulnerability.
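
I don’t have the detail of Adel’s exact field groupings, but the shape of the technique is easy to sketch: pick a stable subset of the plaintext negotiation fields (the names below follow RDP’s client core/security data structures; the grouping is my guess) and hash them into a comparable fingerprint:

```python
import hashlib

def rdp_fingerprint(version, client_build, keyboard_layout, encryption_methods):
    """Hash a stable subset of the plaintext negotiation fields."""
    fields = f"{version:#x},{client_build},{keyboard_layout:#x},{encryption_methods:#x}"
    return hashlib.sha256(fields.encode()).hexdigest()[:16]

# Two clients differing in any of these fields get different prints.
print(rdp_fingerprint(0x80004, 19041, 0x409, 0x3))
```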

Looking at traffic in the wild, and combining fingerprint data with source IPs, Adel was able to create a visualisation mapping out clusters of traffic and fingerprints; from it he could see that certain IP addresses have suspicious patterns of tool use - for example, an IP address randomising its ClientHello fields in a repeating pattern that doesn’t correspond to any real-world set of RDP clients. That’s a very unnatural behaviour; ironically this effort to evade detection makes them more obvious, because hundreds of connections with randomly differing ciphers stick out.
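
One way to operationalise that observation - the threshold is hypothetical, but the idea is just counting distinct fingerprints per source:

```python
from collections import defaultdict

def suspicious_sources(events, threshold=10):
    """events is an iterable of (src_ip, fingerprint) pairs. A real
    host usually presents one stable fingerprint; a scanner that
    randomises its ClientHello presents many, which is itself a tell."""
    seen = defaultdict(set)
    for src_ip, fingerprint in events:
        seen[src_ip].add(fingerprint)
    return {ip: len(fps) for ip, fps in seen.items() if len(fps) >= threshold}
```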

So we can still use network traffic for some sorts of network monitoring; we just have to rely on the metadata and clever analysis of the initial handshakes to draw out what we want to know.

The Story of the Uncrackable Lockbox and Why Hackers Need to Work Alongside Developers

Matthew Ruffell

TimeLock is a homemade encryption program, made by /u/cryptiocomicon of AlgoMachines; it claims to implement an unhackable, time-sensitive lockbox aimed at securing bitcoin. There was a series of challenges on reddit offering a 0.02 bitcoin bounty for anyone who could break in. I will apologise in advance: the finer points of some of the breaking Matthew did are a bit beyond me, so I may not have recorded it all accurately.

For each challenge the author provided most of the details to open the lockbox, so that the test would be purely one of whether the time-lock is working.

For the first challenge, Matthew started by inspecting the binary for strings: specifically the errors thrown when you can’t open the lockbox. It turns out there’s just one error message telling you it’s not time to open it. From identifying where this string lived in the binary, he was able to find the conditional branch that determines success or failure, by finding the instruction sequence that jumps to that error; from there it was easy to patch the binary to always take the success path, irrespective of the time, and run it. Voila, lock broken. The lesson here: if a binary is running on someone else’s machine, you need to assume they will try tampering with the file to get what they want.
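
A sketch of that first patch - the filename, offset, and opcodes here are hypothetical, but this is the general shape of the attack:

```python
# Once the conditional jump gating the "not yet" error is located (by
# following the error string's cross-reference), flip it. 0x74/0x75 are
# the x86 short JZ/JNZ opcodes; 0xEB is an unconditional short JMP.
# (Depending on which side of the branch succeeds, you would either make
# the jump unconditional, as here, or NOP it out.)
data = bytearray(open("TimeLock.exe", "rb").read())

GATE = 0x1B2F0                            # hypothetical offset of the time check
assert data[GATE] in (0x74, 0x75), "expected a conditional jump here"
data[GATE] = 0xEB                         # always take the success branch

open("TimeLock-patched.exe", "wb").write(bytes(data))
```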

For the second challenge the code around determining success or failure was made more robust. Here Matthew again checked the error string that pops up when you can’t open the box, and walked back to the code branch that goes to success or failure. This time the developer had encrypted the evaluation criteria to make patching the binary harder; however, by letting the program run, Matthew could observe the code as it executed in memory, stepping through it with a debugger, and was able to manipulate the in-memory values as the program was running, breaching the box. Lesson: if the attacker controls the environment, you can’t assume that memory is safe, either.
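
The second break, sketched as a debugger script - the real target is a Windows binary where you’d likely use x64dbg; the GDB syntax, address, and register here are stand-ins for the idea:

```python
import gdb  # only available inside gdb's embedded Python interpreter

gdb.execute("break *0x00401a2c")   # just after the decrypted check value lands
gdb.execute("run")
gdb.execute("set $eax = 1")        # overwrite the in-memory result: 'success'
gdb.execute("continue")
```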

Here the author began to step things up, using the time reported by the blockchain network itself to determine success or failure, and hardening the local code further. For the third challenge, Matthew decided to use the lockbox’s connection to the blockchain against it via a Sybil attack. He traced the hardcoded DNS lookups TimeLock makes to find its seed nodes; by changing the Windows resolver to point at his own DNS server, he forced it to talk to a set of fake seed nodes, which reported a fake time that allowed the safe to open.
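
A sketch of the fake seed-node resolver half of that, using the dnslib package (the answer address is hypothetical, and the real attack also needed fake nodes behind these answers reporting a fake time):

```python
from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

class FakeSeedResolver(BaseResolver):
    """Answer every query with the address of our fake seed node."""
    def resolve(self, request, handler):
        reply = request.reply()
        reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A("192.0.2.10"), ttl=60))
        return reply

# Point the victim's resolver settings here and TimeLock's hardcoded
# seed lookups all land on nodes we control.
DNSServer(FakeSeedResolver(), address="0.0.0.0", port=53).start()
```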

From there the author put more controls in place, integrity checking the seed nodes so fake seed nodes won’t work any more. This makes it a lot harder to break. So Matthew tried another approach for the fourth challenge: attacking the crypto. He was able to substitute the symmetric encryption cipher and open the box. This led to the author of TimeLock adding more rounds of encryption: some of the rounds were the same symmetric cipher; the second round wasn’t, but it used hardcoded constants which Matthew was able to manipulate to unroll the encryption.

Matthew noted that the takeaway here is that the author was, by working with attackers, able to dramatically improve the security of their application fairly quickly, and for relatively little cost in terms of the bounty - he would like to see more co-operation between software authors and attackers to improve security.

The Call is Coming From Inside the House - What Data Do You Need?

Michelle Burke/@smrtgirl

Michelle works in data analysis/big data; she has sat in meetings where someone says “we don’t know what we want, just collect everything and sort it out later” and you probably have, too. She’s ashamed to admit that sometimes she hasn’t pushed back on that hard enough, but now does, and wants us to think about why we should: it’s estimated that 1.7 MB of data per person per second will be collected by 2020. Storage used to be a reason to be moderate about what we collect about people, but nowadays it’s no longer really a constraint. That doesn’t make it a good idea to blindly collect everything: for one thing it’s so much data that the data you don’t care about - and the garbage data - makes the good data useless. And, of course, you can’t secure and manage it all. If you’re collecting PII, are you even complying with the Privacy Act here in New Zealand, never mind the GDPR? What happens when you have a breach?

Riffing on Cory Doctorow’s comparison between stockpiling data and stockpiling plutonium, Michelle wants us to stop collecting everything. Collect what you actually need. For example, if you want to do a loyalty program, what do you need? An email address? A name? A nickname? If you want to send a birthday voucher to encourage people to spend, maybe just ask for the month, not the day?

And that’s just the beginning: the depression.org.nz site, while offering a lot of good tools, was breached and leaked individual test results, because its quiz URLs allowed enumeration attacks. It also used Hotjar’s keylogger/clickstream tracking for analytics. Who knows where that metadata ended up, or what else it’s cross-referenced with, or who will end up buying the analytics company.
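
The fix for the enumeration half of that is old and cheap - the contrast in miniature, with hypothetical URLs:

```python
import secrets

# Sequential IDs let anyone walk every user's results just by counting.
bad_url = "https://example.org/results/1042"       # try 1043 next...

# An unguessable capability token doesn't enumerate.
good_url = f"https://example.org/results/{secrets.token_urlsafe(24)}"
print(good_url)
```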

Unfortunately, particularly in the case of non-profits, funding is short. It’s a fight to get people to do the right thing.

Consider 3Fun, a dating site for threesomes. A security firm found that it was an absolute disaster area, exposing chat data, sexual preferences, photos, and basically everything. It identified users in the White House and all over the show.

So what is to be done? Ask questions. Insist on keeping the minimum of data. Share news about data breaches to help people understand the nature of the problem. Be an advocate. Help people understand that we have a duty of care when we deal with data.

Decrypt Everything, everywhere

Burke Kauer & Dennis Mantz

Every year they look at a different security appliance, and generally find them full of holes; this year it’s Vormetric’s crypto appliances. They’ve won security awards in 2014 and 2018, and are considered the “holy grail of cloud encryption”. The appliance purports to do everything - an HSM and a key management solution that encrypts all your data, acting as a middlebox for all your cloud data, filesystems, etc. Your data is never unencrypted at rest.

So the first job was to get root on the device. If you try to configure it over SSH or a serial console, you get only a restricted shell which filters special characters and doesn’t allow you a regular shell. The commands available in the restricted shell are wrappers around regular shell commands. One of those is the ip tool, which can do quite a lot of things - including creating a new network namespace and then spawning a shell inside it (something like ip netns add x; ip netns exec x /bin/sh). Which turns out to be a root shell.

Whoops.

By default there are two network-facing services. There’s no ASLR globally, which seems like an odd choice, and the binary for the admin listener is built without ASLR or stack protection - “what the fuck?” At this point, Dennis wanted to build a fuzzer to find ways into the binary, using Frida Stalker and Radamsa as well as their own code, which they’ve released on GitHub. This let them find a denial of service, throwing the raserver into an endless loop that burns CPU; this is the second vulnerability.
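
Their harness is on GitHub; a stripped-down sketch of the same idea (host, port, and seed file are hypothetical, and this omits the Frida Stalker coverage feedback): mutate a known-good request with Radamsa and replay it at the listener until something misbehaves.

```python
import socket
import subprocess

SEED = open("good_request.bin", "rb").read()  # a captured legitimate request

for i in range(10_000):
    # radamsa reads a seed on stdin and emits a mutated version on stdout
    mutated = subprocess.run(
        ["radamsa", "--seed", str(i)],
        input=SEED, capture_output=True, check=True,
    ).stdout
    with socket.create_connection(("appliance.local", 8443), timeout=5) as s:
        s.sendall(mutated)
        try:
            s.recv(4096)
        except socket.timeout:
            # a hang (like the raserver CPU-burn loop) is a find: keep the input
            open(f"hang-{i}.bin", "wb").write(mutated)
            print(f"iteration {i}: no response, input saved")
```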

Next they started looking for other vulnerabilities. The appliance has a bash script that parses SSH logins for reporting. The script is poorly written and has an unauthenticated RCE, because it doesn’t sanitise the input from the SSH logs. Yay! Unfortunately that script doesn’t run as root, so the remote shell needs to be escalated; one method reuses the ip trick above.
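
The bug pattern, transposed into Python for illustration (the appliance’s real script was bash, and the report command here is made up): a field the remote user controls - the login name appears verbatim in the SSH log - ends up interpolated into a shell command.

```python
import os

# From sshd's log, with an attacker-chosen username embedded in the line:
log_line = "Failed password for invalid user `touch /tmp/pwned` from 10.0.0.5"

user = log_line.split("user ", 1)[1].rsplit(" from", 1)[0]
os.system(f"log-report --user {user}")   # backticks execute: unauthenticated RCE
```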

But there is another! If you’re local to the box, you can connect to a port on localhost and gain a root shell.

So they went to the vendor to disclose. Thales’ response was “your root shell is not possible.” They have refused to issue any CVEs, customer advisories, or patches.

Liar, Liar: A first-timer ‘red-teaming’ under unusual restrictions

Mike Loss/@mikeloss

The Brief: When Mike was a new pen-tester, he got asked to work on a red team. Unfortunately the client’s idea of what red-teaming is would not pass muster with any reasonable standard: very tight constraints - no attacks on anything except a particular server, no lateral movement, and no defacement that was political or obscene. Mike tried and failed to convince them that this was unrealistic. Oh, and the hacks had to be in business hours, and the testers couldn’t lie to the staff.

Lesson #1: If the client asks for something dumb or impossible, ask what they ACTUALLY need.

They decided to give it a whirl anyway. So they prepared. The physical site they were attacking was a hard target: set back from the road, mirrored windows, in a rural area, no wi-fi.

Lesson #2: Don’t make your drop boxes look like bombs.

Mike over-prepared. He kitted himself out like he was breaking into the Pentagon.

Lesson #3: You are not James Bond. Don’t bring a pile of things you’ll never use.

Not allowed to lie to get in? “I’ve been sent by IT to do some network testing. I just need to find a port to plug into.”

Lesson #4: Half the truth can be a great lie!

After being shown to an empty office, Mike poked around to find power and a live port.

Lesson #5: Don’t panic, and build battery backup into your drop box.

Next, he started looking for a place to put his second drop box. After hearing someone in an office talking about cybersecurity Mike experienced a moment of panic… except it was just the sound of someone doing their compliance training. Popping the second drop box in place proceeded without further incident. From there it was a matter of leaving the building and hacking the server.

Lesson #7: Nothing is too dumb to be worth trying.

With politics and obscenity off the table, they went for a cron job popping up 80s ANSI art. Which worked splendidly until someone took the server offline. So they found a vulnerable server, and re-pointed DNS to it. At which point the client rang to explain that the staff were panicking about the defacement of a powered-off server.

A year later, the client came back wanting an incident response test. This led to a hilarious report, which started with popping “a video of Russians dancing to folk metal” onto a security user’s desktop. No response. “An ASCII-art animation of Rick Astley.” No response. A pop-up notification: “Hi”. A PC reboot, but no incident response. Another ASCII-art animation followed, this time on another security user’s PC. No response. A pop-up saying “PAY ATTENTION TO ME.”

No response.

A pop-up saying “LOL HACKED” while playing Thunderstruck by AC/DC.

“OK, I think they’ve found you on the SCCM server.”

“I’m not on the SCCM server.”

“OK, they’re going to cut off the Internet connection.”

How is Scooter

“You may remember me from such exciting adventures as examining the applications used to rent scooters.”

  • Frontend scooter APIs leak lots of data.
  • Track people as they ride.
  • Locate every scooter.
  • But no crimes.

But this year?

  • Actual crimes.

Don’t do what Matthew does unless you’re reasonably confident you won’t be arrested. While Matthew will be presenting this quickly, the work was slow: “I’m not actually clever, I’m just bad at giving up.” Voiceover: That was a lie.

We’re going to be talking about Bird scooters. In the past these were mostly commodity scooters with bits hacked on; these days they’ve become more custom-built for the wear and tear of commercial use. However, the controller box is the same on all the models, bespoke or not. Thanks to the FCC web site, Matthew was able to get pictures of the board, which let him identify various interesting components, including the Nordic modem and the STM32 controller, as well as the debug pads he was able to hook into with his ropey home-built cable.

Matthew notes that his cable was almost as much a hindrance as a help, seeing as he made it in a rather second-rate fashion. He would advise you to build better cables if you want to do this.

Because the debug pads let him connect with OpenOCD and an ST-Link, he was able to dump the memory state over the cable and even run GDB remotely.

Having dumped the binary, he found that running strings over the brainbird.img file mentioned a “Free Ride” mode, which is awesome because free things are good. Loading it into the Ghidra reverse engineering tool, which knows how to turn ARM assembler into pseudo-C, allowed Matthew to identify where the “Free Ride” mode can be called from: it comes from the Bluetooth controller. So how do we break into that?

Well, happily the Nordic nRF52 is another ARM processor, running a mix of low-level Bluetooth code and a higher-level operating system based on the published SDK. By eliminating the code that Ghidra could identify as being derived from the SDK, Matthew was able to focus on the non-standard code and work out which code paths seemed likely to allow an unlock. While his first effort to get free rides didn’t work so well, searching further in the code found some complex maths, with big blocks of reference data associated with it: Googling some of the chunks of reference data told Matthew that the code was AES128.
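
Those blocks of reference data are a classic tell: AES’s lookup tables are fixed constants, so you can often identify it just by searching a dump for the first bytes of the S-box. A quick sketch (filename hypothetical):

```python
# The AES S-box always begins 63 7c 77 7b f2 6b 6f c5...
AES_SBOX_PREFIX = bytes([0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5])

firmware = open("nrf52-dump.bin", "rb").read()
offset = firmware.find(AES_SBOX_PREFIX)
if offset != -1:
    print(f"AES S-box candidate at {offset:#x}")
```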

Fortunately Nordic will happily sell you dev boards: this allowed Matthew to run the firmware dumped from the scooter on a board that behaves identically. It is apparently a lot more pleasant to hack on firmware through a dev board with a USB interface than over the previously mentioned dodgy serial cable. Happily it turns out that there are some poor choices in the encryption protecting the ability to send commands and have the Bluetooth controller accept them.

So does that work?

Yes. For two minutes. Because it phones home after two minutes, and home notices you shouldn’t be riding for free. So it shuts you down. Which is unfortunate, although evidence that the folks at Bird have put a bit of thought into this.

So what about the Internet side? Well, the modem has a full Linux image with a default root password which can be found with a little Googling; Matthew notes that Google makes a lot of reverse engineering easier. With a little poking about, you can control the modem via gdb, much like the Bluetooth stack. To get the modem online and masquerade as the scooter - and hence convince Bird to give you free rides - you need a couple of things: the IMEI and the serial number. You can use the IMEI the normal modem operation uses, and the scooter broadcasts its serial number over Bluetooth (handy!). With all that, you can be a scooter, and instruct the Internet that you’re now doing free rides. At which point the scooter is unlocked.

Matthew notes that this is, in fact, somewhat illegal; he has just kind of had a go at this, and not ended up being arrested. If you try to replicate this, be aware you are doing a crime, and if you get arrested as a result “I have been a poor role model.”

In conclusion, Matthew would like to thank Bird for not suing him, and notes that they have fixed a number of the vulnerabilities he uncovered.

Securing Kubernetes with K-Rails

k8s is an open source container orchestration solution; we rattle through the basics of k8s. The threat model they’re interested in is not admins or unauthenticated users, but rather users with credentials who are doing the wrong thing: perhaps attackers, but more often simply developers getting it wrong.

Securing k8s can be difficult, and there are a lot of routes to privilege escalation if you get it wrong, even if you get a lot of the basics right. Examples include allowing containers to mount host filesystems, add capabilities, or use the host network - so you want to be careful what you allow container images to do. Mounting the Docker socket is probably one of the worst: you are effectively running on the host as root, and can seize the kubelet from there.
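
The kind of pod spec such tooling exists to reject, expressed here as the Python dict equivalent of the YAML - the Docker socket escape in miniature:

```python
# A hostPath mount of /var/run/docker.sock lets the container drive the
# host's Docker daemon - i.e. run anything on the node as root.
risky_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "innocent-looking-pod"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "alpine",
            "volumeMounts": [{"name": "dock", "mountPath": "/var/run/docker.sock"}],
        }],
        "volumes": [{"name": "dock", "hostPath": {"path": "/var/run/docker.sock"}}],
    },
}
```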

And while clouds can make things better by taking care of the basic security steps, they can also make things worse: for example, the GCP-provided metadata API can allow a container to, ultimately, pull all the secrets in the cluster - passwords, certs, and so on. There is tooling to do this automatically.

So what to do? Well, ideally you’d have a tool that sets the right defaults, does so without interfering with the cluster users, and without breaking anything it shouldn’t.
