After a much-deserved rest, the good folks of the Crüe return for the 11th Kiwicon, styled Kiwicon 2038AD and with a nifty cyberpunk/Blade Runner theme. We have gloomy colour schemes and mirror shades. We’re returning to the Michael Fowler Centre so as to allow a couple of thousand of us to show up and learn more about the terrors of what happens when thinking rocks go wrong.
Welcome to the Future
A late opening left me standing outside the doors, wondering if metl was inside thinking to himself, “We sold a lot of tickets, where is everyone?”
As always the opening is theatrical and themed, marred for me somewhat by the sound system not projecting well to the seating at the sides of the theatre (an odd problem to have at a concert venue), leaving a lot of the intro incomprehensible. Someone is still stuck in the States.
It turns out that the stuck person is the wonderful Jessie Frazelle, so we’re missing her for the opening session.
“Just to be clear I am not representing my employer.” “Scooter! Scooters are fun!” And even more fun in a security context, I suspect. “It’s a real pleasure to be back at Kiwicon.”
What is an e-scooter? An electric scooter. With an app. “Does anyone here work for Lime? You have to tell me.”
“e-scooters are IoT. IoT is terrible. OH GOD WHY?”
IoT scooters are just put out on the streets. Which makes them a tempting target, as well as a way of making money. The minimum viable product is adding a GPS so your customers can find them; after you add features - in the form of GSM and controller boards - your scooters are “very online” and have IP addresses.
Bonus points: can your scooter withstand people? Even better if it’s fantastically bling - for example cyberpunk underfloor lights.
Due to their popularity e-scooters have become huge business; Spin has just sold for $200 million, while Lime has raised over $400 million. This has made manufacturing complicated: it now covers a range of use cases from “crap for children” to “sold to adults and should not fall apart for at least three months”. Also, customers hate it if your batteries catch fire.
(Matthew would like to apologise for exposing us to Android reverse engineering)
The apps are APKs: essentially a zip file full of code you can inspect with apktool and jadx to make sense of the raw assets; apktool turns the app into an intermediate representation which is mostly human readable. And then it can take your edited files and turn them back into an APK. “I don’t understand this, I’m a biologist, not a computer scientist.”
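Since an APK really is just a zip archive, you can poke at one with nothing but the standard library; the sketch below builds a toy APK-shaped zip in memory (the entries are invented stand-ins, not a real app) to show the sort of layout apktool and jadx unpack:

```python
import io
import zipfile

# An APK is an ordinary zip archive with a conventional layout.
# Build a toy one in memory to illustrate the structure.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", "<manifest/>")    # binary XML in a real APK
    apk.writestr("classes.dex", b"dex\n035\x00")          # Dalvik bytecode
    apk.writestr("res/values/strings.xml", "<resources/>")

with zipfile.ZipFile(buf) as apk:
    print(apk.namelist())
# A real APK lists the same kinds of entries; jadx decompiles classes.dex,
# while apktool decodes the manifest and resources.
```

The same `zipfile` call works on a real `.apk` pulled off a device, which is the whole point: there is no magic container format to defeat.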
“So what’s the point? Reverse engineer apps, do crimes. (Don’t do crimes).”
Code isn’t really the important part of this: just run grep -r https:// . over the decompiled code. No, seriously. Because companies are very sure no-one will ever do this. But the important thing is actually what goes over the wire; the protocol is truth. So you want to MITM the thing: install a MITM proxy and go to town. Unless the app does cert pinning.
Now we need to do MITM the hard way: find the certificate, generate a new certificate, and replace the old one. And you’re on your way! But that’s hard - so you could instead use apktool to delete the calls that enable pinning, which is the easy way to make money in this industry.
So what is an e-scooter app?
- Something that shows you where the scooters are.
- Very accurately.
- With unique identifiers.
- Something that lets you hire the scooter.
So what could go wrong? Well, when you hire a scooter, it disappears from the map while it’s hired, and then reappears later when it’s available for hire. With a unique identifier. So what do you call this? A privacy violation with cheese. Because if you, say, ride a scooter home, it will reappear outside your home. If you ride to work, or to a friend’s place, you’ll be mapping your life.
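The tracking trick is nothing more than diffing successive snapshots of the public scooter map; a toy sketch with invented scooter IDs and coordinates:

```python
# Hypothetical snapshots of the public "available scooters" feed,
# taken a few minutes apart. IDs and coordinates are made up.
before = {"SC-1042": (-41.2865, 174.7762),   # parked in the CBD
          "SC-2211": (-41.2900, 174.7800)}
after  = {"SC-1042": (-41.3017, 174.7890),   # now on a residential street
          "SC-2211": (-41.2900, 174.7800)}   # unmoved

# A scooter that vanished and reappeared somewhere else was ridden:
# the pair (old position, new position) is an origin -> destination trip.
trips = {sid: (before[sid], after[sid])
         for sid in before
         if sid in after and before[sid] != after[sid]}
print(trips)  # ride the same scooter home and this pairs the CBD with your address
```

No authentication bypass, no exploit: just the feed the app itself consumes, replayed over time.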
Which is great, but surely we can get more. Of course we can! Accounts are linked to phone numbers, which allows rate limiting, which makes it a bit hard to beat on the API to get lots of information. Unless you use throwaway SMS numbers. Which are available as a service.
Which allows you to get a world-wide map of where all the Lime scooters are in the world. Which is rather a lot of the world. Including, for example, the Suez canal (presumably being transported by boat). Which is a nice start, but it turns out that you can actually see the scooters which are in use; if you repeatedly poll an in-use scooter, the API will tell you where it is.
What could possibly go wrong?
(Other than international espionage.)
Well, what happens if you take an off-the-shelf scooter, replace the firmware, stick a GPS on it, and then write an app that’s just React? That’s awful, right? But it’s what Spin did. The app uses Bluetooth to unlock the scooter. You could use, say, TOTP to unlock the scooter in a sane, secure way. Right? Pfffft. You’d make the token unique and obfuscated, right?
Yeah, nah. There is one unlock token for the entire fleet: writing a fixed sequence of bytes will unlock any Spin scooter. And since Spin scooters don’t have a GSM connection to tell the server they’ve been unlocked, you now have a Spin scooter. You shouldn’t do that, of course.
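For contrast, the sane TOTP scheme alluded to above (RFC 6238) fits in a few lines of stdlib Python; this is a sketch of the general technique, assuming each scooter holds its own secret rather than one fleet-wide token:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Each scooter would hold a unique secret; the server verifies the
# short-lived code, so a sniffed unlock is useless 30 seconds later.
print(totp(b"12345678901234567890", t=59))  # RFC 6238 test vector: 287082
```

Codes expire every 30 seconds and are per-device, which is exactly the property the fleet-wide static byte sequence lacks.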
So what’s the moral of this story?
- Capitalism is bad.
- You can make a lot of money with a shitty product.
- So maybe capitalism is good?
- (That is not the moral. The moral is up to you.)
vmpklōn - Creation of a VMProtect Clone
Jon works at FireEye as a reverse engineer, but is not the author of The Art of Exploitation.
What is VMProtect?
Packs and mutates code, obfuscates strings, and can detect being run in a VM and debuggers.
Because the code is translated onto a stack based VM, the original x86 code can’t be retrieved; Jon thinks this is perhaps the most interesting aspect of VMProtect.
Stack Based Virtual Machine
A stack based virtual machine performs emulation via a stack machine architecture; common examples include the JVM and the .NET CIL/MSIL. Stack machines operate by executing against a stack rather than on a conventional register-based processor.
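A stack machine in miniature makes the contrast with register machines concrete; the instruction set below is invented for illustration:

```python
def run(program):
    """Evaluate a tiny stack-machine program: operands live on a stack,
    not in named registers."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 - what would be a mov/add/imul register dance on x86
# becomes pure stack traffic.
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # 20
```

Once x86 has been translated into a form like this, the original register allocation is gone, which is why recovering the source instructions is so hard.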
VMProtect has existed for a long time, and after Jon heard people complaining about problems working with VMProtect Jon took it upon himself to make it better; this started with analysis of the current versions of VMProtect. Jon had a lot of questions trying to understand the current tool and how and why it worked, without any help from the author. While the author had published papers on VMProtect, it was unclear which versions the papers referred to, and the author was unresponsive to questions.
After a lot of frustration trying to blackbox VMProtect, Jon needed a break. The break was finding an old version of VMProtect courtesy of the Wayback machine, allowing him to start reverse engineering its functionality from a simple version that hadn’t accumulated a lot of junk code over time.
Writing a Disassembler
Using some well-known sample code and an older baseline of VMProtect, Jon was able to analyse and understand how VMProtect translated a source .exe file into the stack language of the VM, and from there to understand both the translation and how the VMProtect stack actually works, beginning with the VM setup.
Jon iterated over the translation, taking well-known x86 instructions in a source exe and gradually mapping out how VMProtect created its destination intermediate language from arbitrary source input.
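That iteration - feed in known x86, observe the emitted intermediate language - can be mimicked with a toy translator; the mapping below is entirely made up and is not VMProtect’s real encoding:

```python
def translate(x86):
    """Toy x86 -> stack-IL translator: register operations become
    stack traffic. The IL mnemonics are invented for illustration."""
    il = []
    for op, dst, src in x86:
        if op == "mov":
            il += [("push_imm", src), ("pop_reg", dst)]
        elif op == "add":
            il += [("push_reg", dst), ("push_imm", src),
                   ("vadd",), ("pop_reg", dst)]
    return il

# Feed in a known x86 snippet and inspect what the translator emits -
# the same experiment, in spirit, that maps a protector's IL.
il = translate([("mov", "eax", 2), ("add", "eax", 3)])
for insn in il:
    print(insn)
```

Running small, known inputs through a translator and diffing the output is how you reconstruct an undocumented encoding one opcode at a time.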
(The intimate details of the disassembly work are, unfortunately, a lot cleverer than I am; this is not a comment on the quality of the presentation, which was perfectly lucid.)
Clones Don’t Always Turn Out Well
Jon’s clone probably wouldn’t survive in the wild. It has been a great learning exercise, but probably isn’t a sustainable, usable tool. And what did Jon learn?
- Software gets more complex over time.
- Microsoft Windows.
When people have a bit of code with a long history it’s easy to keep building on it, which is why Microsoft can, for example, ship mitigations for vulnerabilities so quickly; trying to catch up is really, really hard because you have to start from scratch and follow them on the same journey. But starting with an old version and following that journey can be really worthwhile, because you learn a huge amount by doing so.
Apathy and Arsenic: a Victorian Era lesson on fighting the surveillance state
Attacus is a first-timer at Kiwicon, but I’ve seen her speak at a number of linux.conf.au conferences, and she’s always been great to listen to.
Arsenic, she notes, has been used for many things since at least the times of the Greeks; for the purposes of this talk she will be talking about the use of arsenic as a poison. In the 1800s in England it was very cheap, and readily available as an industrial byproduct. It was used in all sorts of things, some legitimate (poisoning vermin) and some less so (make-up ingredients). Oh, and in colours like Paris Green, where it could end up poisoning people via paintings, playing cards, fabrics, or wallpaper.
If you take a lot of arsenic it acts like cholera: the worst shits you’ve ever had, and dying. In small amounts, it makes you ill, and then you die. In England, the seaside cure may in some part be attributed to people going from cities with arsenic leaching from the wallpaper to seaside towns where the walls were typically whitewashed. In the absence of arsenic, they would feel better.
Arsenic had a reputation as a quick way to make money, becoming known as “inheritance powder” for its efficacy in poisoning people via food or drink. But since we’re at a hacker conference, let’s talk about data: the pervasive surveillance by government and companies, and the poor storage and control of it, or its active misuse.
Like arsenic, data gathering is understood to be bad by many experts. But like arsenic, it’s cheap and useful to provide services by dragnetting and exploiting data. And much like arsenic, mass surveillance and privacy invasion can have uneven effects; some people might get a little ill from arsenic, others might die. Some people might be the victims of targeted attacks.
Attacus illustrated the point with the story of Mary-Anne, who married and had nine children; seven died, which was not uncommon, and the family collected the insurance. Some time later, her husband died of gastric fever, just like the children. She re-married, but her second husband also died of gastric fever. And then her two remaining children from her first marriage. And the child of her third marriage. And her friend, but only after the friend had introduced her brother to Mary-Anne. After a few more households had died of gastric fever, it was uncovered that she had murdered a friend, 16 children, and 4 husbands; she died of rope poisoning.
Now you can make even more money with stolen data. Attacus shows us an email scam from this year, based off cracked password DBs from insecure sites; by using an actual password from a breached site, the attacker convinces you to pay a ransom, claiming to have broken into your entire digital life. Based on research it appears that 70-odd bitcoin had been transferred as a result of this scam; one of the wallets appears to hold over 900 bitcoins: millions of dollars.
The thing that makes this work is using a real password: it’s the kernel of truth that the lie is built around.
Of course, most people in Victorian England didn’t need to live with a murderer to be affected by arsenic: the wallpapers and clothing would kill people or leave them with debilitating illnesses that could never be explained. And like arsenic, pervasive data gathering can affect you even without living with a hacker. The rollout of Google Plus harmed many trans people whose identities were exposed in ways that were hazardous to them. Facebook’s mishandling of data has allowed malicious actors such as Cambridge Analytica to affect elections.
The problem is that people mostly don’t give a fuck, until we reach a point that Cory Doctorow has dubbed “peak indifference”, the tipping point where people move from not caring to caring a lot. This year may have been a tipping point: research has shown people are beginning to move away from Facebook for example, and the GDPR has kicked in and forced people to confront how much data is being hoovered up.
But how do we sustain this? We need a combination of awareness and resistance. In the case of arsenic, for example, awareness was helped in the 1850s when reliable tests for arsenic poisoning became available; around the same time John Snow developed a theory of cholera being spread, not by miasma, but by water, which led to people developing a better understanding of how harm could spread.
From here, resistance developed: people began publishing polemics against arsenic; many other countries began to ban arsenic in the 1860s, which flowed into England as well. Over time manufacturers were forced to offer arsenic-free options, more and more so as European manufacturers simply stopped offering arsenic-laced dyes.
Finally, accessible alternatives made it easy for people who didn’t particularly care to benefit from these changes. “So for those of you who have read this far into the analogy, you’ll see where we’re going” - education efforts like crypto parties, VPN advertising and so on are a thing; resistance has ramped up: Apple has offered privacy as part of its marketing efforts, for example. Password managers are becoming more popular. The GDPR has kicked in.
The third thing, though, is time: eliminating arsenic took time. Patience is hard. Taking time is hard. Accept this won’t change overnight. Speak about this, and boost people who speak about it. Lobby lawmakers, not just in your own country, but in others. Help people. Push against legislation that will undermine privacy - and accept that this battle has to be fought over and over again. Challenge data gathering as a meaningless KPI. Attack advertising and spin by companies like Facebook.
“We have a responsibility to make the world better for people who don’t have the knowledge we do.”
“Cynicism is a lot easier than doing the right thing. Shitting on things gets you retweets, but keep trying.”
Another great Attacus talk.
A tool for automated smart contract symbolic vulnerability discovery and exploit synthesis
Caleb “alhazred” Anderson
How do you hack smart contracts? And how do you do it in the laziest way possible?
Ethereum is interesting because of the smart contracts. Smart contracts are used for escrow, custom share-like tokens, or crazy things like the DAO (Decentralised Autonomous Organisation).
What could go wrong?
- The DAO: 3.6 million eth stolen.
- Parity multi-sig wallet theft (150,000 eth).
- Later on the same dudes lost ten times that.
- There’s a libertarian idea that we can have a bold new future where code is law.
- Hilarity ensues.
- The DAO hacker exploited a flaw in the code, and affected surprise that this was considered theft.
- The libertarians disagreed and went to the police.
The DAO hack was a “re-entrant attack”: the debits and credits were not a single unit of work, so withdrawals could be made multiple times before the balance of the account was updated.
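The re-entrancy bug is easy to simulate outside the EVM; here is a Python sketch of the broken pattern (external call before the balance update), with all names and numbers invented:

```python
class VulnerableBank:
    """Mimics the DAO bug: the 'send' happens before the balance update,
    and the recipient's callback can re-enter withdraw()."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, who, amount, receive_hook):
        if self.balances[who] >= amount:
            receive_hook(amount)              # external call first (the bug)
            self.balances[who] -= amount      # state update last

bank = VulnerableBank({"attacker": 10, "victims": 90})
stolen = []

def evil_hook(amount):
    stolen.append(amount)
    if len(stolen) < 5:                       # re-enter before the debit lands
        bank.withdraw("attacker", amount, evil_hook)

bank.withdraw("attacker", 10, evil_hook)
print(sum(stolen))  # 50 drained from an account that held 10
```

Every re-entrant call passes the balance check because the debit hasn’t landed yet; the fix is to update state before making the external call (or make the pair atomic).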
Smart contracts run in a VM, the EVM. It’s an assembly-like language that can be compiled from other languages. Caleb’s objective is not to painstakingly go through smart contracts one by one, but rather to discover weaknesses and autogenerate exploits; the laziest way to do this is by bytecode analysis, looking for weak patterns.
Symbolic execution allows us to work out the constraints on inputs that will lead to a specific outcome: we can describe the inputs that will lead to the outcome we want: a formal definition of theft as Caleb puts it.
A simple algorithm for theft is to follow all branches of the contract, keeping track of the constraints on all the variables, looking for examples of outcomes where we make a profit. From this, Caleb implemented his own symbolic Ethereum VM with an SMT solver.
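Lacking a real SMT solver, the “formal definition of theft” can still be sketched by exhaustively exploring a toy contract’s input space and keeping the cases where the payout exceeds the deposit; the contract logic here is invented for illustration:

```python
def toy_contract(deposit, code):
    """A deliberately buggy toy contract: one branch pays out
    double the deposit if a magic value is supplied."""
    if code == 42:           # forgotten debug path
        return deposit * 2   # pays out more than was put in
    return deposit           # normal path: you get your own money back

# Brute-force stand-in for symbolic execution plus SMT solving:
# search the input space for (inputs, payout) where payout > deposit,
# then maximise profit - i.e. solve the "theft" constraint.
exploits = [(d, c, toy_contract(d, c))
            for d in range(1, 4)
            for c in range(100)
            if toy_contract(d, c) > d]
best = max(exploits, key=lambda t: t[2] - t[0])
print(best)  # (3, 42, 6): deposit 3, code 42, payout 6 - maximum profit
```

A real tool replaces the brute-force loop with path constraints handed to a solver, which scales to inputs far too large to enumerate.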
He then built a database of contracts to examine.
His SMT solver of choice was Microsoft’s Z3 solver (“The one good thing to come out of Microsoft”).
- A SAT solver with extra capabilities.
- Gives us constraint checking (is this situation possible), variable instantiations, and maximum values (what is the greatest value, i.e. profit, we can obtain).
This leads to Moriarty: an Ethereum VM which runs contracts, looking to solve for maximum profit. Contracts were solved in descending order of value, also looking for re-use of elements (since if we can exploit re-used code we can pop many contracts at once). We are looking for contracts where it is possible to craft inputs which will result in the contract delivering more value to us than we need to put into it - that is, profit.
I want you, gentle reader, to imagine everything that can go wrong with a live demo: slides going wonky, code not running, terminals misbehaving. Now imagine you run out of power on stage.
Now imagine being a good enough presenter that you can pull it off. That’s Caleb.
The code works, but he’s not about to hand it out to every man and his dog (“look at the state of you”) as a result of what he calls the tickles of the “stump that is my conscience.”
Feeding the Beast: Network Insurgency
“I don’t want to talk to you about having done red teams, I want to talk to you about how you think about doing red teams.”
People complain all the time about not being able to find people with the right mindset to be red teamers - but is it possible to train people to become good red teamers, and specifically to think like red teamers? Sin did some research and found that there is already material in the wild.
What is red teaming? It’s threat modelling and adversary simulation - or at least those are the parts that Sin wants to talk about. The ideal outcome is for blue teams to make better decisions.
Why use F3EAD (training/development, team co-ordination, an enabling framework)? It’s not a toolset or what have you. It’s a process for helping you help your folks.
Social engineering is recognised as a big part of red teaming, but only as the red team acting on others; this is a talk about socially engineering your red team so they can do better.
F3EAD is a US SOF methodology used for asymmetric warfare, blending intelligence and operations functions: “Find, Fix, Finish, Exploit, Analyse, Disseminate.”
An entirely fictional example about an entirely fictional mining company executive visiting an entirely fictional but extremely insecure country.
- Find: Although the exec had a locked-down media profile, his family leaked a lot of information about current locations and future travel plans. Also, LinkedIn profiles make attackers’ jobs super-easy.
- Fix: Visiting the remote mining complex to understand what the location looked like on the ground. A local cleaning company did all the cleaning work across the site; moreover, contractors were constantly coming and going, and people assumed everyone they met was a contractor. The receptionist was over dealing with passes and visitor logs, and would just wave people through via a secure switch on the desk.
- In spite of Wifi networks being forbidden, one existed anyway. Unsecured. With a lot of confidential IP on it.
- Cleaners were able to pick up the security plan along with the rubbish.
- Exploit: Once the wifi had been found, it yielded developer portals and confidential documents.
This iterates around to perform more investigation and more rounds of find/fix/finish. By working with off-site analysts, the on-the-ground team were able to send material back home and get feedback on what to go after next.
metl noted Caleb had some elements of his talk which were not OK, and the Crüe are addressing that.
A slightly shorter lunch is in order, a result of the need to fix up the sound system earlier in the day. Well worth it in order to hear a damn thing.