Kawaiicon 3 - Day 2

Luring in the potential late starters with a demo of a reconstructed Enigma machine seems like a good gamble; with a little admin to open things - pick up your merch! - we swing into the program. There is a certain amount of stick for Bogdan, who is heading to Sydney to see Oasis this weekend.

The Enigma Machine—live demo’d and explained

Mike Pritchard

Metl makes a special guest appearance to introduce the Enigma machine talk; “we invented computers to hack this machine”. When metl found bad crypto he used to make the joke that “the last person who got hacked by bad crypto was Hitler”; while this is probably an exaggeration, actual bad crypto is relatively rare compared to bad implementations of crypto. Metl is thrilled to see Mike’s collection of devices, and to have Mike here to speak about them.

Mike comes on: “this is a bit terrifying.” Mike has been building this collection because it resonates with him; he started by calling it a cyber-espionage museum, but his son convinced him that it was a terrible name, because normal people will not understand it—but that also sparked a thought for Mike: how hard it is to explain the adversarial world of security to people who aren’t used to it. Many of the industries “that we protect” have museums—of banking, of electricity, of cars, whatever—but there is no museum of spying and security. Without that the history and culture will be lost; Mike wants to build a museum to preserve those things. He would like you to follow his museum accounts so people can simply be aware it exists.

The collection has a number of areas: Cultural, including propaganda and writing on cyphers; the Cold War, a time in history when spying was the centre of conflict; Tradecraft and the associated tools, talking about how spies operated and some of the tools of the pre-digital era; (Birth of) Cyber.

Prior to Enigma, the Confederacy had an Enigma-like solution, but it was so poorly managed that it was constantly breached. World War I saw those same ideas carried forward, with early multi-disk cylinders and the Luganne Sphinx. The sphinx motif, Mike notes, often appears in this context due to the role of Champollion in cracking hieroglyphs and the resulting explosion of mathematical analysis of language in France.

Toward the end of WWI there was a burst of parallel invention by Edward Hebern, a onetime horse thief, in the US; Arvid Damm in Sweden; and Arthur Scherbius in Germany. All of them came up with electro-mechanical cipher machines; Mike mentions that “adding electricity to things” was popular quackery towards the end of the 19th century, but all three of these were legitimate inventions. Mike also notes that while Scherbius is credited with the Enigma, the actual idea came from two Dutch naval officers who were prevented from discussing it during the war. When they were finally allowed to develop it, their patents had been claimed by Scherbius.

Early versions of the Enigma integrated a printer, and these were never really reliable. When the printer was finally discarded, significant progress was made on reliability, and multiple iterations were produced throughout the 20s and 30s, with commercial and military editions.

Each rotor in the Enigma has electrical contacts which carry signals from one side of the rotor to the other; the stack of rotors, and how the wiring maps shift as the rotors are adjusted, ensure that a letter on the keyboard will never encrypt to itself, only to some other letter. So no matter how often you strike the letter ‘o’, for example, the wiring and the electrical reflector in the signal path will ensure that you never ever get a letter ‘o’ back. Each of the three rotors is numbered, and the rotor arrangement in the machine drives the encryption. Each rotor’s wiring is unique to the number of the rotor, but all rotors with the same number have the same wiring. So if you seize the rotor numbered “2” from a German submarine, you now know the wiring scheme for every number 2 rotor in the world.
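
To make that property concrete, here’s a toy sketch in Python of the rotor-and-reflector signal path. The wirings are invented (not the historical rotor tables), and the plugboard and the real double-stepping quirk are left out, but the key behaviour survives: because the reflector pairs letters and never maps a letter to itself, no keypress can ever produce its own letter.

```python
import string

ALPHA = string.ascii_uppercase

# Toy rotor wirings: arbitrary permutations of A-Z, NOT the historical tables.
ROTORS = {
    1: "QWERTYUIOPASDFGHJKLZXCVBNM",
    2: "NMLKJHGFDSAPOIUYTREWQZXCVB",
    3: "DEFGHIJKLMNOPQRSTUVWXYZABC",
}
# The reflector pairs A<->Z, B<->Y, ...: an involution with no fixed points.
REFLECTOR = ALPHA[::-1]

def through(wiring, offset, letter, reverse=False):
    """Pass a letter through one rotor standing at a given rotational offset."""
    i = (ALPHA.index(letter) + offset) % 26
    i = wiring.index(ALPHA[i]) if reverse else ALPHA.index(wiring[i])
    return ALPHA[(i - offset) % 26]

def encipher(text, order, offsets):
    offsets, out = list(offsets), []
    for letter in text:
        # Step the rightmost rotor before each keypress, carrying like an
        # odometer (the historical double-step anomaly is ignored here).
        j = len(offsets) - 1
        offsets[j] = (offsets[j] + 1) % 26
        while offsets[j] == 0 and j > 0:
            j -= 1
            offsets[j] = (offsets[j] + 1) % 26
        # Right to left through the rotors, bounce off the reflector,
        # then back left to right through the inverse wirings.
        for name, off in zip(reversed(order), reversed(offsets)):
            letter = through(ROTORS[name], off, letter)
        letter = REFLECTOR[ALPHA.index(letter)]
        for name, off in zip(order, offsets):
            letter = through(ROTORS[name], off, letter, reverse=True)
        out.append(letter)
    return "".join(out)

ct = encipher("OOOOOOOOOO", (1, 2, 3), (0, 0, 0))
assert "O" not in ct                                        # 'O' never comes back as 'O'
assert encipher(ct, (1, 2, 3), (0, 0, 0)) == "OOOOOOOOOO"   # same settings decrypt
```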

The configuration of the machines is driven by a documented process: every day the rotors are changed, with each day assigned a set of rotors, organised in a consistent way. So on the 23rd of October 1944, a specific three rotors were inserted, each with a specified offset. On top of that, there is a set of plugs which add further modification to the signal from the rotors; the cables, up to ten of them, are also given a different combination each day.

These instruction sheets were closely guarded—they were not supposed to be moved about on aircraft, for example, due to the risk of a plane being shot down and the code books being captured.

As you type a message, each letter causes current to flow and lights up a letter which is guaranteed to be different from the input; to decrypt these messages, you need to know the date and time they were generated, along with the initial rotor position. The Germans were so confident in the system’s complexity that they believed the initial position could be shared without risk. That turned out to be wrong, of course. Each message also had its own secret: a three-letter message key, a unique component independent of the common wiring and daily rotor settings. The key is decrypted using the daily settings and can then be applied to the actual message: the operators change the rotor settings to that three-letter per-message key, not the daily setting. The wiring remains the same.
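
Continuing the toy sketch above (this reuses encipher and ALPHA from that block; every setting here is made up), the per-message procedure looks roughly like this:

```python
DAILY_ORDER, DAILY_OFFSETS = (1, 2, 3), (4, 11, 19)   # from the day's key sheet (invented)

# Sender: pick a random message key, send it enciphered at the daily setting,
# then encipher the body with the rotors set to the message key instead.
msg_key = "WFB"
indicator = encipher(msg_key, DAILY_ORDER, DAILY_OFFSETS)
body = encipher("ATTACKATDAWN", DAILY_ORDER, tuple(ALPHA.index(c) for c in msg_key))

# Receiver: recover the message key at the daily setting (the machine is
# self-reciprocal), then re-set the rotors to that key and decrypt the body.
recovered = encipher(indicator, DAILY_ORDER, DAILY_OFFSETS)
assert recovered == msg_key
print(encipher(body, DAILY_ORDER, tuple(ALPHA.index(c) for c in recovered)))
```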

That’s an important point: the metadata, the enciphered message key, uses the common daily scheme; the message itself requires decryption with the key recovered from that metadata. Mike notes, though, that the metadata alone became incredibly important: the analysis of who was sending messages to whom, and when, became a whole area of activity in and of itself during the war.

The famous bombe machine, on the other hand, used the daily settings to search all the possible three-letter per-message keys for a given day, allowing decryption of the messages sent that day.
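
Again reusing the toy machine and the body ciphertext from the previous sketch: with the daily rotor order known, there are only 26³ = 17,576 possible message keys, so even a crude crib-based search is feasible. This is nothing like the bombe’s actual electromechanical crib-testing, just the same end goal:

```python
from itertools import product

def candidate_keys(ciphertext, crib, order=(1, 2, 3)):
    """Try every possible 3-letter start position and keep the ones whose
    decryption begins with the expected crib text."""
    for offs in product(range(26), repeat=3):
        if encipher(ciphertext, order, offs).startswith(crib):
            yield offs

hits = list(candidate_keys(body, "ATTACK"))   # 'body' from the sketch above
print(hits)  # the true message-key offsets, plus any false positives
```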

Enigma’s post-war influence was pervasive, with both US and USSR intelligence orgs building on the same basic electromechanical principles while looking for ways to work around the limitations. Over the course of the Cold War, one of the key problems to solve was the key material: having static, widely distributed key material was the weakness of the Enigma. Mike describes the development of the numbers station: an agent receives a one-time pad from a dead drop, which is unique to them. They can then tune into a radio station that is just a repeating set of numbers read aloud - the numbers station. Key material for the Cold War encryption machine can then be transmitted as four-number groups and decrypted with the one-time pad; the pad is burned, and the key material is entered into the agent’s machine. As well as being unique to the agent, this has the benefit of being deniable: you know that numbers are being broadcast, but you have no way of knowing who, if anyone, is listening to those key-material broadcasts.
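
A sketch of that last step with invented digits: the broadcast is meaningless noise on its own, and only the agent holding the matching pad page can strip the pad off.

```python
broadcast = [7, 3, 1, 9,  0, 2, 8, 4]   # digit groups read over the air (invented)
pad_page  = [4, 1, 9, 6,  5, 9, 3, 1]   # the agent's one-time pad page (invented)

# Subtract the pad digit-by-digit, modulo 10, to recover the key material;
# the pad page is then burned, so the broadcast can never be replayed against it.
key_material = [(b - p) % 10 for b, p in zip(broadcast, pad_page)]
print(key_material)   # -> digits the agent keys into their cipher machine
```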

This was a really good talk - Mike is one of those speakers who has the knack of unrolling a complex topic in a way that makes it feel very comprehensible.

Reversing Bytecode into Bounties: Uncovering Vulnerabilities in Jira and Confluence Plugins

Giuliana & Jamal

Giuliana and Jamal both work in security at Atlassian. Giuliana notes that the presentation will be covering the self-hosted version (DC - data centre edition). The DC plugins are downloadable Java applications which run alongside the main product, sharing a database and integrating with the APIs. Cloud editions - the SaaS versions - have a different architecture for plugins.

Because customers have had their installations compromised, Atlassian have had to do more analysis on plugins; at the end of the day, customers hold Atlassian responsible when things go wrong. Giuliana notes that there is a bug bounty program, and that if she’d been doing this work for bounties she’d have pulled down $80k a year. Regrettably, that would still take 25 years to buy a shit-tier house in Sydney!

“X-ray Test Management for Jira” and “Script Management” were two of the first candidates, because their scope is “RCE as a service”. Because the plugins are jar files, you can download them and pull them apart by unzipping and decompiling. From there, Jamal can use Atlassian’s standard white-box security audit tools.
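
A minimal sketch of that workflow; the plugin filename and the choice of the CFR decompiler are my assumptions, not necessarily the tools Giuliana and Jamal use:

```python
import subprocess
import zipfile

PLUGIN = "some-plugin.jar"   # hypothetical plugin downloaded from the marketplace

# A jar is just a zip archive of compiled .class files.
with zipfile.ZipFile(PLUGIN) as jar:
    classes = [name for name in jar.namelist() if name.endswith(".class")]
print(f"{len(classes)} classes to review")

# Decompile back to (approximate) Java source, here with CFR; any Java
# decompiler works, and different ones produce different output.
subprocess.run(["java", "-jar", "cfr.jar", PLUGIN, "--outputdir", "src"], check=True)
```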

Jamal notes that different decompilers will get you different source output, which can give you different findings from the code analysis. One tool Giuliana is fond of is semgrep, an open source tool that gives you a language-aware grep to hunt for suspicious code. In this case, one of the things that Giuliana is looking for is XXE: embedding references to external entities in XML that will let an attacker read sensitive files.
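
The classic XXE payload looks something like the one below. This sketch uses Python’s lxml rather than the Java XML parsers the plugins actually use, and whether the entity actually resolves depends on the parser version and configuration, but the shape of the attack is the same:

```python
from lxml import etree

# A document that declares an external entity pointing at a local file,
# then references it in the body. A parser that resolves external
# entities will splice the file's contents into the parsed document.
payload = b"""<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/hostname">]>
<data>&xxe;</data>"""

# Depending on the lxml version and its defaults, this may print the file:
doc = etree.fromstring(payload)
print(doc.text)

# The fix is to refuse to resolve entities at all:
hardened = etree.XMLParser(resolve_entities=False, no_network=True)
safe = etree.fromstring(payload, parser=hardened)
```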

Giuliana then walks us through, using her and Jamal’s sample spreadsheet plugin, how the code and the standard XML config file can work together in a plugin to create these vulnerabilities.

As an aside, Giuliana notes that there are authorisation annotations available in the plugin documentation and samples, so if you see those annotations in a plugin’s code, it’s probably reasonably well-developed. If, on the other hand, they’re largely absent, that’s probably a bad smell for security.

Jamal asks the question, “this wouldn’t happen in the real world, right? This is just our deliberately broken sample app? Right?”

“Anyway,” Giuliana cuts in, “we have scanners in the marketplace uploads to try to find obvious problems, vulnerabilities in dependencies, malware. Or you could just move to the cloud where we make more money! But seriously, the cloud systems have better sandboxing and controls on the cloud apps than the DC plugins do.”

The bug bounty program is currently private, but you can sign up and there’s not a lot of competition at the moment, so you should give it a try.

let’s make malware but it might get caught so the malware gets worse

alex

alex helps organise purplecon, which is different from many security conferences, although not so much from Kawaiicon. alex has worked at Atlassian, but currently has their own company where they help hackers communicate better. “But enough about work”, let’s talk about malware. Have you ever written software so bad that it could be considered malware?

(I would suggest that the answer would be, “who amongst us has not?”)

So what is malware? “A Steam game where you download games without downloading malware”. But the short version is that it’s software that lets you remotely control other people’s computers: a common example might be to get around MFA or VPN controls by getting a browser onto someone else’s computer, so you can piggyback on their access to an account or even a whole other network.

Could we use ssh? No-one has publicly routable ssh on their laptop. “Actually some of you little freaks probably do and I do not want to hear about it”. Instead, we could deploy malware that connects back to another server, controlled by us (usually called a C2 server), that we can proxy commands through. You don’t even have to write your own! You can just go to GitHub and download it! There are lots of malware kits and tools on GitHub that claim to be for red teams. Easy, right?

But that’s the easy part; isn’t the hard part getting onto the computer? Well, alex says “social engineering isn’t that hard, actually” and flashes up a GitHub repo that asks you to run shell commands on your laptop. Which, sigh, not wrong. But! It can be difficult to get it to persist. After all, if you’ve got people to run malware in a shell, it stops when the terminal closes—unless you use & disown, in which case it stays around forever!

Well, until they restart their laptop. Perhaps you can set it to start up from somewhere. For example, nvm adds itself to the .zprofile file. So if you were to piggyback on the script it calls, nvm.sh, well, it’s gonna start every time a shell opens, right?

But what happens when the cyber police show up? What happens if all your malware copies are talking to your C2 server? That’s gonna be real obvious, right? Well, that’s where you have a pile of reverse proxies hosted somewhere. You could put them on AWS but they might rat you out. You might have to look for more specialised hosting services for your gang of proxies.

If you’re trying to avoid being detected in the first place, though, you only deploy it to the subset of people you want to target. You can use a staged loader, which downloads selectively: the loader is hosted publicly, but the malware itself only gets pulled from a dedicated malware hosting environment. Some people go so far as to rely on magic user agents to get the malware, while everyone else gets a different payload.

Of course, once you start thinking about this, there are so many other problems. Should it be .zprofile or a VS Code extension? So what’s the solution? To touch grass!

From Plug to Pwn

Kylie McDevitt

Kylie has been coming to Kiwicon/Kawaiicon for about ten years, but this is the first time she’s come on stage. Kylie’s day job is in firmware and embedded systems security, and she was able to compete in Pwn2Own, a premier exploit competition; vendors submit real devices, and competitors do live exploitation on stage, with timers and an audience. Kylie and her team had exploits for three devices, and they decided to have a go.

Kylie notes that the reality of competing has some fishhooks. One was that the official submission system simply ignored them; it wasn’t until they made a contact via a social media catchup that they could get an entry. It’s also a race with vendors—many of them seem to patch in the lead-up, and some researchers wonder if the vendors watch crash dumps from the intense research activity in the period before the competition. You should be aware, as well, that the draw order matters a huge amount. If you’re the first person to exploit a device, you get a full prize; if you go after the first person, even with a really novel, high-quality exploit, you will get a much smaller prize. Since they were drawn 11th, they only got very small payouts even for their successes.

Kylie is focusing, for this talk, on the Sonos Era 300 smart speaker; its firmware was very difficult to access, which put the team in a pool with fewer competitors. It was the hardest device, with the biggest payoff, and the best-secured firmware:

  • Encrypted flash.
  • Secure boot enabled.
  • Keys in OTP with fuses, so they can’t be dumped with the firmware.

Sonos have been in Pwn2Own for a while, so the team went to read about past breaches: a common theme was that getting the firmware is the challenge; the software itself has a lot of functionality, and so a lot of vulnerability. There were a lot of tricks for pulling the firmware, so Kylie’s team decided to start with those techniques and adapt them as needed.

The device has a read-only UART for viewing the boot logs; the flash chip then had a set of very fine wires attached to its pins for the hacking. For safety’s sake, they bypassed the onboard power supply with a low-voltage external supply to reduce the risk of exposure to mains voltage.

From here, success was a matter of chaining together a set of small bugs: a u-boot misconfiguration allowed control over the boot, which led to a kernel offset bug, which led to a secure boot loader bug, which then allowed the keys to be exposed during the boot process. From there, the team could get the decrypted firmware. One part of the exploit relies on an audio codec bug, so part of the exploit is to play audio to “groom the heap”.

Kylie starts the demo, noting that with only three minutes this is actually more challenging than Pwn2Own time-wise. We listen to the beats with bated breath and… it works! Shell achieved! From here you could install malware, use the microphone to listen to conversations, and so on.

(The song is, of course, Together Forever.)

What Permaculture Design Taught Me About Security

Laura Wrattern

What is permaculture? “Perma” is permanent; “culture” could cover many cultures, but in this context is focused mainly on agriculture, though it extends to any system that involves people. The term and the early versions of the practice were developed in Tasmania in the 1970s.

If permaculture covers systems with people then we can see them in terms of housing, gardens, or farms, but we can also see how there might be parallels to security systems.

There are three core ethics for permaculture design:

  • Earth care - caring for the earth.
  • People care - doing the least harm to others.
  • Fair share - share skills and knowledge to allow more people to thrive, and to decide what “enough” is.

There are twelve permaculture principles derived from these. Is that a lot? Maybe, but you don’t need them all for everything. Laura wants to focus on three, illustrating each with examples from her garden:

Observe & Interact: Assess things before taking action. Make sure that you understand how the system works now, before you begin changing it. This is also the basis of good science: you need to be able to understand whether your changes have had the intended effects. In the case of the garden, Laura spent nine months observing her new section before selecting the right place for her chicken coop. The result is a location for the coop that suits both the chickens’ needs and Laura’s.

Integrate rather than Segregate: The neighbours liked Laura’s chickens so much that they wanted to get some. They were going to build their own coop, but instead they put another door in the existing coop, making room for the neighbours’ chickens.

Use Small and Slow Solutions: This is analogous to the idea of the MVP: build the simplest thing that will solve one facet of a problem, rather than trying to solve everything all at once.

Laura talks about adding SSO to Canva:

Observe & Interact: Laura looked at the current design and approach to the problem that was underway before she was invited to be involved. That knowledge built a better relationship with the engineers, so that when she provided feedback, it was material to the existing work.

Integrate rather than Segregate: Embedding in the team is a much better approach than a last-minute review. Joining early and working through the whole life of the work leads to a much higher-quality outcome.

Use Small and Slow Solutions: When they started adding mitigations to solve problems with the first design, they kept making it more complicated. Laura and the engineering team decided to go back and look at whether alternative designs could get the same results with less complexity—and they could.

Laura does have one final ask: can we settle a dispute between Laura and Jamal? Should we get goats?

Disregard Buzzwords: Effectively Building Secure Stuff

Adrian Hayes

Adrian works for Pulse Security, and has spent about fifteen years doing penetration testing, and he would like to share his reckons on how to build secure stuff off the back of that; as a pen testing company, they have a lot of sensitive stuff, and building their own production network was a chance to test some of their own recommendations.

There are three types of workload: external-facing workloads; internal-facing workloads; and ephemeral workloads.

There are three concepts:

  • Security Controls: concrete and fuzzy. Concrete controls are mature, predictable, easy to understand, and hard to get wrong (for example, packet filtering on a firewall); fuzzy controls are novel, new, and hard to understand or reason about (for example, machine learning, or security awareness training). Both are useful, but it’s important to understand their strengths and weaknesses—and you can’t afford to build everything.
  • Reasonability: not “is it reasonable” but “can you reason about it?”. You can’t secure what you don’t understand. Now, Adrian allows that this might be slightly hyperbolic, but he doesn’t think it’s wrong. Are the things we deploy things that we understand? If we put something new in, do we put it in everywhere, which might be a big change, or only in some places, which isolates our new workloads but creates a complex landscape of carve-outs?
  • Observability: How do we know what’s going on? We can look at a workload in isolation to understand what’s going on with a new thing, or even an old thing, in order to be confident we know what it’s doing.

Another concept Adrian wants to discuss is attack surface management: the bigger the scope of the pen test, the more problems you’ll find. In the real world, what does that mean? Well, if you can’t touch something, you can’t hack it, so the fewer things you expose, the harder it is to damage them. He offers an example of working on an audit of an under-construction ship, where the guest network had components in common with the ship management network, which allowed him to discover paths to the network that controls, for example, the engines. Ideally it would be impossible to get from one to the other: either the networks have nothing in common, or egress filtering stops the traffic from ever getting out.

My immediate reaction was that this was a good talk, but the more I think about it, the more I like it. One of the things that I don’t much like around the offensive side of the security world is that—on the whole—it’s long on people throwing rocks about how broken everything is, and short on anything useful about fixing things or what you can do to protect yourself, or at least mitigate risks. So for me it’s really gratifying to see someone from an offensive background share their suggestions in a real, practical way to do things; moreover, Adrian’s advice is something that would be very usable for someone who doesn’t have a huge team to build and support things. It would, if I were a sysadmin at a small company wondering how I’m supposed to keep anything safe, feel tremendously encouraging to see that a skilled security professional is recommending some concrete tools and techniques that are fairly straightforward: a few network zones, easy-to-grasp tooling for segregation and enforcement, and so on.

Imagine how you’d feel if you saw the suggestion that you should have a dozen zones just for your home, and do research on things you put in them to be properly responsible, as one talk stated—you’d feel hopeless, defeated before you even started. But Adrian’s talk would give you the sense that good things are possible for everyone. It’s a hands-on talk that I think went well with the higher-concept talks by Bec, Elle, and Laura: doing what we can to make things better, rather than being overwhelmed or overbuilding.

That’s All Folks

I’m afraid at this point I was conferenced out. I took a break to hang out with a new colleague and a friend, then came back for the thank-yous to sponsors, the prize giving for the contests, the well-deserved win for Bullshit Jobs in Infosec as the best talk, and the hard-earned applause for the organisers and volunteers. A little time at the drinks rounded things out before my capacity to cope with many people sent me home.