# Kiwicon 11 Day 2 Morning

Last night was rounded out nicely by a visit to [the Library](https://thelibrary.co.nz/), which offered a civilised volume, a nice mocktail, and some delicious cheeses. What more could a man ask for? Well, I guess I can think of several things, starting with the winning Lotto numbers, but they’re not on offer. After trundling off to bed at a sensible hour, I’m ready to leap once more into the breach, dear friends.

The day commences with a reading from Machines of Loving Grace and some administrivia, including mentioning the sponsors. Which metl probably should have done yesterday.

## Moving Fast and Securing Things

Kelly Ann

aka “How to appsec without being a Fascist.”

Kelly is a product security engineer at Slack. She used to be an eco-pirate with Sea Shepherd.

### Slack

“Slack is a collaboration app for teams”; it’s more than a chat app for many people, it’s the core of some organisations. It grew quickly, and was understaffed in the security area. People use Slack a lot (8 million connected users) and a breach would have serious ramifications. There are a number of things to secure, too, and about 700 engineers in total - 400 of them app engineers - but only 30 security engineers, and just 5 product security engineers.

CI/CD is the culture, and pushes to production are easy; deploys happen several times daily. At peak, deploys to prod happened 100-150 times a day - you shouldn’t do that, and Slack don’t any more, either. But the SDL (secure development) process was designed to cope with it. Moreover, there are no roadblocks from security.

Devs and security shouldn’t be in opposition, but all too often are.

### The Security Development Lifecycle

The SDL is a set of processes that guide developers through security concerns; the phrase originated with Microsoft, and scares developers because it sounds too painful. The SDL doesn’t need to be a boot stamping on your face, forever.

Slack learned from the Secure Windows Initiative, adapting the best parts of the SDL to a fast-moving org. The key lesson was to teach devs how to think about security and incorporate it into day-to-day development. So how does this work in practice?

• Transparency: A key element of how things work at Slack, using Slack, is to use product-centered channels. The channel is its own documentation and preserves the context.
• Trust the developers: Be the bungee cord you wish to see in the world! You need to set the engineers up for success.
• Empathy: Set people up for success. Hold security office hours! Foster mutual trust. Don’t create adversarial, punitive relationships.
• Dedicated, open consulting channels and triage conversations.
• Keep discussions in the open.
• Training for new engineers and ongoing training. A “Hacktober CTF”.

### Self-Service SDL

The goals of /go SDL are:

• Make security approachable and easy.
• The process needs to be easy.
• We need to minimise interruptions for both developers and the security team.
• Two Jira tickets are generated; one for the devs and one for the process.
• The command pops up a form capturing information about the project.
• A description, code repo, Jira epic id, and so on.
• From there the users are guided through a risk assessment that minimises the number of questions that need to be answered.
• The questions start with the high-risk ones and work down to the low-risk ones.
• The questionnaire captures the characteristics of the project: an app? An integration? Is there new infra? Is there PII?
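The triage flow described above can be sketched as a small routine. This is a hypothetical illustration, not goSDL’s actual implementation; the question IDs, risk weights, and tier labels are all invented:

```javascript
// Invented questions, ordered from highest to lowest risk, as the talk
// describes. A hit on a high-risk question lets us stop asking early.
const questions = [
  { id: "handlesPII", text: "Does the project handle PII?", risk: 3 },
  { id: "newInfra", text: "Does it introduce new infrastructure?", risk: 2 },
  { id: "isIntegration", text: "Is it a third-party integration?", risk: 1 },
];

// Map the answers to a risk tier, short-circuiting once the highest tier
// is reached so developers answer as few questions as possible.
function assessRisk(answers) {
  let tier = 0;
  for (const q of questions) {
    if (answers[q.id]) tier = Math.max(tier, q.risk);
    if (tier === 3) break; // no need to ask the lower-risk questions
  }
  return ["minimal", "low", "medium", "high"][tier];
}
```

A project that touches PII would come out as `high` here and get the fuller checklist; one that answers no to everything stays `minimal` and moves on with almost no friction.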

Checklists save lives. No, really.

• Humans are really bad at their own routines. We forget and skip steps.
• The Checklist Manifesto is recommended reading. It started in the aviation industry; performing pre-flight checklists reduces crashes.
• The WHO created a system of checklists for surgery.
• Doctors who use the checklist have a 50% lower chance of killing their patient.
• Perhaps we can learn something from this.
• Once the questionnaire is answered, a security checklist is created and a link put in the project Slack channel.
• The triage channel exists so devs can ask questions relating to the checklist.
• As devs work through the checklist, which is tailored to the thing being created:
• Are you using the vault?
• Are you using the built-by-slack infra?
• If the developers have finished the checklist, they’re done. Their part of the process is over.
• The security team can then perform a security assessment: the team have everything they need as a result of the prior process; code, documentation, environments, and so on.
• Feature/product/project channels allow conversations to happen, but also to be retained. Security folks are summoned with an @ and people are in the habit of reading the backscroll to understand the context.
• There’s an ongoing process of collecting feedback to improve the process.
• Feedback can be formal or informal.
• Some is unsolicited via e.g. commit messages.
• The bug bounty is used as a source of feedback.
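The tailored-checklist step might look something like this - again purely illustrative, with invented item wording rather than anything from Slack’s real tool:

```javascript
// Build a checklist tailored to the project's characteristics, as captured
// by the questionnaire. Item wording is invented for illustration.
function buildChecklist(project) {
  const items = ["Walk through the threat model with the security team"];
  if (project.handlesPII) {
    items.push("PII is encrypted at rest and access to it is logged");
  }
  if (project.storesSecrets) {
    items.push("Secrets live in the vault, not in code or config");
  }
  if (project.newInfra) {
    items.push("New infra uses the standard built-by-Slack baseline");
  }
  return items;
}
```

The point of the design is that a low-risk project sees only the baseline item, while higher-risk answers grow the list - the checklist is shaped by the answers rather than being one-size-fits-all.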

### What’s Next

• Open source the SDL tool: done! You can find it [on GitHub](https://github.com/slackhq/goSDL).
• Do more early on in the architecture team.
• Rewrite as a Slack integration rather than a web application.

## Cyber Defence Exercises - how to make them cool?

Raimo Peterson

• Works with the NATO CCDCOE - the Co-operative Cyber Defence Centre of Excellence.
• Table Top Exercises (TTX) vs. Technical Exercises.
• Military and community driven. The community includes NATO countries, as well as banks and similar organisations.

“We don’t have right answers and wrong answers. Everything that works is right, and everything that doesn’t work is wrong. This is the difference between classrooms and exercises.”

Locked Shields 2018 - the world’s largest live-fire cyber defence exercise. The scenario is a major attack on a fictional nation (Berylia): drones flying off course, power grid disruptions, water plant contamination, thousands of virtual systems and attacks, 20 nations working together. Locked Shields has been run since 2010; Prime Ministers and Defence Ministers participate in these exercises. Over 1,000 people participate.

Key success factors:

• International.
• Full scale & technical.
• Live fire. This is critical.
• Red vs Blue; red only attacks, blue only defends.
• Game based; people are put into fictional roles.
• Innovative. Teams can test out new methods if they want, including ones that they can’t test elsewhere.
• A Cyber Range Environment.
• Defense Oriented.
• Friendly competition - absolutely critical. Everyone wants to win. Everyone wants to be better than the neighbour.

• Blue team: the defenders.
• Green team: builds out the infra and the scenarios for the exercise. Six months of work. Volunteers from all over the world.
• Red team: must conduct attacks that test everyone equally and thoroughly.
• White team: manages the exercise, ensuring it remains a good learning exercise.
• Yellow team: situational awareness.

Visualisations:

• For the blue teams the visualisations aren’t so important.
• But for the observers they’re crucial.
• There is a tremendous amount of work that has gone into visualisations: 3D printing, blinkenlights, drones, and so on.
• The visualisations are critical for buy-in from observers: politicians, senior military staff, and so on.
• However, you must be careful they don’t become counter-productive for learning for the blue teams.

Business IT vs Special Systems; there are challenges:

• Standard systems like Windows/Linux servers are easy to virtualise and scale as you need.
• But many highly specialised systems are not amenable to this approach.
• e.g. substation controller systems.
• Drone controllers.
• Fighter aircraft.

## Getting Shells from JavaScript

Offensive JavaScript techniques for red teamers

Dylan Ayrey and Christian ‘xntrik’ Frichot

JavaScript (and the DOM) can do a lot of stuff, with more and more features arriving all the time, which makes browsers a wonderful target. Actions can be performed cross-origin, in spite of the same origin policy (SOP).

Back in the 2000s it was common to be able to use a single SQLi-type vulnerability to dump databases and so on. You only needed to find one terrible vulnerability and you won everything; in the modern day we can use malware frameworks to find collections of small vulnerabilities, and then move laterally to escalate.

• Library of RCE exploits.
• Typically used post-exploitation.
• Some exploits go over HTTP.
• Some are constrained by CSRF but red teamers don’t care because by then they own the endpoint.

So what is the status quo? “The Lobster security fallacy” - hard on the outside, soft on the inside. So really, getting a foothold is the most important step; if you can get inside the perimeter with, for example, a stolen password, you’ve done the hard work, and lateral movement takes care of the rest.

So client-side attacks can feel kinda lame, but BeEF turned JS attacks on their head. 30% of people will click a link in their email; only 12% will open an attachment. What damage can clicking on a link cause? Well, browser exploits are getting less common and harder to pull off; but browsers are everywhere. If you know how to do interesting things with JS and the DOM, you can still do plenty of damage. Most people open their browser and leave many tabs open for a long time, and the browser will connect to local networks, the Internet, the loopback interface… it’s a bridge to everywhere.

• Bad website forces user to sweep internal network.
• Exploit what you find.
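The “sweep” relies on a timing side-channel: a request to a live host tends to fail fast (connection refused, or blocked by the browser), while a dead address hangs until the connect timeout. A rough sketch of the classification step, with invented thresholds:

```javascript
// Classify a candidate IP by how long its probe request took to settle.
// Thresholds are invented; real tools calibrate against known-dead hosts.
function classifyByTiming(elapsedMs, timeoutMs = 2000) {
  if (elapsedMs >= timeoutMs) return "no-host"; // hit the connect timeout
  if (elapsedMs < 500) return "likely-alive"; // fast refusal or response
  return "unknown";
}

// Fold per-IP timings (e.g. performance.now() deltas measured around a
// fetch() to each candidate) into a map of verdicts.
function sweepResults(timings) {
  return Object.fromEntries(
    Object.entries(timings).map(([ip, ms]) => [ip, classifyByTiming(ms)])
  );
}
```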

So how do we do this?

• First we need to get the local IP via JS/the DOM.
• WebRTC makes this easy.
• What does the internal network look like?
• Network scanners in JavaScript are very, very common.
• You can use ping times to start mapping the network.
• From there you escalate to port scanning.
• From there you can start exploiting systems.
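The WebRTC trick works because an `RTCPeerConnection`’s SDP offer historically included ICE host candidates carrying the client’s private address (newer browsers now mask these behind mDNS `.local` names). The browser side is just `createDataChannel` plus `createOffer`; the sketch below shows only the parsing step, run against a hard-coded sample candidate line of the shape browsers used to emit:

```javascript
// Pull private IPv4 addresses out of ICE "host" candidate lines in an SDP
// blob. In a browser, the SDP would come from RTCPeerConnection.createOffer().
function extractCandidateIPs(sdp) {
  const ips = new Set();
  const re = /^a=candidate:.* (\d{1,3}(?:\.\d{1,3}){3}) \d+ typ host/gm;
  let match;
  while ((match = re.exec(sdp)) !== null) ips.add(match[1]);
  return [...ips];
}

// Sample candidate line, hard-coded for illustration.
const sampleSdp =
  "a=candidate:842163049 1 udp 1677729535 192.168.1.17 49203 typ host generation 0";
```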

• You could just blindly run a bunch of payloads against everything.
• SOP should stop this. You aren’t supposed to have a client fetch data from site B at the behest of site A.
• SOP doesn’t restrict “simple” HTTP requests, only complex ones - including requests made from JavaScript.
• The thing is, SOP doesn’t really stop anything.
• CSRF.
• Typically a simple request - for example, fetching an image from a CDN.
• Unfortunately the sendBeacon feature allowed you to make complex requests simple. It’s insecure by default.
• CSRF can bypass NAT.
• Other users are insecure.
• Your internal servers are insecure.
• Blind XSS attacks are still possible.
• You can do anything the user can do on a page.
• Steal data.
• And it’s still at the top of HackerOne’s triage rankings.
• It’s incredibly common. It’s still more than double the next highest rank.
• Imagine, for example, reflected XSS on Jenkins.
• rXSS can lead directly to RCE.
• BeEF has its limitations.
• So, recon.
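The “simple request” distinction in the bullets above comes from the Fetch/CORS spec: only certain methods and headers avoid a preflight, and that is exactly the space CSRF lives in. A sketch of that test (a simplification of the real spec rules):

```javascript
// Simplified version of the CORS "simple request" check: these requests go
// out cross-origin without a preflight, so nothing asks permission first.
const SAFE_METHODS = ["GET", "HEAD", "POST"];
const SAFE_CONTENT_TYPES = [
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
];
const SAFE_HEADERS = ["accept", "accept-language", "content-language"];

function isSimpleRequest(method, headers = {}) {
  if (!SAFE_METHODS.includes(method.toUpperCase())) return false;
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    if (lower === "content-type") {
      // Only the bare safelisted MIME types avoid a preflight.
      if (!SAFE_CONTENT_TYPES.includes(value.split(";")[0].trim())) return false;
    } else if (!SAFE_HEADERS.includes(lower)) {
      return false;
    }
  }
  return true;
}
```

A form POST with `text/plain` sails through; a `PUT` or a JSON body would normally trigger a preflight - which is why sendBeacon letting complex requests pass as simple ones was such a problem.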

• If you know what’s inside, it’s a lot easier to use some of these techniques when you may only have a narrow window.
• The free versions of many commercial security tools (e.g. PassiveTotal, Rapid7), or their public dumps, may disclose host names, certificate names (via certificate transparency), host names embedded in mobile apps, or host names hoovered up from DNS probes.
• All this can add up to a map of an internal network.
• Try it out on Netflix (with permission).

• This gets about 500 second level domains for Netflix.
• Starting with Netflix.com gets a few thousand domains (from Project Sonar, certificate transparency, and so on).
• Pointers to Puppet, Confluence, Wikis, build servers, and so on and so forth.
• You can look at their open source tools to make it easier to find the known bugs and problems.
• There may be multiple copies - e.g. more than one Jenkins server in the org.
• Jenkins has a long history of XSS vulnerabilities, and they’ve always been treated as low priority.
• Using the techniques described, use WebRTC to launch a BeEF attack from the victim browser to spray known rXSS attacks across the /24 space the client IP is connected to.
• DNS rebinding can be used to trick the browser into disabling its CORS protections by confusing it as to what’s internal and what’s external.
• Can we do better than Spray and Pray?
• You can do DNS rebind to CNAMEs.

All of this means that seemingly trivial vulnerabilities in hosts not exposed to the Internet can escalate to serious breaches. Not only that, but combined with ServiceWorkers we can gain serious persistence and access - up to 30 minutes after a browser tab is closed, for example.

Also: please support the open source software that you use. Provide financial support, patches, whatever makes sense.

## Overwatch Cyber-Espionage Tool

Wayne

Overwatch Offensive is an espionage tool; it’s an implant which is intended to perform espionage. Wayne says he’s “not a programmer” but this is obviously not true any more.

The journey from script kiddie to professional is the journey from simply using other peoples’ tools, to modifying them to do what you want, through to writing your own. Wayne has spent a lot of time looking at the tools in the Vault 7 leaks to understand what they do and how they do it.

Making his own tool is Wayne’s “digital Everest”: to produce something of state-sponsored adversary quality; to build a long-term digital espionage tool.

Wishlist:

• Easy access.
• Quick setup.
• Secure communication.
• Dashboard view.
• Easy to operate - “I had brain surgery a few years ago and if it happens again I want to be able to operate it. It’s planning for the future”.
• Database.
• Scalable.
• User land - no admin required.
• Audio and screen capture.
• Keylogger.
• Memory based.

Work started on Sunday, 15th of October 2017. Day 1 - “stop being a windbag and get on with it”. Wayne sketched out an overview:

• RESTful API. “I’ve tested these before, but I’ve never written one.”
• Ubuntu.
• PostgreSQL.
• WebUI.
• C++.
• Microsoft C++ REST SDK for the client.
• Callback period.
• Stored in a DB.
• Examine results in a WebUI.
• Transport
  • Fetch and respond to tasking.
  • Encryption.
  • JWT for auth.
• Operational window
  • Define when an implant is active.
• UI
  • Oversee all this.
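
The operational-window item is easy to picture: the implant checks the clock before beaconing. A hypothetical sketch (the real tool is C++, and the parameters here are invented for illustration):

```javascript
// Return true if the implant should be active. The window is a pair of UTC
// hours and may wrap past midnight (e.g. 22:00-06:00 for after-hours work).
function inOperationalWindow(date, startHour, endHour) {
  const h = date.getUTCHours();
  return startHour <= endHour
    ? h >= startHour && h < endHour
    : h >= startHour || h < endHour;
}
```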

That’s a lot of work… maybe it would be easier to “get a job at the CIA and steal theirs.” Luckily the Vault 7 leaks include the CIA guide on how to write hacking tools, which was a great how-to of things to do, and not to do. One challenge was ubiquitous encryption and tokenisation of all data (and apparently no swears in the code).

Wayne was haunted day after day by a red line in Visual Studio that stopped his code working. After 7 months the answer emerged: compile statically. This was a great feeling! But after the euphoria wore off, it turned out that there was so much more work to do: