# B-Sides Wellington Day 1

At the end of Kiwicon 10 the Crue decided they needed a break from organising the beast - a multimedia extravaganza that catered to a couple of thousand people. In light of that much-deserved rest, some public-spirited folks stepped up to organise B-Sides Wellington and give us a security conference in Wellington.

(Things they, and hence I, learned: apparently it’s easy to get permission to fly a drone around parliament. It’s also really easy to have seagulls attack it.)

## Communication - an underrated tool in the infosec revolution

Katie Ledoux @kledoux

Katie is on the internal security team for Rapid 7. “We’re at an exciting time for infosec right now. We’re in the boardroom now. We may even get the funding to do our jobs properly.”

“If we are in the middle of an infosec revolution, then we are the revolutionaries. And I am here to suggest communication can be a useful tool in our arsenal.”

### Invisible InfoSec Team

• Katie did a survey of people across industries about their experience with infosec teams.
• Most people had never directly interacted with their team; in big companies some people believed they didn’t have one.
• Maybe you should make improving the quantity and quality of communication with your organisation a goal for this year.

Some ideas:

1. Hang out with your co-workers in real life? Katie noted the first time she suggested this, she literally got booed. So maybe, maybe not. Personally I’m not a fan, although not to the point of booing[1].
2. Templates: do you have standard ways of communicating with the rest of the org for security alerts? If you work on templates, or improving templates, you can improve the quality as well as quantity of what you send.
3. Documentation: what do you want to make it easy for people to talk to you about? Is it easy for people outside your team to understand how to talk to your team?
4. Demos: in Rapid 7 the security team are part of the product team, so are in the habit of Agile-style demos. Can you do something similar? Lunch sessions? Presentations?
5. Chat applications: are you on IRC/Slack/whatever? Does your team have a public channel to chat? Do you participate in other people’s channels?

Make sure you measure whether the things you’re trying are actually working: define what success looks like, and continually test what is and isn’t working, and adjust accordingly.

“We should be approaching communication like we approach anything else: vulnerability management for example.”

It’s also easier if you hire communicators.

### Ye Olde Talent Gap

Infosec thought leaders - “I am a self-appointed thought leader” - like to talk about a talent gap. Katie posits that one of the reasons for the talent gap in infosec is that we have an overly-narrow idea of what a security professional looks like: we focus heavily on the technical aspect of the job.

This leads us to have security engineers trying to do things they may be bad at - like presenting a security policy to a wide audience, for example - while discounting the infosec value of people who are not hardcore techies.

Katie has some stats around this: recruiters are trying to hire communication and analytical skills, while people looking for infosec work focus on the hands-on skills. This is exacerbated by a tendency to mock people with broader skillsets (Katie cites, for example, the mockery of people with BAs); so long as people perpetuate a culture of hostility to people who don’t look like Mr Robot, it won’t get better.

### Our Whole Lives?!

Katie cites work by Claire Tills, who has spent a lot of time working on security messaging: in particular, people in general, not merely in infosec, respond poorly to scare stories. Trying to frighten people into compliance doesn’t work that well. An example is encouraging people to use sunscreen: experimentally exposing one group of people to a scare campaign and another to positive messages resulted in higher use of sunscreen by the group with positive messages.

The blame culture, with the language of blame, is incredibly unhelpful.

• Don’t attack individuals; instead, understand the context that leads people to behave in a particular way.
• You should team up with people to understand why incidents have occurred.
• It’s not “you fucked up”, it’s “something bad happened to us and we need to understand why so we can improve things.”
• “Don’t git-blame, git-solutions”.

Listening: yes, try asking questions and actually listening to the answers instead of merely waiting for your turn to talk. When people tell you they don’t want to, or feel they can’t, do a thing, you need to understand their point of view, their drivers. If the conversation leads them to the same conclusion as you, great! But you shouldn’t see this as just another way to get people to agree with you; you should be prepared to learn that someone has a different risk profile and drivers to you.

The more opinions you listen to, the more likely you are to understand the right answers.

## Confessions of a Red Teamer

Pipes

After years of organising Kiwicon and being too busy behind the scenes to be part of the conference as such, “it’s pretty awesome to actually speak at a conference.” “This is a talk about how to make my job harder.”

• Threat modelling is important. We love to talk about threat modelling and understanding risk, but we do it really badly. You need to be realistic about your threat[2].
• Red teams aren’t special forces, “even though we like to think we are.”

Fundamentally, attackers want to get creds and own stuff. The red team’s job is basically pinball: the attacker needs to get the ball on the table, and then flip it around the table, looking for ramps and multipliers and multiball. So if you’re a defender, your job is to make the table harder and harder.

And like pinball, the attacker has to spend something to play: using tools, time, money, techniques to get into the network. “Attackers have bosses and budgets too” - Phil Venables. If you can burn their time and money without them winning, they’ll probably move on to the next target. “Safety is achieved when attacker cost exceeds the value” - Dino Dai Zovi.

This goes well beyond patching.

Change the game - make people change their play book. This drives up cost, time, and most importantly, risk. When attackers use more risky techniques - new, novel, poorly understood - they’re more likely to make mistakes, and then they get caught.

Frustration is the thing you don’t hear about: attackers get frustrated when things don’t work right.

### MFA

• MFA all the things. Proper MFA. MFA all the things, don’t MFA your on-prem and not your cloud (or vice-versa).
• Non-phishable MFA is ideal; a YubiKey is better than SMS, for example. But pipes is a strong advocate for the idea that something is better than nothing. SMS is a no-brainer if that’s all you can roll out. If an attacker does a number portability attack, for example, there is a non-zero chance the victim will quickly notice the compromise. The CEO ringing the helpdesk to find out why her phone suddenly stopped working should ring alarm bells.
• MFA is a great distributed alerting system. Every user in the org might notice you trying to attack via their credentials and report it. pipes’ heart drops any time he launches an application and sees it hang, because he knows it’s hanging because somewhere, a user is looking at an MFA prompt and (hopefully) alerting their security team.

### Restrict Operating Environments

• Locking down endpoints is not the only concern - consider applications and so on[3].
• Sandboxing is good.
• Restrict execution on the endpoints.
• A VDI can be useful because the attacker doesn’t know what additional security is happening in the hypervisor.
• Qubes is great.
• Privileged access workstations are a pain in the neck.
• Whitelisting can be very useful. Yes, there are bypasses, but they need to be tailored to the environment.

### Say Yes

• Shadow IT is getting you owned.
• If you think that you don’t have a Shadow IT problem, you probably do. You just don’t know about it.
• The biggest gap for attackers is defenders who don’t know their own environment - and against sophisticated attackers it may be the case that the attacker has done more recon and analysis[4].
• Everything seems locked down beautifully, but then there’s something sitting off to the side.
• Cloud to on-prem compromise is common.
• If you say no, everyone works around you. If you say yes, you can make sure things are done right.
• You get the controls you need and the insights that help you understand risk.
• If the problem is something like a third-party box, then at least get some visibility of what it’s doing.

• Users know what’s up. They’ll tell you if something’s not right.
• BUT ONLY IF YOU’RE APPROACHABLE.
• This is an out-of-band monitoring system.

### Restricting Privilege

• Are we auditing the right things?
• Attackers have tools like Bloodhound now.
• e.g. how many people have separate accounts for regular and domain admin profiles - and then re-use passwords.
• Password management is awful. This is why people are working around it; for example:
• Bless.
• Identity aware access controls (only able to reach things the endpoint user is allowed to access).
• IAM delegation/firefighting attacks.
• No direct production access!

### Limit Macros

• At the gateway. Kill ‘em all.
• There are very few, if any, cases where there is a legitimate business case to accept untrusted documents with macros enabled.
• If you do need to accept files from externals:
• sign them.
• don’t use email for sharing, use proper file transfers.

### Situational Awareness

• Know when the house is burning down!
• You don’t need to read every log, but you need to be looking for signs.
• Work out baselines and visualise them.
• Avoid getting caught up in being so overloaded with false positives you ignore the real ones.
• Use canaries on your shares, documents, DNS, and so on.
• These keep attackers up at night.
• These are simple honeypots. You can be sure that if someone tries to RDP to a box that is never used, they’re an attacker.
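The canary idea reduces to a few lines: keep a list of decoys that nobody legitimate ever touches, and treat any access as an alert. A minimal sketch in Python - the asset names and event shape here are invented for illustration:

```python
# Canary assets: decoys no legitimate user or process ever touches,
# so any access event naming one of them is an alert by definition.
# (Names and event shape are made up for this sketch.)
CANARIES = {"decoy-rdp01", "finance-backup.xlsx", "canary.internal.example"}

def canary_hits(events):
    """Return the access events that touched a canary asset."""
    return [e for e in events if e["target"] in CANARIES]

events = [
    {"user": "alice", "target": "fileserver01"},   # normal activity
    {"user": "svc-x", "target": "decoy-rdp01"},    # nobody should be here
]
alerts = canary_hits(events)
```

The false-positive rate is essentially zero, which is exactly what makes canaries such cheap, high-signal detections.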

### To Summarise

• If you do just one of these things, you’re better than most people. If you do two, you’re better than 80% of people out there.
• Something is better than nothing. Even a half-arsed visualisation is better than nothing. SMS 2FA is better than no 2FA.
• Saying yes so you stay aware of things is the best thing you can do.

## Beer, Bacon, and Blue Teaming

Chris Campbell

This talk is focused on cost-conscious blue teams.

• Threat hunting is the art of finding needles in haystacks.
• Hunting consists of collecting large datasets and identifying anomalies.
• e.g. looking at HTTP traffic leaving your network for unexpected user-agents and the like.
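A toy sketch of that user-agent hunt: tally the User-Agent strings leaving your network and surface the rare ones. The tab-separated log format here is invented for the example:

```python
from collections import Counter

def rare_user_agents(log_lines, threshold=2):
    """Tally User-Agent strings from outbound proxy logs (an invented
    tab-separated "ua<TAB>url" format) and return those seen fewer
    than `threshold` times - the anomalies worth a closer look."""
    counts = Counter(line.split("\t", 1)[0] for line in log_lines if line.strip())
    return sorted(ua for ua, n in counts.items() if n < threshold)

logs = [
    "Mozilla/5.0\thttp://example.com/a",
    "Mozilla/5.0\thttp://example.com/b",
    "Mozilla/5.0\thttp://example.com/c",
    "python-requests/2.18.4\thttp://198.51.100.7/gate.php",  # the odd one out
]
rare = rare_user_agents(logs)
```

Real hunts do this over millions of lines in a log platform rather than a Python list, but the shape of the question - "what's rare here?" - is the same.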

### OSINT

• Datasets can be obtained from open and closed sources, e.g. AlienVault OTX or MISP.
• These will ingest usable signs of potential intrusion: signatures, traffic patterns, and so on.
• Social media: you’d be surprised what you can get from e.g. Twitter.
• Home-grown intel:
• Honeypot with e.g. Cowrie or Dionaea. These give you clear information about attackers on your network.
• Honeypots are great for automated or low-knowledge attackers.
• Spamtraps are great. Malware is still heavily distributed by email.
• If you do identify attacks and attackers, you can use the information you glean for detecting future attacks.
• tracker.phage.nz

But this all sounds like hard work, right? And what do you do with it?

### Examples

• Lab setup:
• Windows 10 victim.
• Use VMCloak to hide the nature of the VM.
• Sysmon driver feeding into collector-sidecar, feeding into Graylog.
• Never connect directly through your ISP - use a VPN or somesuch.
• Filter out Windows Defender noise.
• Security Onion
• An IDS that can run standalone or in a distributed model.
• Based around a heavily enriched Snort.
• Sguil: Real-time monitoring of IDS alerts.
• Squert: A nice web UI that makes it easier to refine the information highlighted in Sguil.
• Bro Intel: A Critical Stack agent that uses the Critical Stack Intel Marketplace to get an external feed of signatures to enhance what you get out of the box with Security Onion.
• Security Onion now includes the ELK stack to make it even easier to understand what’s going on.

#### Network Hunting:

• Using a modular banking trojan.
• The trace starts with noting what looks like Tor traffic on the network, which you wouldn’t expect to see in a corporate environment (for example).
• Tracing the traffic back to the VM allows you to check for other suspicious activity.
• In this case, there is: bro and snort raise a number of alerts.
• From there, drilling through the traffic history shows a GET request pulling back an executable.

#### Sysmon Hunting

• .NET based, sold as a “remote admin” tool. Used by everyone from script kiddies to professional criminals.
• Considered one of the best on the market.
• Sysmon logs show 4 command line events which are identified as suspicious.
• Looking into the first, it shows a scheduled task being added.
• Searching around the scheduled task creation alert, various other activities are seen, such as hooking into the USB code.
• Threatcrowd confirms the IP address that the executable is communicating with has been used in many attacks and is probably a command-and-control bot which can be safely blocked.
• Various other activities can be traced: sample.exe copying backups of itself, downloading other binaries, and so on.
• If you’re up to analysis, you can spin up Cuckoo Sandbox or the Payload Security sandbox.
• In the latter case they may feed back further information about the malware sample, which you can then feed back into your detection systems as a MISP or OTX event.

#### Hunting on the Endpoint

• ThreatHunting PS Module.
• This can gather data of interest from remote systems, filter it, and identify outliers. Does require PowerShell remoting.
• LOKI IOC scanner.
• Drop in config items with known-bad exes, network addresses, and similar malware indicators.
• Yara Hunting: takes input from ThreatHunting and LOKI and visualises what they’ve found.
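One common way to identify outliers like this is least-frequency "stacking" across hosts: processes that only appear on one or two machines in a fleet are worth a closer look. A sketch of the idea (the fleet data is invented, and this isn't the module's actual implementation):

```python
from collections import Counter

def stack_count(host_processes, max_hosts=1):
    """Least-frequency ('stacking') analysis: count how many hosts each
    process name appears on, and return the ones seen on at most
    `max_hosts` hosts - rarity across a fleet is a useful outlier signal."""
    seen_on = Counter()
    for procs in host_processes.values():
        for proc in set(procs):   # de-dupe per host before counting
            seen_on[proc] += 1
    return sorted(p for p, n in seen_on.items() if n <= max_hosts)

fleet = {
    "host1": ["svchost.exe", "explorer.exe"],
    "host2": ["svchost.exe", "explorer.exe"],
    "host3": ["svchost.exe", "explorer.exe", "sample.exe"],  # one-off binary
}
outliers = stack_count(fleet)
```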

### Takeaways

• You can build a security system without buying a bunch of super-expensive boxes.
• Microsoft keep improving sysmon.
• Attackers are using more and more general-purpose tools rather than specialised ones.

## Pkit Finder

Casim Kahn

“phishing attcacks exlpoi weakkness of humaan brain for whcih no pach available!”

(A subheading which @pjf would be proud of.)

• Working in a CERT, Casim sees a lot of phishing attacks in his day job - but most CERT work is simply seeing an attack, filing a takedown, and moving on. He is interested in getting a better understanding of where the attacks come from and how they work.
• This involves delving into “the dark side”, since the information isn’t freely available.
• Phishing kits - you can buy a payload quite cheaply, or you can buy the developer tools and help.
• The people selling you the kits will walk you through how you need to modify the kit for your target. Customer service!
• You’ll need to pop your own server to host the phishing kit.
• And then you need to send the payload via an appropriate server.

• Stage 1
• Fetch URLs from phishtank and openphish hourly - about 40 - 50 per hour for NZ addresses.
• Manually added from the NZ banks, NZ CERT and similar.
• Stage 2
• Pull out interesting indicators from the phishing kits.
• Stage 3
• Hash the unique kit identifiers and add them to the DB.
• There is a high level of re-use in the kits.

As of yesterday, Casim has over 20,000 kits, using more than 30,000 email addresses across over 3,000 domains. This generates almost 4,000 unique hashes.
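A sketch of how fingerprinting a kit down to a reusable hash might work - reduce each kit to its stable identifiers (file names, drop email addresses) so the same kit hashes identically when it pops up on a new domain. The kit layout and fields here are invented; Casim's actual pipeline may differ:

```python
import hashlib
import json

def kit_fingerprint(kit_files):
    """Reduce a phishing kit (a dict of file name -> contents, invented
    for this sketch) to a hash of its stable identifiers: file names
    plus any drop email addresses found in the contents. A reused kit
    then collapses to the same fingerprint across deployments."""
    stable = {
        "files": sorted(kit_files),
        "emails": sorted({tok for body in kit_files.values()
                          for tok in body.split() if "@" in tok}),
    }
    return hashlib.sha256(json.dumps(stable, sort_keys=True).encode()).hexdigest()

kit_a = {"login.php": "mail drops@example.net", "style.css": ""}
kit_b = {"style.css": "", "login.php": "mail drops@example.net"}  # same kit, new host
```

Sorting the identifiers before hashing is what makes the fingerprint stable regardless of file order or hosting.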

### Findings

• All the New Zealand banks, as well as some British ones, are represented.
• Hotmail is still the first choice for the phishers.
• The phishers use brands as hooks: Apple, Dropbox, Google Doc, Paypal, and O365.
• There are kits being re-used for over a decade (same kit, new attacks).
• Some attackers have been targeting the same banks for years and years; some are common to phishing on several banks.
• Some of these may be people who are actual attackers; others are people whose ID has been compromised and used in attacks for years.
• People hand over amazing amounts of information.
• Everything: PIN numbers, security questions, you name it.
• crime-site.txt is a file name that keeps showing up time and time again.
• If you pull the file back, it will be full of goodness: specifically the command-and-control config for the phishers.
• Appears to come from one or more white hats who are attacking the compromised site, pulling down the phishing site, and then leaving the information about the attacker in the file.

Jeremy Stott

(More like within 15 minutes of real-time, but that’s good enough.)

Security Operations Engineer at Vend - broke barcodes and capacitive touchpads at Kiwicons.

How do you find out you’ve been hacked?

• Employee notices something strange.
• 3rd Party vendor reports a problem.
• Hacker reports a bug.
• You never find out.

The earlier options are faster. That’s where you want to be. The normal advice is “watch your logs”. No-one likes to watch logs. But people like to watch fire.

logfire.io: You send us logs, we set them on fire.

(It implements an endpoint which you POST a log, and then adds it to a blockchain. Until it runs out of memory and the blockchain gets deleted.)

OK, so fire for logs is not realistic. Jeremy looked at a bunch of options and settled on StreamAlert:

• Serverless.
• Works on AWS.
• Python rules.
• Terraform samples.
• Multiple imports (S3, SNS, Kinesis) and exports (Slack, SMS, etc).
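"Python rules" means alerts are just Python predicates run against incoming records. Here is a simplified standalone imitation of the idea - the decorator and the CloudTrail-shaped record are stand-ins, not the real StreamAlert API:

```python
RULES = []

def rule(func):
    """Minimal stand-in for a rule decorator (the real StreamAlert API
    differs): register a predicate run against every incoming record."""
    RULES.append(func)
    return func

@rule
def root_console_login(rec):
    # CloudTrail-shaped record: flag interactive root console logins.
    return (rec.get("eventName") == "ConsoleLogin"
            and rec.get("userIdentity", {}).get("type") == "Root")

def evaluate(record):
    """Return the names of every rule the record trips."""
    return [r.__name__ for r in RULES if r(record)]

suspicious = {"eventName": "ConsoleLogin", "userIdentity": {"type": "Root"}}
benign = {"eventName": "DescribeInstances"}
```

In the real thing each tripped rule fans out to the configured outputs (Slack, SMS, and so on) instead of just returning a name.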

Kinesis is quicker (~10 seconds), S3 is slower (10 minutes), but Kinesis is a lot of pain to use with CloudWatch cross-region.

(The AWS CIS Foundations Benchmark is a really useful starting point for understanding what sensible things to do are with CloudWatch.)

This is great and all: write alert snippets that run as Lambda functions across your alert stream. But Jeremy can’t stop thinking about lp0 on fire. How can he draw better attention to the presence of errors? Maybe you could trigger something else.

Like balloons.

Jeremy has managed to find an air compressor. That can be rigged for remote control. “I may have a problem,” he opines.

Maybe. But as I’m watching the cloud inflate a balloon I reflect that it’s a pretty awesome problem.

## IOP: The Internet of Pancakes

Peter Jakowetz

Quantum Security and a background in electrical engineering.

While looking for a CNC machine, Peter found a pancake maker on TradeMe. It’s PancakeBot, and Peter wanted to put it onto the Internet. PancakeBot started as a home project that turned into a Kickstarter, and is now onto its third revision. It takes GCODE, like a CNC machine or 3D printer, and turns it into pancakes, all for the measly cost of $250.

Peter has identified the main codes that PancakeBot uses to send co-ordinates, control speed, and turn the pumps on and off. It’s built out of an ATmega2560 (basically an Arduino), stepper motors, and a pump. The PancakePainter and PancakeBot firmware are open source and available on GitHub; it all appears to be based off an older 3D printer and software stack.
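For flavour, a sketch of generating a GCODE fragment for one stroke of batter. G0/G1 are the standard rapid/linear moves; the pump on/off M-codes here (M106/M107, borrowed from 3D-printer fan control) are placeholders, not necessarily PancakeBot's real ones:

```python
def batter_line(x0, y0, x1, y1, feed=2000):
    """Emit a GCODE fragment drawing one straight line of batter.
    G0/G1 are standard rapid/linear moves; the pump M-codes are
    placeholders - check the PancakeBot firmware for the real ones."""
    return [
        f"G0 X{x0} Y{y0}",          # travel, pump off, to the start point
        "M106",                      # placeholder: pump on
        f"G1 X{x1} Y{y1} F{feed}",  # linear move at feed rate, extruding batter
        "M107",                      # placeholder: pump off
    ]

fragment = batter_line(10, 10, 60, 10)
```

PancakePainter is essentially doing this at scale: turning drawn strokes into sequences of moves with the pump toggled around each one.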

The biggest obstacle to using it on the Internet was the fact the USB port doesn’t work out of the box. You can’t put it on the Internet if you have to move stuff with an SD card every time. So Peter replaced the original controller (after bricking it) with an Arduino controller and shield to drive the pancake maker itself, paired up with a Raspberry Pi running OctoPrint to manage the printer.

Peter noted that he found a few hundred OctoPi instances available online. Which is less than ideal; apart from things like RCE, you can set printers on fire in a worst case scenario.

## Investigation of recent targeted attacks on APAC countries

Noushin Shabab

Noushin is a Senior Researcher at Kaspersky, specialising in attack investigation and forensics.

Stuxnet was an early major, high-profile APT. Since then APTs have been growing, both in terms of their number, but also their spread: they are no longer limited to “four Middle Eastern countries”, but are happening all over the world and across many industries.

### Major Cases

• Regin.
• Started 2003, APAC countries amongst the primary targets.
• Advanced, many vectors and countries.
• NetTraveler.
• Discovered in 2013, but started in 2004.
• Principally targeting India and Mongolia.
• A variety of tools and techniques.
• Dropping Elephant.
• Operating primarily out of India, targeting China and Australia amongst others.
• A more recent attack with new techniques and methods.

These are historical events; some current examples include:

• Lazarus:
• Ongoing threat, active from at least 2009.
• Responsible for the attack on Sony Pictures.
• Multiple major attacks on South Korea.
• Initially focused on attacks against South Korea, but have spun off a side line in money theft and crypto mining (Bluenorroff).
• Constant use of new tools and techniques.
• Spring Dragon
• Long running with massive scale of operations.
• Main targets are around the South China Sea (initially Taiwan, and then spreading from there).
• Active since 2012.
• More than 200 C2 servers.
• Over 700 customised backdoor samples.
• Infiltration attacks: spearphishing, watering holes (compromised websites to target organisations in Myanmar).
• Currently appear to be launching a new round of attacks:
• Targeting high profile government orgs, political parties, educational institutions and universities, and the telecoms industry.
• The first two are normal targets; it is historically unusual to see attacks on universities, but they represent a soft target.
• Almost all APAC countries are affected, except Australia and New Zealand (so far).
• Their toolsets appear to be unusual/unique, highly customised. They are also changing and evolving. Customised per victim, both the payload and C2 backend.
• The C2 servers appear to be hardened, checking the client for validity to make it harder to examine them.
• When analysts have published the details of tools in the past, the Spring Dragon attackers have stopped using those tools and begun developing/deploying new ones.
• Every new release adds encryption and obfuscation to evade detection and slow researchers/defenders down.
• C2 IPs are all over the world, but more than 40% are in Hong Kong. Others are scattered around the world.
• Possible origins:
• Timestamps: with over 700 malware samples, most build activity happens at GMT+8, with a second shift, either later in the same time zone, or in another time zone.

## Gophers, whales, and clouds? Oh my.

Glenn ‘devalias’ Grant

Filthy deviant who drinks coffee with butter in it. Also a pen tester.

• Docker. Small, light, consistent, repeatable deployables.
• Glenn loves it for being portable.
• Serverless and FaaS.
• Running bare functions on someone else’s computer.
• Another abstraction above PaaS, effectively.
• Modular, easy to understand, easy to update independently.
• Lambda, Cloud Functions, Azure Functions.
• Golang
• Likes it because it “doesn’t have pointers, memory management, or things that don’t belong in 2017”.
• “Unfortunately not functional.”

An Unexpected Journey:

• Gobuster: a DNS and directory/file busting tool. It’s great, but it’s quite heavyweight and takes a long time to run.
• Why not run it on Lambda?
• Well, Lambda doesn’t natively support Go; but Apex allows you to wrap a Go executable and run it on Lambda.
• So: divide up your wordlist, run multiple instances of gobuster, and collate the results.
• After a bit of time spent disappearing down various rabbit holes, Glenn has 50 threads spun up on Lambda doing the cracking.
• Running about 50 Lambda slices with 100 threads per slice seems to be optimal, which costs about 5 cents.
• Demo code.
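The divide-up-the-wordlist step might look something like this (a sketch of the fan-out idea, not Glenn's demo code):

```python
def chunk_wordlist(words, slices):
    """Split a wordlist into `slices` near-equal chunks, one per Lambda
    invocation; each slice runs gobuster independently and the results
    are collated afterwards."""
    size, rem = divmod(len(words), slices)
    chunks, start = [], 0
    for i in range(slices):
        # The first `rem` chunks take one extra word each.
        end = start + size + (1 if i < rem else 0)
        chunks.append(words[start:end])
        start = end
    return chunks

words = [f"path{i}" for i in range(103)]
chunks = chunk_wordlist(words, 50)   # one chunk per Lambda slice
```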

So what else could we do? nmap? Fuzzing? Spidering? Bear in mind there’s a 300 second limit, and it’s not a conventional execution environment. So ECS (the AWS container service) might be a good alternative, but there’s still a lot of scaffolding to run data in and out.

Enter AWS Batch: define a job, connect a queue, run on ECS, collect output.

Another option is OpenFaaS, a project to allow FaaS on your own hardware; it makes use of Docker containers. It needs about 4 lines and one extra layer in your docker container to make all this work.

Some other tools:

• Kubebot is a security-testing Slackbot that deploys to the Google Cloud Platform.
• UPX, the Ultimate Packer for eXecutables: with Golang and gobuster it can pull you down to a megabyte per image.
• Golang CLIs with Cobra - makes it easy to write good command line tools with Go.
• Clone the repo, start a new project, you’ve got a CLI tool.
• GopherBlazer
• Replace a pile of shell script wrappers with a single golang tool.
• Unfortunately there are too many ideas and rabbit holes, so it’s a bit stalled.

Some things to think about when evaluating container images:

• Is it the official container?
• How much is it starred? How often has it been downloaded?
• Is the Dockerfile updated?
• Is it an automated build?
• Is it recently updated?
• How large is it?

Takeaways: be curious. Play with new things to change your job. Don’t do things because that’s the way you’ve always done them. Share what you learn. Let’s bring everyone up together.

## Secrets of a high performance security focused agile team

Kim Carter @binarymist

Writing “Holistic InfoSec for Web Developers.”

Purple teams: teams who are their own attackers and defenders.

### How Development Teams Fail

• “Hire code monkeys”. Ugh. Yeah, not loving this. Hi contempt culture.
• Reward pumping out features at the cost of technical debt.
• As debt mounts up, problems begin to mount up.

“Professional developer vs code monkey” is bullshit framing and frankly beneath the conf. Doubly so from a speaker who is obviously smart enough to describe how incentives create and encourage behaviours.

### How to Succeed with Security as a Developer Team.

• Security testing is part of sprints.
• Security needs to be included in the definition of done.
• You want to continually catch these defects as quickly and cheaply as possible; the further along the release pipeline you get, the more they cost.
• But pen testing is expensive! How can we do this?
• Define your security story. “Please refer to the first chapter of my book.”
• Establish a security champion; not someone external to the team, but someone from the team who wants to adopt this role.
• Hand crafted pen testing is much cheaper at the start of development. “There’s lots of guidance in my book.”
• Automate security tools. “There is lots of guidance in my book.”
• Consuming Free and Open Source. “This is addressed in my book.” “This is risky software created by amateurs.” Apparently not the same quality as commercial software.
• “Don’t install node.js the official way.”

I stopped taking notes at this point, because this talk doesn’t deserve it. The only reason I didn’t leave early is because it would have meant walking over/through too many people to be polite. I would not normally be this negative about any talk, no matter how bad, but this was an abrasive sales pitch for a book, one that expressed bottomless contempt for so many people - developers, managers, customers, and, by running well over time, the other presenters and his audience - that I don’t feel compelled to be polite about it. Maybe Mr Carter is actually a nice person, but it sure as hell didn’t show here.

If you want to throw money at someone to help you with these sorts of problems, talk to safestack.io or Eiara, who will actually help you and won’t be dicks about it.

## Lies, Damned Lies, and Security

Michael Shearer

What can CERT NZ do for you?

• Incident response and analysis.
• Co-ordinated vulnerability disclosure.

CERT want to know everything they can about what’s going on in New Zealand: they’d like to help you with ongoing incidents, but they’d also like to know about incidents that are done, too. Everything they can learn about what’s going on helps. However, CERT is not:

• The Internet Police.
• IT support/helpdesk.
• A security agency - “we are 100% blue team.”

Problems we see:

• “The workstation LAN is trustworthy.”
• The edge/DMZ gets all the attention, while the inside is soft and delicious.

We need to get better at giving users the tools to do better. Have password managers as part of your standard build, for example.

Perfect security is hard, but basic security isn’t.

• Don’t try to eat the elephant in one bite.
• Incremental improvement today, rather than perfect one day.
• Accept that sometimes the new thing isn’t perfect, but is still better than the current state.

Running Linux or whatever doesn’t make you safe. It’s not about the OS, it’s how you configure it. Basic hygiene is still critical.

Consider ssh keys: ssh keys suck (long live ssh keys). So what does ideal ssh look like?

• Gotta be easy to use.
• Gotta be easy to deploy.
• Stop attackers piggybacking off sessions.
• Protect key material from attackers.
• Material should be securely portable.

Michael gives a demo of how to use a yubikey to secure your ssh keys. One of the things it demonstrates is that PGP is not transparent even to smart people!

## End of Day 1

Off to a flying start. Things have been well-run and I’ve enjoyed (all but one of) the talks. I’m really impressed by what the team have put together, and I’d make special note of the abundance of blue team talks, which gives a different and valuable spin for a security conference. Many thanks to Erica, Kate, Skooch, and Chris.

1. I like many of the people I work with but I’m not wild about spending my personal time with them, especially when I’m so lousy at keeping up with my non-work friends. And, moving beyond the personal, a lot has been written about how this kind of intrusion into personal time can create unhealthy and discriminatory structures in the workplace: if the key to advancement is spending evenings or weekends at happy hours, you build a work environment that is hostile to, for example, people with childcare responsibilities.
2. I have a whole rant about this. Just as cryptocurrency is rediscovering the principles of financial systems from first principles, infosec is rediscovering the principles of risk management from first principles. One of the most neglected points here is that security is not, in and of itself, an absolute virtue. It’s possible that the risk around a security breach may be lower than the cost of an outage for a patch going wrong.
3. If you’re obsessed with locking down your domain admins while leaving open user accounts that pay money out of your company, you are probably doing the wrong thing.
4. A thought that came to my mind for this: tying back to the keynote, when you are a punishment-oriented infosec team, you will experience this, because you’re a scary obstacle no-one talks to.