Firstly, well done to KiwiPyCon for keeping a hybrid conference running, in this case with the same Venueless setup that worked well for linux.conf.au this year; I hope this becomes the new norm for conferences, but I guess we shall see.
The team with 30 million players
Laura Bell
Laura is very pleased by the architecture, but refuses to be diverted into discussing the building; she wants to convince us to become the world’s biggest security team: “The best security isn’t noticeable, it just makes everything better”, and her goal is to make it the ubiquitous ingredient.
Where does the 30 million come from? Well, it’s the number of developers in the world - but it doesn’t just stop there. There are people who “aren’t really developers” who are integral to development happening.
And it’s growing - at the rate of 6% per year.
The security world is not as mature as we’d like to think - Laura would suggest only a couple of percent are on the cutting edge, doing most things well; another 15% are in what she describes as “fragmented exploration”, trying to adopt better practices, and the rest of us are mostly muddling along with no, or only traditional, security. So what to do about it?
Well, COVID-19 was a pause: SafeStack lost over 90% of its revenue, which left Laura time to think, and to observe how the COVID response resembled good security.
Tracing
Laura is amazed that the COVID tracing app we all used was delivered in such a timely fashion. Tracing was a critical part of the COVID response, and it matters in software too. It’s easy to fall into the trap of taking a self-centric, simple view of how our code works and what it depends on. In fact, most of our code depends on other people’s code, and that code depends further on other code. As with COVID tracing, the challenge in security is that these dependencies are poorly understood, if at all, which makes it difficult or impossible to reason about whether a security flaw is a problem for us.
Laura shows a map of npm, but is quick to caution against complacency: all packaging systems have one version or another of this problem. If your libraries are large, each component carries a higher risk, but has fewer connections; if you have many small packages, each one is low risk but hard to trace - and in reality, we rarely have a simple dependency on one language. This inherited risk is transitive risk.
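As a concrete (if shallow) starting point for mapping this in Python land - my own illustration, not anything from the talk - the standard library can at least tell you what each installed package directly requires:

```python
# A minimal sketch: list each installed distribution and its direct
# requirements using only the standard library (Python 3.8+). This shows
# just one level of the tree; real dependency mapping has to recurse.
from importlib import metadata

for dist in sorted(metadata.distributions(), key=lambda d: d.metadata["Name"]):
    print(dist.metadata["Name"])
    for requirement in dist.requires or []:
        print("    depends on:", requirement)
```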
Laura notes that a good attacker is lazy: crafting an attack for a specific target is too much like hard work for the benefit. It’s much easier to attack a commonly-used package or piece of infrastructure: a supply chain attack. Laura offers the example of a successful attack on Heroku, which then allowed the compromise of Citrix, ThinkMD, and dnsimple, amongst others.
- Know your stack and map it.
- Choose technology carefully: why are you adopting a technology? Do you have a checklist to evaluate whether you should be using something?
- Keep up with security news!
Distancing
In COVID times, we’ve used distancing and masking to help reduce the rate of transmission. If you’re an architect - and that doesn’t mean a title, that means as someone who designs systems - you need to think about the shape of your systems, and what controls you can use to keep them safe.
Laura asks us to imagine trying to keep a bear from a pot of honey - what would we do? Use traps? A security team? Obstacles in the route? She would like us to assume that we can’t trust anything (“zero trust”); assume that everything is chaos! It can be very freeing. Place controls everywhere - consistently sanitise input; consistently log; use zones of trust, and many other techniques.
A lot of this is free! Laura mentions that OWASP has threat modelling resources freely available: the OWASP Threat Dragon; the Threat Modelling Cookbook; the Threat Modelling Manifesto.
- Design for security.
- Embrace zero trust.
- Bring everyone to the party.
Testing
Testing is often poorly done, or not done at all. It can’t prove there are no problems, but it can find problems. Testing shouldn’t be what you do last, or at one point; it ought to be done all the time, and be part of the everyday experience. We really shouldn’t rely on manual testing: because it’s too hard and expensive to do, it doesn’t happen enough. We ought to avoid it - or work out how to automate it. In security, this is a big problem, because many of the traditional tools are heavy and manual.
We should think about dependency checking, amongst other things.
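One low-effort way to automate a slice of this - my suggestion, not necessarily Laura’s - is to run a dependency vulnerability scanner such as pip-audit from the ordinary test suite, so it happens on every CI run rather than when someone remembers:

```python
# A minimal sketch: fail the test suite if any installed dependency has a
# known vulnerability. Assumes pip-audit (https://pypi.org/project/pip-audit/)
# is installed in the environment.
import subprocess
import sys


def test_no_known_vulnerable_dependencies():
    # pip-audit exits with a non-zero status when it finds vulnerabilities.
    result = subprocess.run(
        [sys.executable, "-m", "pip_audit"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stdout + result.stderr
```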
Prevention
In COVID times, a key preventative has been vaccination; in software, there are a number of preventatives, and patching is one of the key ones. So why don’t we do it?
- The code is so old that no-one wants to risk patching dependencies.
- The security patches are rolled in with breaking changes.
- There isn’t actually a security patch available.
So what do you do about those systems?
- Fork and fix? Sure, if you’re Batman. But try to push things upstream - you don’t want the maintenance burden of carrying that fork around forever.
- Ignore? What’s the worst that can happen? Sometimes it’s not actually that big a risk, particularly if you have compensating controls.
- Change the library? It’s not as easy as you think.
Look at the edge cases - the ones that you don’t have source for any more, the code that’s too hard to change.
(Bad testing is a huge blocker on patching, of course)
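As a small practical starting point - my own sketch, nothing from the talk - even just knowing which of your dependencies are behind helps with the prioritisation; pip can report that in machine-readable form:

```python
# A minimal sketch: report installed packages that have newer releases,
# using pip's JSON output. Knowing what is stale is the first step towards
# deciding what to patch.
import json
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)
for pkg in json.loads(result.stdout):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```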
Response
Prepare yourself - have an incident response plan before you need it. How do you assess risk? How do you judge the response? Think about your scenarios and what you’d do in a real incident; document it; and test it via tabletop exercises.
If you’re well-drilled and have plans that you’ve prepared before the stress of a real incident, you’ll have a more effective response when you need it.
Summing up
To keep ourselves safe, we need to keep all of us safe. SafeStack will be contributing by offering a program of free security training for every new software developer in New Zealand - and Laura is asking the older and more experienced of us to support a new generation who care about security.
A Guided Tour of Dunder Methods
Tom Clark
Much of the expressive power in Python built-ins can also be granted to our own classes via dunder methods. There are a huge variety of these methods, and Tom isn’t going to tell us about all of them. You already know some: the double underscores in __init__ signal to the Python interpreter that this is a special method it invokes under the hood when an object is initialised (alongside __new__, which creates it). Python also gives us the freedom to invent our own dunder methods, or to override existing ones in surprising ways, but Tom pleads with us not to, because of the bugs and confusion that will result.
Tom runs through some common dunder functions that you can use in your own classes to make them more idiomatic, easier to use and maintain, and “more beautiful”.
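To give a flavour of what that looks like - a minimal sketch of my own, not Tom’s slides - a small class made friendlier with a handful of dunder methods:

```python
# A minimal sketch of common dunder methods making a small class idiomatic:
# repr(), len(), `in`, and `+` all work the way users expect.
class Playlist:
    def __init__(self, tracks):
        self.tracks = list(tracks)

    def __repr__(self):              # readable output in the REPL and in logs
        return f"Playlist({self.tracks!r})"

    def __len__(self):               # len(playlist)
        return len(self.tracks)

    def __contains__(self, track):   # "some song" in playlist
        return track in self.tracks

    def __add__(self, other):        # playlist + playlist
        return Playlist(self.tracks + other.tracks)


mine = Playlist(["Karma Police"]) + Playlist(["Nautical Disaster"])
print(mine, len(mine), "Karma Police" in mine)
```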
Exercises and slides are available on GitHub.
Introduction to building a web app with Python, Flask and SQLite
Steve Dunford
A very good tutorial on standing up a basic web app in Flask, adapted from Steve’s classroom work. Nicely paced, which is sadly not as common as it ought to be for tutorials. I found the pace both relaxing - I never felt hurried, and had time to solve problems, like my sqlite DB getting corrupted - and not too slow. My only quibble is that I’m not sure it does people any favours to show them nested SQL queries rather than joins - I genuinely think the former are more complicated than the latter. Mind you, I’ve never tried teaching them to a room full of people, and Steve has, so it’s quite possible my opinion is nonsense.
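To illustrate the quibble with a hypothetical schema (these are my own tables, not Steve’s): the nested approach takes two statements and a bit of plumbing, where a join says the same thing once:

```python
# A small illustration of nested queries versus a join, on a made-up schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Steve');
    INSERT INTO post VALUES (1, 1, 'Hello Flask');
""")

# Nested: fetch the author id first, then use it to fetch the posts.
author_id = conn.execute(
    "SELECT id FROM author WHERE name = ?", ("Steve",)
).fetchone()[0]
nested = conn.execute(
    "SELECT title FROM post WHERE author_id = ?", (author_id,)
).fetchall()

# Join: one query expresses the relationship directly.
joined = conn.execute(
    """SELECT post.title
         FROM post JOIN author ON post.author_id = author.id
        WHERE author.name = ?""",
    ("Steve",),
).fetchall()

assert nested == joined
```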
Steve’s step-by-step notes are available as a Google Doc; you can do the exercises with your preferred development environment, so long as you have Python, Flask, and sqlite, or you can use the repl.it online IDE, which smooths over a lot of the underlying Python plumbing, especially the long and tedious arguments over how to manage environments and the like; I went for the latter and found it a relatively comfortable experience.
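For anyone wanting a feel for the shape of the thing before committing an afternoon, here’s a minimal Flask-plus-sqlite sketch of my own (not Steve’s tutorial code; it assumes an app.db containing a post table already exists):

```python
# A minimal sketch of a Flask app backed by sqlite: one connection per
# request, closed automatically when the request ends.
import sqlite3
from flask import Flask, g

app = Flask(__name__)
DATABASE = "app.db"  # hypothetical database file with a `post` table


def get_db():
    # Store the connection on Flask's per-request `g` object.
    if "db" not in g:
        g.db = sqlite3.connect(DATABASE)
        g.db.row_factory = sqlite3.Row
    return g.db


@app.teardown_appcontext
def close_db(exc):
    db = g.pop("db", None)
    if db is not None:
        db.close()


@app.route("/")
def index():
    rows = get_db().execute("SELECT title FROM post").fetchall()
    return "<br>".join(row["title"] for row in rows)


if __name__ == "__main__":
    app.run(debug=True)
```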