So today it was me skipping the keynote. I am ambivalent about a tour of the good ol’ days, but it sounds like the folks who attended enjoyed it, so that’s nice.
Security Vulnerability Mitigations
“This is a high-level talk aiming at breadth, rather than depth.” Talk delivers - I struggled to keep up with the volume of material Joel shared with us.
First, some definitions so we can proceed with shared assumptions: what are security vulnerabilities? Flaws in the design or implementation of a piece of software that are advantageous to an attacker; this can mean many, many things, but the holy grail is Remote Code Execution. Within that, though, there are many categories and mechanisms.
Mitigations, on the other hand, are techniques that make it harder to exploit a vulnerability. Ultimately we want to make it impractical to exploit vulnerabilities in the real world; we need to accept there are always going to be vulnerabilities, so we need to raise the bar, probably with multiple mitigation techniques. Most of these techniques fail closed - they do the safe thing on failure.
About the OpenBSD Project
- A BSD kernel and userland.
- Home to a number of open-source projects, most notably OpenSSH and tmux.
- Heavily focused on R&D for security vulnerability mitigations.
- OpenBSD both comes up with its own ideas, and has implemented ideas from academia and other software projects.
CPUs and Stacks Revisited
- CPU provides instruction(s) for function calls.
- Pushes the Instruction Pointer (IP) onto the stack and changes the IP.
- Starts executing the next piece of code (the function).
- CPU provides instructions for returning from functions.
- Pops the Return Instruction Pointer (RIP) from the stack.
- Returns to the caller and resumes execution.
In the beginning there were buffer overflows: finding a way to overwrite the RIP on the stack, often due to an incorrect or missing bounds check. The payload would also include some form of shellcode to run. “Smashing the Stack for Fun and Profit” is the seminal work in this area.
A mitigation for this is the W^X technique, where memory regions are marked as “write or execute”, never both. This makes it impossible to execute memory that has been written to. Unfortunately, this mitigation isn’t in use everywhere; for example, it’s common for JITs to ignore it.
Another mitigation is Address Space Layout Randomisation (ASLR). By placing code, data, and stack in different locations, rather than well-known ones, it is much harder to implement the shellcode payload.
Stack protectors provide a stack canary, which is unique per object file. When returning from a function, we check whether the canary is intact; if it isn’t we can assume something bad has happened, and halt execution.
Return Oriented Programming
This is a technique where the attacker provides a sequence of stack frames to form a “ROP chain”, a set of return addresses with a little data mixed in. The stack pointer is changed to refer to this sequence.
The chain is a set of what are called “gadgets”. Each gadget is a small piece of functionality (add, subtract, etc). These primitives can then be built into something more interesting. This can also use a technique called polyret - polymorphic gadgets - where the attacker takes code legitimately on the system and uses snippets of it to act as gadgets.
This renders W^X ineffective, because the attacker no longer needs to inject code of their own - the chain reuses code that is already executable.
“When Good Instructions Go Bad” was the first ROP paper; building on that we have “Blind Return Oriented Programming”. This uses a stack overflow and a service that restarts when it crashes: you then remotely construct a gadget for the write system call, which downloads more gadgets.
So how do we mitigate this?
MAP_STACK is an mmap flag that marks memory as being stack memory; the SP is checked on entry and exit for whether it’s pointing to memory which isn’t marked as being legit stack memory.
retguard computes and stores a random cookie for every function, which allows us to check whether return addresses are what they should be.
trapsleds rely on attackers using NOP sleds as part of their attack. By replacing the NOPs with other instructions, notably INT3, a sloppy attacker will end up triggering a breakpoint trap instead of sliding into their payload.
Gadget reduction: this is a cleanup of the OpenBSD code to look at different ways to produce the same thing. By changing generated code to use different registers or opcodes we can eliminate polyrets. These techniques have resulted in the OpenBSD AMD64 kernel being gadget free. That makes it much harder to build useful ROP chains.
privdrop is about dropping privileges as soon as possible; for example, tcpdump in OpenBSD doesn’t parse packets until it has dropped privileges.
privsep breaks programs into pairs, with a well-defined API between privileged and unprivileged components, minimising the ability to escalate a breach in the unprivileged code.
fork+exec/re-exec can use ASLR to force a new layout, which makes BROP harder.
Kernel Address Re-Linking (KARL) now sees OpenBSD shipping the kernel as a link kit, with the layout of the kernel changing at every boot, making ROP chains very, very hard to implement.
pledge is a mechanism where the program specifies, in its source, what functionality it needs in terms of system calls, and can subsequently reduce (but never increase) that set. If a program starts trying to use calls that it hasn’t pledged it can be terminated. This is very powerful in combination with privsep.
unveil hides the filesystem from a program, then reveals only the parts it should see (oh, hey, it namespaces!).
Finally, we need better APIs! They should be easy to get right, and hard to get wrong. For example,
strlcpy(3) does things such as correctly terminating a string as part of the API.
omalloc automates many of the things people get wrong.
Facebook, Dynamite, Uber, Bombs, & You
The story starts with Herman Hollerith, the son of German immigrants, who became an engineer working at the US Census Bureau. At that time, processing the census took years, even though they were only collecting a headcount. Hollerith set out to build a machine that would help tabulate the census results; it was ready in time for the 1890 census. This was a punchcard-driven electro-mechanical system which dramatically improved the bureau’s ability to process data - 80 cards per minute per operator. Hollerith founded a company to market the invention; through a series of mergers this became the company we now know as IBM in 1926, headed up by Thomas J Watson from 1914 (while he was in jail) until he died in the 1950s. Lana noted he did not like writing things down.
Between 1911 and 1949 IBM had a subsidiary in Germany called Dehomag. They were used by the German government for census data, and by banks. The tattoo on the arm of a concentration camp prisoner was their number in the Hollerith system.
In the Netherlands, which had extensive Hollerith infrastructure, more than 70% of Jews were killed. In France, where a number of disorganised competitors existed, 25% of French Jews were murdered. This pattern repeated across Europe.
We know about the Nuremberg trials, but many of us don’t know how wide-ranging they were. They not only covered the senior Nazis, but industrialists as well. Men like Krupp, for example, were imprisoned for their role in supplying the Nazis. You might imagine IBM would face scrutiny for their role in facilitating the Holocaust. But they didn’t; instead they supplied the translating equipment. Even today there isn’t a mention of Dehomag or IBM in the German Holocaust memorials or Nuremberg histories in Germany.
Hopefully what you’ve picked up so far is that Hollerith was a brilliant young man who solved an interesting problem; it’s not his fault it was used to facilitate the Holocaust, nor is it his fault that no-one was held responsible. But those things happened, even if IBM have largely managed to write it out of history.
Let’s talk about Alfred Nobel’s “oh no” moment: when a newspaper accidentally published an obit prematurely, it described him as the greatest killer in history, thanks to his work in developing explosives; sure, he’d developed them for civilian purposes, but as the obituary noted, they quickly came to be used as weapons. Nobel was shocked, and established the Nobel Prize.
Otto Hahn (with Lise Meitner and Fritz Strassmann) worked out nuclear fission, which was a good idea. He got a prize. Unfortunately we got the atomic bomb. And ethics committees in chemistry.
Eugenics was a way of stopping people we didn’t like from having babies. It is now considered to have been a particularly bad idea, but it was tremendously widespread in many Western countries before WW II, with eugenics committees in 33 US states.
Thalidomide was sold over the counter in 1957. It caused horrific birth defects (and the lawsuits in Australia were only settled in 2012). We have put a lot more scrutiny and oversight on medical research since then.
VW and Uber both wrote software to cheat regulations: VW’s engineers went to jail. Uber’s didn’t. What’s the difference? Well, VW is a car company, and Uber is a software company.
Change is good! Lana doesn’t want to stop change! But we should do more with our “oh no” moments. Lana suggests we should have had our oh no moment after IBM facilitated the Holocaust. But we didn’t.
Now we can’t blame the founders of the Internet for 4chan. But we can certainly wonder why social media sites have done next to nothing about their well-documented problems.
Lana talks about the Therac-25. The software was mostly written by one person, who disappeared. It led to nothing more than an out-of-court settlement.
There’s more to AI and machine learning than playing chess and winning game shows; Amazon used it to reject qualified women from jobs. And if we can’t build safe learning systems for self-driving cars, perhaps we shouldn’t give them weapons.
Or think about the Chinese surveillance state: in highly scrutinised provinces of China, checkpoints exist every few kilometres. You hand over all your devices to be downloaded while you pass through them. Every citizen in the province passes through one of them several times a day. The streets are lined with poles using facial recognition for pervasive monitoring of the population. If the state is displeased with you, you go to a re-education camp. No-one knows how many people are locked up as a result of this dragnet.
Palantir’s technology has been deployed in Los Angeles to, as Lana puts it “put black people in jail”, allegedly before they commit crimes.
Who finds the bugs? We do!
So what can we do about this? All of us, in our day jobs, work on things as though they were isolated from a broader context. We should change that. We should be asking what could go wrong; we should be asking how our work could be misused by a malicious actor; we should think about how things which are benign in isolation - facial recognition, say - can become terrible in combination with other services. And we should push for meaningful enforcement for reckless or deliberate behaviour, and for better guidelines and regulation.
- The GPL’s fourth freedom is to make software usable for any purpose. Should we rethink that? Open is better than not, but yes, think about this on a case by case basis.
- Do you know of any companies who have ethics guidance or boards? Google and Facebook are working on something, but what matters is enforceable accountability.
- What about the AA Bill forcing people to do unethical and immoral things? Yeah, that’s a big step back.
- What do we do when the government is the problem? Vote in a better government.
- Our company has put technical limitations in place to make it more difficult to do unethical things? That’s good! But we need more meaningful oversight, not just individual efforts.
1space: Sharing data across clouds
John Dickinson @notmyname
The problem: we’ve told people to consolidate and use cloud infrastructure, and they have. The problem is that they’ve ended up in a handful of silos. And it’s naive to assume that we can readily compete with companies dumping billions per quarter into their infrastructure and development.
Even if you have a good in-house setup there are cases where it would be silly not to use public cloud compute, but you may be using data that you need to retain control of. Another use case is people who would like to migrate off a particular cloud, or even all clouds, for cost reasons. But your apps need to keep running while you do that; you can’t switch things off as you move in and out of their clouds, especially if you have large amounts of data.
1space is designed to run on top of the OpenStack Swift object store. From there, it needs to understand the low-level constructs - APIs, authz/n systems, and so on - as well as apply policies: when can I transfer? Do I want to transform data? Do I want to move or fence based on data or metadata? Do deletes propagate or not?
Inside 1space you have a JSON file with a source, a destination, and the policies in one place.
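I didn’t capture the exact schema, but the mapping file presumably looks something like this (field names here are illustrative guesses, not the real 1space syntax):

```json
{
  "source": { "account": "AUTH_dev", "container": "images" },
  "destination": { "provider": "s3", "bucket": "images-archive" },
  "policy": {
    "move": false,
    "propagate_deletes": true
  }
}
```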
A key element of the design is to automagically make items well-formed for the place they reside. If an object is in Swift, it needs to be formed like a native Swift object; when it is moved to S3 it must be reshaped into an S3 object, rather than putting a Swift-shaped object into S3. This is added as middleware - an invisible inline component that forms part of the request handling that the Swift API performs. This proxy also handles providing namespace unification.
This would all require everything in the cloud to call back to the Swift environment, which is not necessarily ideal. Instead, there is a Cloud Connector, a component which provides a stripped-down S3 API in the cloud and understands the namespace mappings; calls to the Connector will be redirected to S3, or Swift, or another cloud as necessary.
The code is open, and John would love to have help with:
- NAS connector.
- Azure support.
Maintaining the Unmaintainable
Picking Up the Baton of a Secure Kernel Patchset
Note that this talk reflects the state of the Dapper patchset. grsec has been around for some time, and introduced many great security features. grsec closed their code, leaving Matthew stranded - his distro relied on grsec to provide a more secure kernel. So he either had to abandon his distro, or become a kernel developer; fortunately he chose the latter.
The patchset has a reputation for being fairly scary: the last grsec release that provided the base for a fork is a monolith 10.3 MB in size, with over 300,000 lines of changes. Moreover, the documentation is not very helpful: most of the PaX Team documentation is very dated; there is a mirror of the commit messages by Minipli; otherwise it’s Kconfig entries and what you can find on the internet. The maintainability is terrible; there are few, if any, comments.
On top of all that the patchset tends to break userland!
Note that many of these features have a cost, either in terms of breaking userspace, or performance, or both.
- GCC plugins: the most portable and easy to maintain.
- GRKERNSEC_RANDSTRUCT randomises the layout of structs, but is only really effective for self-built kernels (distro kernels have well-known seeds) and was upstreamed in 4.13.
- PAX_MEMORY_STACKLEAK scrubs the stack before returns. This was upstreamed in 4.20.
- PAX_MEMORY_STRUCTLEAK is another stack cleanup.
- PAX_SIZE_OVERFLOW reports overflows and underflows in ints. This logs to dmesg and kills the guilty process.
- PAX_RAP prevents ROP and JOP attacks, implementing control flow integrity (noting breaches of the expected execution flow).
- Kernel enhancements:
- PAX_MPROTECT modifies mprotect() to disable dangerous functionality, stopping memory being remarked in ways that make things less safe. Unfortunately this breaks userspace - anything with a JIT, like your browser.
- PAX_W^X: Implements W^X
- PAX_MEMORY_UDEREF: Prevents the kernel dereferencing userspace pointers when it’s expecting kernel pointers.
- PAX_ASLR: This is mostly upstreamed, including KASLR (kernel randomisation).
- PAX_MEMORY_SANITIZE: erases memory pages and slab objects to reduce the risk of information leakage.
- PAX_REFCOUNT: detects and prevents ref counting code from overflowing. This was upstreamed in 4.11, 4.13, 4.15.
- PAX_USERCOPY: Makes the kernel enforce sizes in object copies between kernel and user space.
- chroot() modifications that make chroot() act more like a BSD jail; note these can’t be applied selectively.
Feeling Left High and Dry
- Matthew didn’t want to lose the features, and thought the patchset was pretty cool.
- He decided to have a go at kernel development, treating it as an opportunity.
- So he decided to try moving the patchset to a newer kernel.
- He split the monolith into individual files.
- Started naive attempts to automate the port using wiggle.
- Unfortunately wiggle’s fuzzy placement wasn’t accurate enough.
- Abandoned the first effort.
- Managed to get a compile, but no boot with a 4.11 kernel.
- When 4.12 was released, he started on the new version, but it was incomplete by the time 4.13 was released. On top of that there were major changes to the MM subsystem, requiring even more rewrites.
- With 4.13 the merge was complete, but it didn’t compile… and then 4.14 was released.
- 4.14 would be an LTS kernel. Just get there and let things settle.
- 4.14 was an incomplete port.
- “I was kind of forced to rethink my strategy at this point.” The kernel moves too quickly to rely on automated tools.
- Forward porting is a very hard task.
- New versions were released too quickly.
- Decided to sit on 4.9, which was an LTS, and maintain the minor releases.
This strategy was much more manageable, but still a surprising amount of work, with point releases arriving at a twice-weekly cadence. This kept going for about a year, but at that point the patches were starting to get larger and larger.
And then Fedora changed to gcc 8.
And then Meltdown and Spectre really, really threw a spanner in the works:
- KPTI and UDEREF are fundamentally incompatible.
- At this point, it required cherry-picking patches, and reverting the Spectre and Meltdown fixes.
- Most users, asked to choose between the hardening patchset and the Meltdown/Spectre fixes, abandoned the patchset.
One by one, the grsec maintainers began to flag. Matthew was the last person standing, releasing in October 2018. He could continue, but it’s a huge amount of work for fewer and fewer users. At the same time, the community is upstreaming features one by one.
- Forks can go away at any time.
- You have it within you to maintain your favourite projects for a short while.
- Upstreaming will always have a larger impact and lifespan than forks.
- Maintaining this patchset has taught him so much about how the kernel works.
And that, folks, is it for another year. I learned a lot, I caught up with conference buddies and friends, I networked. LCA is a gem of a conference, and I am rarely anything other than delighted by it. Congratulations to the team for this year, and I hope to be back for next year’s effort on the Gold Coast.