LCA 2019 Day 2

The usual pleas for huddling are aired to a room full of people who can’t bear to sit too close to one another, apparently. We open with reminders of the usual business - emergency numbers, evacuation procedure, code of conduct, etc.

Today’s door prize winner is here!

It’s a sad thing that OS shitlording has to be called out.

Keynote: Beyond README.md

Rory Aronson

Rory kicks off his talk with his experiences in gardening, both personal (he doesn’t care enough not to end up with wilderness) and industrial (the LettuceBot, which weeds at an industrial scale). Where, Rory wonders, is the LettuceBot for his back yard that will give him the garden of his dreams, automagically?

Well, it didn’t exist. Rory kicked off a project to repurpose open source 3D printer technology into a personal-scale device - the FarmBot. He started out with documentation: specs, bill of materials, you name it. He open sourced the documentation, hoping that this would encourage other people to help him produce the FarmBot; Rory is a chemical engineer, so he wanted programmers, designers, and so on.

People started flocking to the idea; they read the white paper and found things to do. Tim, the project co-founder, had a background in firmware development and built the first prototype. From there, things began to pick up: a grant from the Shuttleworth Foundation helped underwrite the project, and Rick joined to develop an interface for the FarmBot. By 2016 they were growing home-scale gardens with the help of the FarmBot. By then, lots of people wanted a FarmBot but didn’t want to build one from scratch, so they started to look at how to productise it.

It was a wild success: $800,000 worth of fundraising. Which brought a problem of success: they had built ten units to date and now had to build thousands. They’re now up to version 1.4 of the FarmBot, and they’ve tracked that evolution in their whitepaper and subsequent documentation.

  • They maintain a changelog for both hardware and software.
  • For open source hardware, assembly instructions are as critical as they are for traditional hardware; on top of that, they release all the associated CAD files so you can manufacture your own. You can use the web-based CAD to inspect the parts and export them, including STLs for printing.
  • They keep a current bill of materials so you can buy the parts yourself if you want to.
  • Everything comes with photos and videos to make it easier to understand what you’re doing.
  • They clearly delineate well-understood, implemented things from inspiration and ideas.
  • Everything is versioned.

This makes it easy for people to build on the base project; students in particular have modified and extended FarmBots to build on and experiment with them.

Rory works through some other areas:

  • Codes of conduct. How do you select people to manage a CoC? How do you do incident review?
  • Open Source Business: if you have a business, how do you open source it? Not how do you do open source, but how do you open source your business model. Competitors should be collaborators; how can we learn from one another? To this end, Rory has released a documentation area for how FarmBot, Inc works: how taxes have to be filed, how parts are sourced, how margins work, the compensation formula, replacement parts and returns, and so on. It’s also great for onboarding people!
    • Buffer has been an inspiration for this. They even have public Trello boards for their roadmaps!
    • They’ve had great feedback from this, ranging from startups to exec directors at research institutes.

When we open source things, we empower people: to make software and hardware, and to build businesses and communities.

Design for Security

Serena Chen @sereena

“There is a pervasive myth in our industry, that Usability and Security are mutually exclusive.” In fact, good security and good experience design cannot exist without each other. Everyone deserves to be secure without being an expert, but our current way of working expects people to become security experts, when in truth no-one really cares about security. “Given a choice between dancing pigs and security, the user will pick dancing pigs every time.” And we shame people for this; security has a pervasive culture of shame. We laugh at people for getting scammed, or for not being able to use unusable software.

But shaming people is lazy.

People want to complete tasks, and we have failed to provide an easy, secure way of doing so. And that’s on us, not them. People shouldn’t have to care about security; it is our job to care about building secure, accessible, performant software. Compare PGP to a YubiKey or Signal. This is where design thinking should take us.

Paths of Least Resistance

In security we tend to put up lots of walls: don’t do the thing! Don’t use the things you need to do your job! Often this is because security isn’t included until the end. But rather than putting up walls, we should carve rivers. The trivial path - the one that happens if you don’t do anything - is the one we should strive for. Sensible defaults! Normalise security! Don’t give people extra homework.

  • If you need a phone number to verify an account, ask up front, don’t bury it as an extra task!
  • Group similar tasks together; put all setup tasks together rather than bundling them with other tasks.
  • Bear in mind that people are efficient.
  • Align our goals with the end user’s goals.
    • Friction doesn’t work: people are focused on getting shit done.
    • Every false alarm reduces the credibility of warnings as a whole.
    • This is proven by Science! (MRI machines!) People literally stop visually processing warnings after repetition.
  • Shadow IT is the IT people actually use when the sanctioned option makes it too hard to do their job.
    • Password rotation is bypassed with sticky notes or incrementing numbers or re-use.

So how do we fix bad paths?

  • Use security tools for security concerns.
    • Don’t mis-use security to stop other things.
    • People over-using YouTube is not a security problem, it’s a people management problem.
    • Treating this as a security problem undermines security credibility.
  • Serena is a fan of the BeyondCorp model, because it puts the onus on the security and systems teams to make things work and make them easy, not on users to understand the security model.
    • It also forces you to think about what you actually expect people to be doing.
    • There was a strong focus on making the security tools invisible to the end user for the majority of cases.

Intent

We forget about intent all the time; in fact, it’s at the heart of the tension between usability and security. Because intuiting intent is hard, we tend to fall back on patterns. “As a designer I want to make everything easy! As a security person I make things hard so they’re secure!”

Our job is to make a specific action, at a specific time, in a specific place, easy. Everything else we can lock down.

Knowing intent = usability and security as an outcome.

Ask yourself: if we don’t understand intent, why not? What are our gaps in knowledge about identity, location, norms? This doesn’t need to be hard, either.

(Mis)communication

Miscommunication is a human security vulnerability. This is how social engineers own you. Think about the green padlock: what does it actually mean? HTTPS? A valid CA? That the domain is what the URL bar says? But does it mean the site is secure and safe?

No.

But that’s what most people assume, because that’s what we tell people. It’s easy to create a phishing site because we’ve trained people to assume that the green padlock means “safe”. Even if it’s a fake site with a fake URL etc.

Mental Model Matching

This is the most important thing of all: your user’s mental model and expectations are what define whether you will be secure. Not your stack. “A system is secure from a given user’s perspective if the set of actions are bounded by what the user believes they can do” - Ka-Ping Yee.

We need to find the user’s mental model and match the system to that. The best way of doing that is to observe what users believe and do, and do so without judging. If you work with UX people, for example, you could sit in on user interviews and usability sessions. You will be surprised to find your assumptions destroyed. If you can’t get to these sorts of sessions, observe friends and family to see how they interact with tech. And you can infer through context.

We can also influence people’s models. When we make a thing, we teach - whether we mean to or not. So think about how you are already influencing your user’s mental model of how things work; consider, for example, that for many years iTunes would randomly pop up and ask you to enter your credentials, which trained people to assume they should always be entering their iTunes password whenever their iOS device asks. Is that the lesson we want them to learn?

Are we teaching our end users to click through warnings? Are we training them to consider security a nuisance?

Summary

  • Cross pollination is rare. This is a huge missed opportunity for designers and security people!
  • Our jobs are based on specific outcomes.
  • Always align your goals with the user’s goals.

Kernel Security is Cool Again

Casey Schaufler

Casey has been doing kernel development for 40 years and security for 30 of that. Security goes from being cool to uncool in cycles; at the moment, security is cool again. Why? Well, Spectre and Meltdown. While the class of bugs that led to these was well understood in academia, this was the first time practical exploits were published. Around the same time grsecurity went dark. All this has happened concurrently with the rise in popularity of containers, which have required kernel enhancements.

Unfortunately security models from the 1970s and 1980s don’t really work well with how we use computers today. We have a lot of challenges.

Let’s start with the question of hardware: what is to be done with it? Hardware is made by people who are really, really, really clever, and the people who use the hardware don’t want to have to be clever about it. And there are multiple architectures to deal with. Unfortunately we decide bugs are hard, so we add hardware features: encryption, memory protection, isolation. It would be fair to suggest Casey has some skepticism about the value of some of these proposals. Per Serena’s talk, the model hardware developers have of what software developers want may be mismatched.

We also need to address the problem of the boot process: how do we know we’re running what we think we’re running. Trust and attestation are hard problems.

Kernel hardening: what is it? Well, it’s a response to 50 years of bad coding. Think of it this way: there are 24 million lines of code in the kernel, so that’s 24 one-in-a-million errors. We still use the same unsafe programming languages and library calls: unsafe string handling and so on. But we have some new techniques to help with this; scanning tools and compiler protections help us make better code and save us from ourselves.
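
As a toy illustration (mine, not Casey’s) of the unsafe string handling in question, and of the bounded alternative - inside the kernel the equivalent fix is strscpy() - here’s a short userspace sketch. Building with hardening flags such as -D_FORTIFY_SOURCE=2 and -fstack-protector-strong lets the compiler catch some of what review misses.

```c
#include <stdio.h>
#include <string.h>

/* The classic one-in-a-million bug: strcpy() writes past the end of a
 * fixed buffer whenever the source string is too long. */
void set_name_unsafe(char *dst, const char *src) {
    strcpy(dst, src); /* no bound: overflows dst on long input */
}

/* The bounded version truncates instead of overflowing, and always
 * NUL-terminates. (In the kernel you would reach for strscpy().) */
void set_name_safe(char *dst, size_t dstlen, const char *src) {
    snprintf(dst, dstlen, "%s", src);
}

int main(void) {
    char name[8];
    set_name_safe(name, sizeof name, "much-too-long-for-eight-bytes");
    printf("%s\n", name); /* prints the truncated, safe result */
    return 0;
}
```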

Containers: they are not in and of themselves a security or kernel thing. Namespaces are, though. LSM namespaces are coming.

New security models:

  • Landlock. You can put a BPF program on every system call. This is a form of self-control: use BPF to describe expected behaviour and kill out-of-spec behaviour. (A sketch of the general idea follows this list.)
  • SARA: Memory protections with a plug-in architecture so you can e.g. filter USB behaviour.
  • WhiteEgret: Execution whitelisting, so only the code you expect to find on the system can be run.
  • Security module stacking: allows you to combine the modules as you see fit, and have them play together nicely. You can keep SELinux and then add new tools.
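
Landlock’s own interface was still being reworked upstream at the time, so as a hedged sketch of the same “describe expected behaviour, kill everything out of spec” idea, here is the long-established seccomp-BPF mechanism doing that kind of self-restriction: a classic BPF filter that allows a small whitelist of system calls and kills the process on anything else.

```c
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void) {
    struct sock_filter filter[] = {
        /* Load the system call number. (A production filter must also
         * check seccomp_data.arch before trusting the number.) */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* Expected behaviour: write(2) and exit_group(2) only. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        /* Anything out of spec kills the process. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    write(1, "still within spec\n", 18); /* allowed */
    return 0; /* exits via exit_group(2), also allowed */
}
```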

So, do you want to join the cool kids? Well, to join the fun, you could find Kees Cook and ask how to help. (I have Feelings about the value of a project that whines that no-one wants to run upstream kernels.)

What are the challenges for the future?

  • Connected devices: no users in the traditional sense. And many are smarter than they need to be.
  • Special purpose processors: everything old is new again! What are the implications of sharing these devices on multi-user systems? How do you trust them? How do you know what they’re doing? GPU, crypto processors and so on.
  • There will be bugs. New bugs will happen. How do we find them and fix them?

Q&A

  • How long will it be cool for? Generally we can count on 18 months; we’re about 6 months in. And then there will be a two to four year hiatus. “This is a very strong cool period.”
  • When different security policies conflict, how do you make that work? Unfortunately you have a wonderful situation where you have a brick. You can really only soften one of your approaches; your system needs to be usable.
  • How do we get this from the latest kernels back to older kernels? You have to backport. That’s really the only choice if it’s less painful than trying to get old devices supported by newer kernels. Unfortunately there are lots of devices with 2.6 or 3.18 kernels.

Stories as Systems

Mark Rickerby

We are drawn to stories - and one of the first things we do with every medium is turn to storytelling, sometimes before people working on the technology even understand what’s happening.

The paradox of storytelling: it is a structured activity, but it resists being put into a system, even though many people have tried. There isn’t a unified theory of narrative that can explain how stories work: while we have the idea of the monomyth from Campbell, it doesn’t hold much traction amongst (for example) anthropologists; the idea that every story is one of the hero’s journey doesn’t really hold up against what exists in the world. Even amongst aficionados of Campbell’s idea, there is little agreement on the structure of the hero’s journey.

Another way of looking at it comes from Aristotle: the three act structure. Freytag’s pyramid is another. And there are so many interpretations of each one, which gets us back to the problem that we can’t describe stories systematically, even though there are clearly structures.

Mark is interested in computational narratives to understand and explore stories in gameplay; the current context is:

  • Open world games.
  • Interactive fictions.
  • Education and training.
  • Experimental novels.

In these environments we want to be able to present a coherent, consistent story, filled with characters who act with understandable motives and personality, while still providing a compelling and entertaining experience.

Mark recommends Hamlet on the Holodeck by Janet H. Murray; while dated, it is a foundational work for a lot of current thinking.

One problem with the goals above is that an immersive experience at scale is simply too difficult to implement: authoring a large story is expensive and time-consuming, and allowing for all the paths created by player interaction is essentially impossible. And much of the effort will be wasted, since players will never see most of it. So generative fiction may be a way of solving the problem.

Plots have a logic; cause and effect informs our understanding of a story. Interestingly enough, planning algorithms developed in the field of symbolic AI in the 70s map well onto the idea of a plot: you have an end goal and a set of steps, the AI can solve the problem, and the solutions will map out all the possible paths through the story that can be considered valid.
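
A minimal sketch of that mapping, with every fact, action, and name here invented for illustration: represent the story world as a bitmask of facts, give each action STRIPS-style preconditions and effects, and let a depth-first search recover a causally valid sequence of plot events.

```c
#include <stdio.h>

/* Story facts as bits in the world state. */
enum { HAS_SWORD = 1u << 0, DRAGON_DEAD = 1u << 1, HAS_TREASURE = 1u << 2 };

/* A STRIPS-style action: preconditions, facts added, facts deleted. */
typedef struct { const char *name; unsigned pre, add, del; } Action;

static const Action actions[] = {
    { "hero takes the sword",  0,           HAS_SWORD,    0 },
    { "hero slays the dragon", HAS_SWORD,   DRAGON_DEAD,  0 },
    { "hero loots the hoard",  DRAGON_DEAD, HAS_TREASURE, 0 },
};

#define NACTIONS (sizeof actions / sizeof actions[0])
#define MAX_STEPS 8

static const Action *steps[MAX_STEPS];

/* Depth-first search over action sequences; returns plan length or -1. */
static int plan(unsigned state, unsigned goal, int depth) {
    if ((state & goal) == goal) return depth;
    if (depth == MAX_STEPS) return -1;
    for (size_t i = 0; i < NACTIONS; i++) {
        const Action *a = &actions[i];
        if ((state & a->pre) != a->pre) continue;   /* preconditions unmet */
        unsigned next = (state | a->add) & ~a->del; /* apply effects */
        if (next == state) continue;                /* skip no-op steps */
        steps[depth] = a;
        int len = plan(next, goal, depth + 1);
        if (len >= 0) return len;
    }
    return -1;
}

int main(void) {
    int len = plan(0, HAS_TREASURE, 0); /* goal: end with the treasure */
    if (len < 0) { puts("no valid plot found"); return 1; }
    for (int i = 0; i < len; i++)
        printf("%d. %s\n", i + 1, steps[i]->name);
    return 0;
}
```

Enumerating every successful search path, rather than stopping at the first, is exactly the “all the valid paths through the story” property described above.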

This doesn’t necessarily make for an interesting or satisfying story, however; we are accustomed to finding certain elements in stories - our storytelling affordances might prefer paths with conflict, spectacle, or which trace a dramatic arc, unless we are intending to create something experimental and at odds with our expectations. This takes us into refinements and extensions of the algorithms: for example, the Intentional Partial Order Causal Link planner (described by Riedl and Young), or the Conflict Partial Order Causal Link planner (Ware, Harrison, Roberts, and Young), both of which re-weight generation in this direction. Yet another generative approach is to treat stories as a proof.

There are a lot of tools in these areas, which tackle different parts of these problems:

  • Ink provides a Markdown-inspired system for building and testing stories, with links into other game building tools.
  • ChoiceScript provides another mechanism for multi-choice games; it supports the idea of stats and can use a browser as an IDE.
  • Ren’Py is designed for authoring visual novels.
  • Twine - build choose-your-own-adventures.
  • Yarn and Yarn Spinner - adds the ability to build more complex logic and graphics.

How to avoid meetings

maia sauren

Stereotypes tell a story, but that story is neither necessarily true nor complete. Cultures are not homogenous, and people can have friction not because a culture is wrong, but because it’s different - ask cultures vs guess cultures, for example.

Micro-cultures are another division: you perhaps all went to the same school, or are part of a team, or old friends. These micro-cultures can be reinforced by behaviours, but can also be exclusive.

“Whose governance model wins?” is a key question; these choices create barriers - and that can include the tools used. Someone will be impeded. That’s not always bad - maia gives the example of working on an application that was part of the infrastructure for responding to an Ebola outbreak, where the imperative was “what do the doctors want?”.

It’s harder to change a relationship over the phone than to maintain one. But even maintenance is hard! A code of conduct is less fragile, and the right time to set one up is before you need it. “How do you want to have arguments?” is a shocking question the first time someone asks it - but it’s important. Do you want to argue in person? On the phone? With an audience? In private? In email?

You need to model the behaviour you want. If you don’t normalise conflict resolution, you normalise unresolved conflict. And there’s no such thing as a small problem over time. Micro-adult! Have small, awkward conversations sooner rather than later. “Adulting: it’s for people who don’t want to cry even more later.”

Want to avoid big crap meetings? Have more small meetings. It’s the only way to do it! Have pre-meeting meetings! Have regular catch-ups; maia likes to touch base and be told what you’ve done that you’re proud of; if you’ve got an answer, you get a sticker.

You need to accept that change is conflict; conflict is not necessarily bad, but it needs resolution, so you need to learn conflict resolution 101. The problem is that conflict resolution 101 varies by culture and micro-culture. Some people don’t want “I” statements that leave them to infer what they should do; sometimes they want a direct instruction. “The hard part is knowing when to use which.”

We’ve heard of unconscious bias - but it’s not always bad. You can use unconscious bias to encourage people to do the thing you need them to do; “I use my male friends’ unconscious bias to get them to come to feminist talks with me.”

Consider just not doing your job! Take a holiday and see who naturally picks up the things that need to be done! Those people have learned and grown!

“There is no such thing as diversity overload, there is only filter failure.”
